I1227 10:47:26.145552 8 e2e.go:224] Starting e2e run "44e007f2-2896-11ea-bad5-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1577443645 - Will randomize all specs
Will run 201 of 2164 specs

Dec 27 10:47:26.451: INFO: >>> kubeConfig: /root/.kube/config
Dec 27 10:47:26.454: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 27 10:47:26.486: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 27 10:47:26.561: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 27 10:47:26.561: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 27 10:47:26.561: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 27 10:47:26.590: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 27 10:47:26.590: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 27 10:47:26.590: INFO: e2e test version: v1.13.12
Dec 27 10:47:26.593: INFO: kube-apiserver version: v1.13.8
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 10:47:26.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Dec 27 10:47:26.784: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 27 10:47:37.428: INFO: Successfully updated pod "labelsupdate45b43de8-2896-11ea-bad5-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 10:47:39.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vwxkj" for this suite.
Dec 27 10:48:03.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 10:48:03.690: INFO: namespace: e2e-tests-projected-vwxkj, resource: bindings, ignored listing per whitelist
Dec 27 10:48:03.792: INFO: namespace e2e-tests-projected-vwxkj deletion completed in 24.24594769s

• [SLOW TEST:37.199 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose
  should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 10:48:03.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Dec 27 10:48:04.212: INFO: namespace e2e-tests-kubectl-95vhk
Dec 27 10:48:04.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-95vhk'
Dec 27 10:48:06.579: INFO: stderr: ""
Dec 27 10:48:06.579: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 27 10:48:07.602: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 10:48:07.602: INFO: Found 0 / 1
Dec 27 10:48:08.778: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 10:48:08.778: INFO: Found 0 / 1
Dec 27 10:48:09.595: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 10:48:09.595: INFO: Found 0 / 1
Dec 27 10:48:10.614: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 10:48:10.614: INFO: Found 0 / 1
Dec 27 10:48:11.599: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 10:48:11.599: INFO: Found 0 / 1
Dec 27 10:48:12.622: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 10:48:12.622: INFO: Found 0 / 1
Dec 27 10:48:13.847: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 10:48:13.847: INFO: Found 0 / 1
Dec 27 10:48:14.644: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 10:48:14.644: INFO: Found 0 / 1
Dec 27 10:48:15.588: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 10:48:15.588: INFO: Found 0 / 1
Dec 27 10:48:16.642: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 10:48:16.642: INFO: Found 0 / 1
Dec 27 10:48:17.597: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 10:48:17.598: INFO: Found 1 / 1
Dec 27 10:48:17.598: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Dec 27 10:48:17.608: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 10:48:17.608: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Dec 27 10:48:17.608: INFO: wait on redis-master startup in e2e-tests-kubectl-95vhk
Dec 27 10:48:17.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zgh42 redis-master --namespace=e2e-tests-kubectl-95vhk'
Dec 27 10:48:17.865: INFO: stderr: ""
Dec 27 10:48:17.865: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 27 Dec 10:48:15.402 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 27 Dec 10:48:15.402 # Server started, Redis version 3.2.12\n1:M 27 Dec 10:48:15.402 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 27 Dec 10:48:15.402 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Dec 27 10:48:17.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-95vhk'
Dec 27 10:48:18.094: INFO: stderr: ""
Dec 27 10:48:18.094: INFO: stdout: "service/rm2 exposed\n"
Dec 27 10:48:18.164: INFO: Service rm2 in namespace e2e-tests-kubectl-95vhk found.
STEP: exposing service
Dec 27 10:48:20.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-95vhk'
Dec 27 10:48:20.505: INFO: stderr: ""
Dec 27 10:48:20.505: INFO: stdout: "service/rm3 exposed\n"
Dec 27 10:48:20.651: INFO: Service rm3 in namespace e2e-tests-kubectl-95vhk found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 10:48:22.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-95vhk" for this suite.
Dec 27 10:48:46.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 10:48:46.967: INFO: namespace: e2e-tests-kubectl-95vhk, resource: bindings, ignored listing per whitelist
Dec 27 10:48:47.011: INFO: namespace e2e-tests-kubectl-95vhk deletion completed in 24.334678469s

• [SLOW TEST:43.218 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 10:48:47.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1227 10:48:57.633979 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 27 10:48:57.634: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 10:48:57.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-89m2r" for this suite.
Dec 27 10:49:04.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 10:49:04.290: INFO: namespace: e2e-tests-gc-89m2r, resource: bindings, ignored listing per whitelist
Dec 27 10:49:04.339: INFO: namespace e2e-tests-gc-89m2r deletion completed in 6.70080097s

• [SLOW TEST:17.328 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 10:49:04.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-800cfdf0-2896-11ea-bad5-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 27 10:49:04.828: INFO: Waiting up to 5m0s for pod "pod-secrets-80200645-2896-11ea-bad5-0242ac110005" in namespace "e2e-tests-secrets-trpt7" to be "success or failure"
Dec 27 10:49:04.838: INFO: Pod "pod-secrets-80200645-2896-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.693737ms
Dec 27 10:49:07.205: INFO: Pod "pod-secrets-80200645-2896-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.376925778s
Dec 27 10:49:09.217: INFO: Pod "pod-secrets-80200645-2896-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.388773203s
Dec 27 10:49:11.228: INFO: Pod "pod-secrets-80200645-2896-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.400460936s
Dec 27 10:49:13.369: INFO: Pod "pod-secrets-80200645-2896-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.541347796s
Dec 27 10:49:15.382: INFO: Pod "pod-secrets-80200645-2896-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.554465306s
STEP: Saw pod success
Dec 27 10:49:15.382: INFO: Pod "pod-secrets-80200645-2896-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 10:49:15.386: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-80200645-2896-11ea-bad5-0242ac110005 container secret-volume-test:
STEP: delete the pod
Dec 27 10:49:16.089: INFO: Waiting for pod pod-secrets-80200645-2896-11ea-bad5-0242ac110005 to disappear
Dec 27 10:49:16.125: INFO: Pod pod-secrets-80200645-2896-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 10:49:16.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-trpt7" for this suite.
Dec 27 10:49:22.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 10:49:22.309: INFO: namespace: e2e-tests-secrets-trpt7, resource: bindings, ignored listing per whitelist
Dec 27 10:49:22.450: INFO: namespace e2e-tests-secrets-trpt7 deletion completed in 6.308552284s

• [SLOW TEST:18.110 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
  should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 10:49:22.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Dec 27 10:49:22.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-c8fdz'
Dec 27 10:49:23.013: INFO: stderr: ""
Dec 27 10:49:23.013: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 27 10:49:23.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-c8fdz'
Dec 27 10:49:23.310: INFO: stderr: ""
Dec 27 10:49:23.310: INFO: stdout: "update-demo-nautilus-7bm9x update-demo-nautilus-wxgrf "
Dec 27 10:49:23.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7bm9x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c8fdz'
Dec 27 10:49:23.446: INFO: stderr: ""
Dec 27 10:49:23.446: INFO: stdout: ""
Dec 27 10:49:23.446: INFO: update-demo-nautilus-7bm9x is created but not running
Dec 27 10:49:28.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-c8fdz'
Dec 27 10:49:28.635: INFO: stderr: ""
Dec 27 10:49:28.636: INFO: stdout: "update-demo-nautilus-7bm9x update-demo-nautilus-wxgrf "
Dec 27 10:49:28.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7bm9x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c8fdz'
Dec 27 10:49:28.971: INFO: stderr: ""
Dec 27 10:49:28.972: INFO: stdout: ""
Dec 27 10:49:28.972: INFO: update-demo-nautilus-7bm9x is created but not running
Dec 27 10:49:33.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-c8fdz'
Dec 27 10:49:34.263: INFO: stderr: ""
Dec 27 10:49:34.263: INFO: stdout: "update-demo-nautilus-7bm9x update-demo-nautilus-wxgrf "
Dec 27 10:49:34.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7bm9x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c8fdz'
Dec 27 10:49:34.427: INFO: stderr: ""
Dec 27 10:49:34.427: INFO: stdout: "true"
Dec 27 10:49:34.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7bm9x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c8fdz'
Dec 27 10:49:34.601: INFO: stderr: ""
Dec 27 10:49:34.601: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 27 10:49:34.601: INFO: validating pod update-demo-nautilus-7bm9x
Dec 27 10:49:34.707: INFO: got data: {
  "image": "nautilus.jpg"
}
Dec 27 10:49:34.707: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 27 10:49:34.707: INFO: update-demo-nautilus-7bm9x is verified up and running
Dec 27 10:49:34.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wxgrf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c8fdz'
Dec 27 10:49:34.807: INFO: stderr: ""
Dec 27 10:49:34.807: INFO: stdout: "true"
Dec 27 10:49:34.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wxgrf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c8fdz'
Dec 27 10:49:34.913: INFO: stderr: ""
Dec 27 10:49:34.913: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 27 10:49:34.913: INFO: validating pod update-demo-nautilus-wxgrf
Dec 27 10:49:34.933: INFO: got data: {
  "image": "nautilus.jpg"
}
Dec 27 10:49:34.933: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 27 10:49:34.933: INFO: update-demo-nautilus-wxgrf is verified up and running
STEP: rolling-update to new replication controller
Dec 27 10:49:34.951: INFO: scanned /root for discovery docs:
Dec 27 10:49:34.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-c8fdz'
Dec 27 10:50:07.159: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 27 10:50:07.159: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 27 10:50:07.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-c8fdz'
Dec 27 10:50:07.304: INFO: stderr: ""
Dec 27 10:50:07.304: INFO: stdout: "update-demo-kitten-hh6kv update-demo-kitten-sxn2g update-demo-nautilus-7bm9x "
STEP: Replicas for name=update-demo: expected=2 actual=3
Dec 27 10:50:12.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-c8fdz'
Dec 27 10:50:12.577: INFO: stderr: ""
Dec 27 10:50:12.577: INFO: stdout: "update-demo-kitten-hh6kv update-demo-kitten-sxn2g update-demo-nautilus-7bm9x "
STEP: Replicas for name=update-demo: expected=2 actual=3
Dec 27 10:50:17.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-c8fdz'
Dec 27 10:50:17.769: INFO: stderr: ""
Dec 27 10:50:17.769: INFO: stdout: "update-demo-kitten-hh6kv update-demo-kitten-sxn2g "
Dec 27 10:50:17.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hh6kv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c8fdz'
Dec 27 10:50:17.900: INFO: stderr: ""
Dec 27 10:50:17.900: INFO: stdout: "true"
Dec 27 10:50:17.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hh6kv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c8fdz'
Dec 27 10:50:18.055: INFO: stderr: ""
Dec 27 10:50:18.055: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 27 10:50:18.055: INFO: validating pod update-demo-kitten-hh6kv
Dec 27 10:50:18.099: INFO: got data: {
  "image": "kitten.jpg"
}
Dec 27 10:50:18.099: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 27 10:50:18.099: INFO: update-demo-kitten-hh6kv is verified up and running
Dec 27 10:50:18.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-sxn2g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c8fdz'
Dec 27 10:50:18.210: INFO: stderr: ""
Dec 27 10:50:18.210: INFO: stdout: "true"
Dec 27 10:50:18.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-sxn2g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c8fdz'
Dec 27 10:50:18.315: INFO: stderr: ""
Dec 27 10:50:18.316: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 27 10:50:18.316: INFO: validating pod update-demo-kitten-sxn2g
Dec 27 10:50:18.325: INFO: got data: {
  "image": "kitten.jpg"
}
Dec 27 10:50:18.325: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 27 10:50:18.325: INFO: update-demo-kitten-sxn2g is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 10:50:18.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-c8fdz" for this suite.
Dec 27 10:50:44.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 10:50:44.451: INFO: namespace: e2e-tests-kubectl-c8fdz, resource: bindings, ignored listing per whitelist
Dec 27 10:50:44.598: INFO: namespace e2e-tests-kubectl-c8fdz deletion completed in 26.267783452s

• [SLOW TEST:82.148 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 10:50:44.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 27 10:50:44.844: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bbbfb813-2896-11ea-bad5-0242ac110005" in namespace "e2e-tests-projected-dh58z" to be "success or failure"
Dec 27 10:50:44.906: INFO: Pod "downwardapi-volume-bbbfb813-2896-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 61.279303ms
Dec 27 10:50:47.172: INFO: Pod "downwardapi-volume-bbbfb813-2896-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.327341216s
Dec 27 10:50:49.282: INFO: Pod "downwardapi-volume-bbbfb813-2896-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.437541248s
Dec 27 10:50:51.469: INFO: Pod "downwardapi-volume-bbbfb813-2896-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.624154429s
Dec 27 10:50:53.490: INFO: Pod "downwardapi-volume-bbbfb813-2896-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.645290445s
Dec 27 10:50:55.501: INFO: Pod "downwardapi-volume-bbbfb813-2896-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.656272901s
STEP: Saw pod success
Dec 27 10:50:55.501: INFO: Pod "downwardapi-volume-bbbfb813-2896-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 10:50:55.504: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-bbbfb813-2896-11ea-bad5-0242ac110005 container client-container:
STEP: delete the pod
Dec 27 10:50:55.550: INFO: Waiting for pod downwardapi-volume-bbbfb813-2896-11ea-bad5-0242ac110005 to disappear
Dec 27 10:50:55.570: INFO: Pod downwardapi-volume-bbbfb813-2896-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 10:50:55.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dh58z" for this suite.
Dec 27 10:51:01.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 10:51:01.922: INFO: namespace: e2e-tests-projected-dh58z, resource: bindings, ignored listing per whitelist
Dec 27 10:51:02.040: INFO: namespace e2e-tests-projected-dh58z deletion completed in 6.359919894s

• [SLOW TEST:17.443 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 10:51:02.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 27 10:51:02.352: INFO: Waiting up to 5m0s for pod "pod-c61b2ec4-2896-11ea-bad5-0242ac110005" in namespace "e2e-tests-emptydir-4xgpw" to be "success or failure"
Dec 27 10:51:02.364: INFO: Pod "pod-c61b2ec4-2896-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.060894ms
Dec 27 10:51:04.374: INFO: Pod "pod-c61b2ec4-2896-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022037752s
Dec 27 10:51:06.385: INFO: Pod "pod-c61b2ec4-2896-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03311713s
Dec 27 10:51:08.880: INFO: Pod "pod-c61b2ec4-2896-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.528078279s
Dec 27 10:51:10.891: INFO: Pod "pod-c61b2ec4-2896-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.539238914s
Dec 27 10:51:12.907: INFO: Pod "pod-c61b2ec4-2896-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.555538178s
STEP: Saw pod success
Dec 27 10:51:12.907: INFO: Pod "pod-c61b2ec4-2896-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 10:51:12.911: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c61b2ec4-2896-11ea-bad5-0242ac110005 container test-container:
STEP: delete the pod
Dec 27 10:51:13.059: INFO: Waiting for pod pod-c61b2ec4-2896-11ea-bad5-0242ac110005 to disappear
Dec 27 10:51:13.068: INFO: Pod pod-c61b2ec4-2896-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 10:51:13.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-4xgpw" for this suite.
Dec 27 10:51:21.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 10:51:21.176: INFO: namespace: e2e-tests-emptydir-4xgpw, resource: bindings, ignored listing per whitelist
Dec 27 10:51:21.279: INFO: namespace e2e-tests-emptydir-4xgpw deletion completed in 8.202525709s

• [SLOW TEST:19.239 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 10:51:21.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 27 10:51:21.515: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d199fb27-2896-11ea-bad5-0242ac110005" in namespace "e2e-tests-downward-api-bzjvv" to be "success or failure"
Dec 27 10:51:21.530: INFO: Pod "downwardapi-volume-d199fb27-2896-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.663703ms
Dec 27 10:51:23.582: INFO: Pod "downwardapi-volume-d199fb27-2896-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066132972s
Dec 27 10:51:25.596: INFO: Pod "downwardapi-volume-d199fb27-2896-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080124888s
Dec 27 10:51:27.767: INFO: Pod "downwardapi-volume-d199fb27-2896-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.251632679s
Dec 27 10:51:29.781: INFO: Pod "downwardapi-volume-d199fb27-2896-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.265513286s
Dec 27 10:51:31.799: INFO: Pod "downwardapi-volume-d199fb27-2896-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 10.283642015s STEP: Saw pod success Dec 27 10:51:31.800: INFO: Pod "downwardapi-volume-d199fb27-2896-11ea-bad5-0242ac110005" satisfied condition "success or failure" Dec 27 10:51:31.806: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d199fb27-2896-11ea-bad5-0242ac110005 container client-container: STEP: delete the pod Dec 27 10:51:31.931: INFO: Waiting for pod downwardapi-volume-d199fb27-2896-11ea-bad5-0242ac110005 to disappear Dec 27 10:51:31.946: INFO: Pod downwardapi-volume-d199fb27-2896-11ea-bad5-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 10:51:31.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-bzjvv" for this suite. Dec 27 10:51:38.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 10:51:38.891: INFO: namespace: e2e-tests-downward-api-bzjvv, resource: bindings, ignored listing per whitelist Dec 27 10:51:38.910: INFO: namespace e2e-tests-downward-api-bzjvv deletion completed in 6.607223644s • [SLOW TEST:17.630 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts 
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 10:51:38.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 27 10:52:05.421: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w7jgf PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 27 10:52:05.421: INFO: >>> kubeConfig: /root/.kube/config
Dec 27 10:52:05.857: INFO: Exec stderr: ""
Dec 27 10:52:05.857: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w7jgf PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 27 10:52:05.857: INFO: >>> kubeConfig: /root/.kube/config
Dec 27 10:52:06.333: INFO: Exec stderr: ""
Dec 27 10:52:06.333: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w7jgf PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 27 10:52:06.333: INFO: >>> kubeConfig: /root/.kube/config
Dec 27 10:52:06.724: INFO: Exec stderr: ""
Dec 27 10:52:06.724: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w7jgf PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 27 10:52:06.724: INFO: >>> kubeConfig: /root/.kube/config
Dec 27 10:52:07.144: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 27 10:52:07.144: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w7jgf PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 27 10:52:07.144: INFO: >>> kubeConfig: /root/.kube/config
Dec 27 10:52:07.487: INFO: Exec stderr: ""
Dec 27 10:52:07.487: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w7jgf PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 27 10:52:07.487: INFO: >>> kubeConfig: /root/.kube/config
Dec 27 10:52:07.826: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 27 10:52:07.826: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w7jgf PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 27 10:52:07.826: INFO: >>> kubeConfig: /root/.kube/config
Dec 27 10:52:08.248: INFO: Exec stderr: ""
Dec 27 10:52:08.248: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w7jgf PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 27 10:52:08.248: INFO: >>> kubeConfig: /root/.kube/config
Dec 27 10:52:08.946: INFO: Exec stderr: ""
Dec 27 10:52:08.946: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w7jgf PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 27 10:52:08.946: INFO: >>> kubeConfig: /root/.kube/config
Dec 27 10:52:09.331: INFO: Exec stderr: ""
Dec 27 10:52:09.331: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w7jgf PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 27 10:52:09.331: INFO: >>> kubeConfig: /root/.kube/config
Dec 27 10:52:09.634: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 10:52:09.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-w7jgf" for this suite.
Dec 27 10:53:05.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 10:53:05.835: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-w7jgf, resource: bindings, ignored listing per whitelist
Dec 27 10:53:05.847: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-w7jgf deletion completed in 56.202298482s
• [SLOW TEST:86.937 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 10:53:05.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 27 10:53:06.177: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0ffb9c89-2897-11ea-bad5-0242ac110005" in namespace "e2e-tests-downward-api-hr5lq" to be "success or failure"
Dec 27 10:53:06.206: INFO: Pod "downwardapi-volume-0ffb9c89-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.771925ms
Dec 27 10:53:08.216: INFO: Pod "downwardapi-volume-0ffb9c89-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038821458s
Dec 27 10:53:10.234: INFO: Pod "downwardapi-volume-0ffb9c89-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056989556s
Dec 27 10:53:12.503: INFO: Pod "downwardapi-volume-0ffb9c89-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.325785133s
Dec 27 10:53:14.552: INFO: Pod "downwardapi-volume-0ffb9c89-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.374734509s
Dec 27 10:53:16.574: INFO: Pod "downwardapi-volume-0ffb9c89-2897-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.396419454s
STEP: Saw pod success
Dec 27 10:53:16.574: INFO: Pod "downwardapi-volume-0ffb9c89-2897-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 10:53:16.583: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0ffb9c89-2897-11ea-bad5-0242ac110005 container client-container:
STEP: delete the pod
Dec 27 10:53:16.797: INFO: Waiting for pod downwardapi-volume-0ffb9c89-2897-11ea-bad5-0242ac110005 to disappear
Dec 27 10:53:16.813: INFO: Pod downwardapi-volume-0ffb9c89-2897-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 10:53:16.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hr5lq" for this suite.
Dec 27 10:53:22.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 10:53:22.940: INFO: namespace: e2e-tests-downward-api-hr5lq, resource: bindings, ignored listing per whitelist
Dec 27 10:53:23.081: INFO: namespace e2e-tests-downward-api-hr5lq deletion completed in 6.261391772s
• [SLOW TEST:17.234 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 10:53:23.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 27 10:53:23.343: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1a2edc01-2897-11ea-bad5-0242ac110005" in namespace "e2e-tests-projected-rgkkh" to be "success or failure"
Dec 27 10:53:23.350: INFO: Pod "downwardapi-volume-1a2edc01-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.564766ms
Dec 27 10:53:25.650: INFO: Pod "downwardapi-volume-1a2edc01-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.307082781s
Dec 27 10:53:27.686: INFO: Pod "downwardapi-volume-1a2edc01-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.343176358s
Dec 27 10:53:29.722: INFO: Pod "downwardapi-volume-1a2edc01-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.378515483s
Dec 27 10:53:31.740: INFO: Pod "downwardapi-volume-1a2edc01-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.396520941s
Dec 27 10:53:33.755: INFO: Pod "downwardapi-volume-1a2edc01-2897-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.411527153s
STEP: Saw pod success
Dec 27 10:53:33.755: INFO: Pod "downwardapi-volume-1a2edc01-2897-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 10:53:33.760: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-1a2edc01-2897-11ea-bad5-0242ac110005 container client-container:
STEP: delete the pod
Dec 27 10:53:34.068: INFO: Waiting for pod downwardapi-volume-1a2edc01-2897-11ea-bad5-0242ac110005 to disappear
Dec 27 10:53:34.082: INFO: Pod downwardapi-volume-1a2edc01-2897-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 10:53:34.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rgkkh" for this suite.
Dec 27 10:53:40.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 10:53:40.184: INFO: namespace: e2e-tests-projected-rgkkh, resource: bindings, ignored listing per whitelist
Dec 27 10:53:40.294: INFO: namespace e2e-tests-projected-rgkkh deletion completed in 6.20641134s
• [SLOW TEST:17.213 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 10:53:40.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-twdwm/secret-test-2494891a-2897-11ea-bad5-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 27 10:53:40.751: INFO: Waiting up to 5m0s for pod "pod-configmaps-2495772d-2897-11ea-bad5-0242ac110005" in namespace "e2e-tests-secrets-twdwm" to be "success or failure"
Dec 27 10:53:40.775: INFO: Pod "pod-configmaps-2495772d-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.872736ms
Dec 27 10:53:42.790: INFO: Pod "pod-configmaps-2495772d-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038417987s
Dec 27 10:53:44.805: INFO: Pod "pod-configmaps-2495772d-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053689089s
Dec 27 10:53:46.934: INFO: Pod "pod-configmaps-2495772d-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.182308294s
Dec 27 10:53:48.988: INFO: Pod "pod-configmaps-2495772d-2897-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.237104865s
STEP: Saw pod success
Dec 27 10:53:48.988: INFO: Pod "pod-configmaps-2495772d-2897-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 10:53:48.996: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-2495772d-2897-11ea-bad5-0242ac110005 container env-test:
STEP: delete the pod
Dec 27 10:53:49.127: INFO: Waiting for pod pod-configmaps-2495772d-2897-11ea-bad5-0242ac110005 to disappear
Dec 27 10:53:49.146: INFO: Pod pod-configmaps-2495772d-2897-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 10:53:49.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-twdwm" for this suite.
Dec 27 10:53:55.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 10:53:55.276: INFO: namespace: e2e-tests-secrets-twdwm, resource: bindings, ignored listing per whitelist
Dec 27 10:53:55.416: INFO: namespace e2e-tests-secrets-twdwm deletion completed in 6.264864582s
• [SLOW TEST:15.121 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 10:53:55.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Dec 27 10:53:55.670: INFO: Waiting up to 5m0s for pod "var-expansion-2d7ac885-2897-11ea-bad5-0242ac110005" in namespace "e2e-tests-var-expansion-pr952" to be "success or failure"
Dec 27 10:53:55.675: INFO: Pod "var-expansion-2d7ac885-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.490949ms
Dec 27 10:53:57.857: INFO: Pod "var-expansion-2d7ac885-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.186919148s
Dec 27 10:53:59.892: INFO: Pod "var-expansion-2d7ac885-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.221970716s
Dec 27 10:54:02.093: INFO: Pod "var-expansion-2d7ac885-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.423359854s
Dec 27 10:54:04.117: INFO: Pod "var-expansion-2d7ac885-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.446941914s
Dec 27 10:54:06.134: INFO: Pod "var-expansion-2d7ac885-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.463929044s
Dec 27 10:54:08.189: INFO: Pod "var-expansion-2d7ac885-2897-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.519162951s
STEP: Saw pod success
Dec 27 10:54:08.189: INFO: Pod "var-expansion-2d7ac885-2897-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 10:54:08.200: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-2d7ac885-2897-11ea-bad5-0242ac110005 container dapi-container:
STEP: delete the pod
Dec 27 10:54:08.382: INFO: Waiting for pod var-expansion-2d7ac885-2897-11ea-bad5-0242ac110005 to disappear
Dec 27 10:54:08.417: INFO: Pod var-expansion-2d7ac885-2897-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 10:54:08.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-pr952" for this suite.
Dec 27 10:54:14.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 10:54:14.576: INFO: namespace: e2e-tests-var-expansion-pr952, resource: bindings, ignored listing per whitelist
Dec 27 10:54:14.681: INFO: namespace e2e-tests-var-expansion-pr952 deletion completed in 6.25371147s
• [SLOW TEST:19.265 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 10:54:14.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-38ec9c89-2897-11ea-bad5-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 27 10:54:15.114: INFO: Waiting up to 5m0s for pod "pod-secrets-391355bc-2897-11ea-bad5-0242ac110005" in namespace "e2e-tests-secrets-fjk7n" to be "success or failure"
Dec 27 10:54:15.187: INFO: Pod "pod-secrets-391355bc-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 73.034501ms
Dec 27 10:54:17.412: INFO: Pod "pod-secrets-391355bc-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.297625053s
Dec 27 10:54:19.424: INFO: Pod "pod-secrets-391355bc-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.310545822s
Dec 27 10:54:21.453: INFO: Pod "pod-secrets-391355bc-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.33925356s
Dec 27 10:54:23.654: INFO: Pod "pod-secrets-391355bc-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.540090643s
Dec 27 10:54:25.671: INFO: Pod "pod-secrets-391355bc-2897-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.556662532s
STEP: Saw pod success
Dec 27 10:54:25.671: INFO: Pod "pod-secrets-391355bc-2897-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 10:54:25.675: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-391355bc-2897-11ea-bad5-0242ac110005 container secret-volume-test:
STEP: delete the pod
Dec 27 10:54:26.281: INFO: Waiting for pod pod-secrets-391355bc-2897-11ea-bad5-0242ac110005 to disappear
Dec 27 10:54:26.305: INFO: Pod pod-secrets-391355bc-2897-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 10:54:26.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-fjk7n" for this suite.
Dec 27 10:54:32.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 10:54:32.827: INFO: namespace: e2e-tests-secrets-fjk7n, resource: bindings, ignored listing per whitelist
Dec 27 10:54:32.890: INFO: namespace e2e-tests-secrets-fjk7n deletion completed in 6.56110775s
STEP: Destroying namespace "e2e-tests-secret-namespace-24d5n" for this suite.
Dec 27 10:54:38.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 10:54:39.083: INFO: namespace: e2e-tests-secret-namespace-24d5n, resource: bindings, ignored listing per whitelist
Dec 27 10:54:39.111: INFO: namespace e2e-tests-secret-namespace-24d5n deletion completed in 6.221584734s
• [SLOW TEST:24.430 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 10:54:39.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 27 10:54:39.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 10:54:49.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-b9gjr" for this suite.
Dec 27 10:55:33.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 10:55:33.958: INFO: namespace: e2e-tests-pods-b9gjr, resource: bindings, ignored listing per whitelist
Dec 27 10:55:34.060: INFO: namespace e2e-tests-pods-b9gjr deletion completed in 44.385098898s
• [SLOW TEST:54.948 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 10:55:34.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-685dfbd2-2897-11ea-bad5-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 27 10:55:34.484: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-685f8a0e-2897-11ea-bad5-0242ac110005" in namespace "e2e-tests-projected-8x2jr" to be "success or failure"
Dec 27 10:55:34.559: INFO: Pod "pod-projected-secrets-685f8a0e-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 74.506404ms
Dec 27 10:55:36.603: INFO: Pod "pod-projected-secrets-685f8a0e-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11884238s
Dec 27 10:55:38.619: INFO: Pod "pod-projected-secrets-685f8a0e-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134740099s
Dec 27 10:55:40.754: INFO: Pod "pod-projected-secrets-685f8a0e-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.269981639s
Dec 27 10:55:42.781: INFO: Pod "pod-projected-secrets-685f8a0e-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.296652782s
Dec 27 10:55:44.793: INFO: Pod "pod-projected-secrets-685f8a0e-2897-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.308864089s
STEP: Saw pod success
Dec 27 10:55:44.793: INFO: Pod "pod-projected-secrets-685f8a0e-2897-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 10:55:44.799: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-685f8a0e-2897-11ea-bad5-0242ac110005 container projected-secret-volume-test:
STEP: delete the pod
Dec 27 10:55:44.837: INFO: Waiting for pod pod-projected-secrets-685f8a0e-2897-11ea-bad5-0242ac110005 to disappear
Dec 27 10:55:44.853: INFO: Pod pod-projected-secrets-685f8a0e-2897-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 10:55:44.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8x2jr" for this suite.
Dec 27 10:55:51.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 10:55:51.219: INFO: namespace: e2e-tests-projected-8x2jr, resource: bindings, ignored listing per whitelist
Dec 27 10:55:51.295: INFO: namespace e2e-tests-projected-8x2jr deletion completed in 6.374149583s
• [SLOW TEST:17.235 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 10:55:51.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Dec 27 10:55:51.540: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-x5dpc,SelfLink:/api/v1/namespaces/e2e-tests-watch-x5dpc/configmaps/e2e-watch-test-label-changed,UID:728b71f8-2897-11ea-a994-fa163e34d433,ResourceVersion:16225647,Generation:0,CreationTimestamp:2019-12-27 10:55:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 27 10:55:51.540: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-x5dpc,SelfLink:/api/v1/namespaces/e2e-tests-watch-x5dpc/configmaps/e2e-watch-test-label-changed,UID:728b71f8-2897-11ea-a994-fa163e34d433,ResourceVersion:16225648,Generation:0,CreationTimestamp:2019-12-27 10:55:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 27 10:55:51.540: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-x5dpc,SelfLink:/api/v1/namespaces/e2e-tests-watch-x5dpc/configmaps/e2e-watch-test-label-changed,UID:728b71f8-2897-11ea-a994-fa163e34d433,ResourceVersion:16225649,Generation:0,CreationTimestamp:2019-12-27 10:55:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Dec 27 10:56:01.651: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-x5dpc,SelfLink:/api/v1/namespaces/e2e-tests-watch-x5dpc/configmaps/e2e-watch-test-label-changed,UID:728b71f8-2897-11ea-a994-fa163e34d433,ResourceVersion:16225663,Generation:0,CreationTimestamp:2019-12-27 10:55:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 27 10:56:01.651: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-x5dpc,SelfLink:/api/v1/namespaces/e2e-tests-watch-x5dpc/configmaps/e2e-watch-test-label-changed,UID:728b71f8-2897-11ea-a994-fa163e34d433,ResourceVersion:16225664,Generation:0,CreationTimestamp:2019-12-27 10:55:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Dec 27 10:56:01.651: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-x5dpc,SelfLink:/api/v1/namespaces/e2e-tests-watch-x5dpc/configmaps/e2e-watch-test-label-changed,UID:728b71f8-2897-11ea-a994-fa163e34d433,ResourceVersion:16225665,Generation:0,CreationTimestamp:2019-12-27 10:55:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 10:56:01.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-x5dpc" for this suite.
Dec 27 10:56:07.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 10:56:07.841: INFO: namespace: e2e-tests-watch-x5dpc, resource: bindings, ignored listing per whitelist
Dec 27 10:56:07.917: INFO: namespace e2e-tests-watch-x5dpc deletion completed in 6.257960098s
• [SLOW TEST:16.622 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 10:56:07.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Dec 27 10:56:08.546: INFO: Waiting up to 5m0s for pod "var-expansion-7caa3b6a-2897-11ea-bad5-0242ac110005" in namespace "e2e-tests-var-expansion-qj4jv" to be "success or failure"
Dec 27 10:56:08.679: INFO: Pod "var-expansion-7caa3b6a-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 133.259321ms
Dec 27 10:56:10.695: INFO: Pod "var-expansion-7caa3b6a-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149347071s
Dec 27 10:56:12.714: INFO: Pod "var-expansion-7caa3b6a-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.167822383s
Dec 27 10:56:14.944: INFO: Pod "var-expansion-7caa3b6a-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.398709318s
Dec 27 10:56:16.953: INFO: Pod "var-expansion-7caa3b6a-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.40693914s
Dec 27 10:56:18.973: INFO: Pod "var-expansion-7caa3b6a-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.42721879s
Dec 27 10:56:21.173: INFO: Pod "var-expansion-7caa3b6a-2897-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.627435749s
STEP: Saw pod success
Dec 27 10:56:21.173: INFO: Pod "var-expansion-7caa3b6a-2897-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 10:56:21.187: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-7caa3b6a-2897-11ea-bad5-0242ac110005 container dapi-container:
STEP: delete the pod
Dec 27 10:56:21.327: INFO: Waiting for pod var-expansion-7caa3b6a-2897-11ea-bad5-0242ac110005 to disappear
Dec 27 10:56:21.357: INFO: Pod var-expansion-7caa3b6a-2897-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 10:56:21.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-qj4jv" for this suite.
Dec 27 10:56:27.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 10:56:27.543: INFO: namespace: e2e-tests-var-expansion-qj4jv, resource: bindings, ignored listing per whitelist
Dec 27 10:56:27.563: INFO: namespace e2e-tests-var-expansion-qj4jv deletion completed in 6.198371494s
• [SLOW TEST:19.646 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 10:56:27.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 27 10:56:27.876: INFO: Creating deployment "nginx-deployment"
Dec 27 10:56:27.885: INFO: Waiting for observed generation 1
Dec 27 10:56:30.292: INFO: Waiting for all required pods to come up
Dec 27 10:56:30.308: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Dec 27 10:57:09.067: INFO: Waiting for deployment "nginx-deployment" to complete
Dec 27 10:57:09.079: INFO: Updating deployment "nginx-deployment" with a non-existent image
Dec 27 10:57:09.095: INFO: Updating deployment nginx-deployment
Dec 27 10:57:09.095: INFO: Waiting for observed generation 2
Dec 27 10:57:12.929: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Dec 27 10:57:13.752: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Dec 27 10:57:13.760: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 27 10:57:15.359: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Dec 27 10:57:15.359: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Dec 27 10:57:16.195: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 27 10:57:17.845: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Dec 27 10:57:17.845: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Dec 27 10:57:18.612: INFO: Updating deployment nginx-deployment
Dec 27 10:57:18.612: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Dec 27 10:57:19.151: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Dec 27 10:57:19.576: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 27 10:57:19.708: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-t6nfh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-t6nfh/deployments/nginx-deployment,UID:8838663f-2897-11ea-a994-fa163e34d433,ResourceVersion:16225957,Generation:3,CreationTimestamp:2019-12-27 10:56:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2019-12-27 10:57:13 +0000 UTC 2019-12-27 10:56:28 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2019-12-27 10:57:19 +0000 UTC 2019-12-27 10:57:19 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}
Dec 27 10:57:19.952: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-t6nfh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-t6nfh/replicasets/nginx-deployment-5c98f8fb5,UID:a0cacbff-2897-11ea-a994-fa163e34d433,ResourceVersion:16225953,Generation:3,CreationTimestamp:2019-12-27 10:57:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 8838663f-2897-11ea-a994-fa163e34d433 0xc001ec8a97 0xc001ec8a98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 27 10:57:19.952: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Dec 27 10:57:19.952: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-t6nfh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-t6nfh/replicasets/nginx-deployment-85ddf47c5d,UID:8853418a-2897-11ea-a994-fa163e34d433,ResourceVersion:16225951,Generation:3,CreationTimestamp:2019-12-27 10:56:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 8838663f-2897-11ea-a994-fa163e34d433 0xc001ec8b57 0xc001ec8b58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Dec 27 10:57:20.051: INFO: Pod "nginx-deployment-5c98f8fb5-2ckl8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-2ckl8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-t6nfh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t6nfh/pods/nginx-deployment-5c98f8fb5-2ckl8,UID:a0dd1bdf-2897-11ea-a994-fa163e34d433,ResourceVersion:16225927,Generation:0,CreationTimestamp:2019-12-27 10:57:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 a0cacbff-2897-11ea-a994-fa163e34d433 0xc001d9b077 0xc001d9b078}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4q7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4q7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4q7kb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d9b0e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d9b100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:09 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-27 10:57:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 27 10:57:20.052: INFO: Pod "nginx-deployment-5c98f8fb5-54lvp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-54lvp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-t6nfh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t6nfh/pods/nginx-deployment-5c98f8fb5-54lvp,UID:a12e6689-2897-11ea-a994-fa163e34d433,ResourceVersion:16225939,Generation:0,CreationTimestamp:2019-12-27 10:57:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 a0cacbff-2897-11ea-a994-fa163e34d433 0xc001d9b1c7 0xc001d9b1c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4q7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4q7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4q7kb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d9b230} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d9b250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:09 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-27 10:57:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 27 10:57:20.056: INFO: Pod "nginx-deployment-5c98f8fb5-6z8bp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-6z8bp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-t6nfh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t6nfh/pods/nginx-deployment-5c98f8fb5-6z8bp,UID:a0e38c7f-2897-11ea-a994-fa163e34d433,ResourceVersion:16225937,Generation:0,CreationTimestamp:2019-12-27 10:57:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 a0cacbff-2897-11ea-a994-fa163e34d433 0xc001d9b317 0xc001d9b318}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4q7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4q7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4q7kb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d9b380} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d9b3a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:09 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-27 10:57:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 27 10:57:20.057: INFO: Pod "nginx-deployment-5c98f8fb5-dpmm7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dpmm7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-t6nfh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t6nfh/pods/nginx-deployment-5c98f8fb5-dpmm7,UID:a1442fad-2897-11ea-a994-fa163e34d433,ResourceVersion:16225946,Generation:0,CreationTimestamp:2019-12-27 10:57:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 a0cacbff-2897-11ea-a994-fa163e34d433 0xc001d9b467 0xc001d9b468}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4q7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4q7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4q7kb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d9b4d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d9b4f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:09 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-27 10:57:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 27 10:57:20.057: INFO: Pod "nginx-deployment-5c98f8fb5-ffmdz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ffmdz,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-t6nfh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t6nfh/pods/nginx-deployment-5c98f8fb5-ffmdz,UID:a7210d7b-2897-11ea-a994-fa163e34d433,ResourceVersion:16225966,Generation:0,CreationTimestamp:2019-12-27 10:57:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 a0cacbff-2897-11ea-a994-fa163e34d433 0xc001d9b5b7 0xc001d9b5b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4q7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4q7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil
nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4q7kb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d9b620} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d9b640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 27 10:57:20.058: INFO: Pod "nginx-deployment-5c98f8fb5-gkwr4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gkwr4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-t6nfh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t6nfh/pods/nginx-deployment-5c98f8fb5-gkwr4,UID:a70bf29a-2897-11ea-a994-fa163e34d433,ResourceVersion:16225970,Generation:0,CreationTimestamp:2019-12-27 10:57:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 a0cacbff-2897-11ea-a994-fa163e34d433 0xc001d9b6a0 0xc001d9b6a1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4q7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4q7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4q7kb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d9b710} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d9b730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:19 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 27 
10:57:20.058: INFO: Pod "nginx-deployment-5c98f8fb5-wt4ml" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wt4ml,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-t6nfh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t6nfh/pods/nginx-deployment-5c98f8fb5-wt4ml,UID:a720b226-2897-11ea-a994-fa163e34d433,ResourceVersion:16225967,Generation:0,CreationTimestamp:2019-12-27 10:57:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 a0cacbff-2897-11ea-a994-fa163e34d433 0xc001d9b7a7 0xc001d9b7a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4q7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4q7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4q7kb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d9b810} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d9b830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 27 10:57:20.058: INFO: Pod "nginx-deployment-5c98f8fb5-ztwhs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ztwhs,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-t6nfh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t6nfh/pods/nginx-deployment-5c98f8fb5-ztwhs,UID:a0e39562-2897-11ea-a994-fa163e34d433,ResourceVersion:16225932,Generation:0,CreationTimestamp:2019-12-27 10:57:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 a0cacbff-2897-11ea-a994-fa163e34d433 0xc001d9b890 0xc001d9b891}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4q7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4q7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4q7kb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d9b900} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d9b920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:09 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-27 10:57:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 27 10:57:20.058: INFO: Pod "nginx-deployment-85ddf47c5d-2z7r8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2z7r8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t6nfh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t6nfh/pods/nginx-deployment-85ddf47c5d-2z7r8,UID:a72133f5-2897-11ea-a994-fa163e34d433,ResourceVersion:16225972,Generation:0,CreationTimestamp:2019-12-27 10:57:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8853418a-2897-11ea-a994-fa163e34d433 0xc001d9b9e7 0xc001d9b9e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4q7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4q7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4q7kb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d9ba50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d9ba70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 27 10:57:20.058: INFO: Pod "nginx-deployment-85ddf47c5d-2zchc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2zchc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t6nfh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t6nfh/pods/nginx-deployment-85ddf47c5d-2zchc,UID:a722d33f-2897-11ea-a994-fa163e34d433,ResourceVersion:16225971,Generation:0,CreationTimestamp:2019-12-27 10:57:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8853418a-2897-11ea-a994-fa163e34d433 0xc001d9bad0 0xc001d9bad1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4q7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4q7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4q7kb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d9bb30} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d9bb50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 27 10:57:20.058: INFO: Pod "nginx-deployment-85ddf47c5d-55w5g" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-55w5g,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t6nfh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t6nfh/pods/nginx-deployment-85ddf47c5d-55w5g,UID:a70b280b-2897-11ea-a994-fa163e34d433,ResourceVersion:16225959,Generation:0,CreationTimestamp:2019-12-27 10:57:19 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8853418a-2897-11ea-a994-fa163e34d433 0xc001d9bbb0 0xc001d9bbb1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4q7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4q7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4q7kb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d9bc10} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d9bc30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:19 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 27 10:57:20.059: INFO: Pod "nginx-deployment-85ddf47c5d-95gr2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-95gr2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t6nfh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t6nfh/pods/nginx-deployment-85ddf47c5d-95gr2,UID:a70d40f1-2897-11ea-a994-fa163e34d433,ResourceVersion:16225964,Generation:0,CreationTimestamp:2019-12-27 10:57:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8853418a-2897-11ea-a994-fa163e34d433 0xc001d9bca7 0xc001d9bca8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4q7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4q7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4q7kb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d9bd10} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d9bd30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:19 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 27 10:57:20.059: INFO: Pod "nginx-deployment-85ddf47c5d-9r986" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9r986,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t6nfh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t6nfh/pods/nginx-deployment-85ddf47c5d-9r986,UID:886df42c-2897-11ea-a994-fa163e34d433,ResourceVersion:16225822,Generation:0,CreationTimestamp:2019-12-27 10:56:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8853418a-2897-11ea-a994-fa163e34d433 0xc001d9bda7 0xc001d9bda8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4q7kb {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-4q7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4q7kb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d9be50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d9be80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:56:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:56:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:56:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:56:28 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-27 10:56:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-27 10:56:51 +0000 UTC,} 
nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0d985bb1bbb836acda6323262e8dc5cbfa7ea43b5cd2884546f9e19fa54f16e4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 27 10:57:20.059: INFO: Pod "nginx-deployment-85ddf47c5d-b4ndt" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-b4ndt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t6nfh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t6nfh/pods/nginx-deployment-85ddf47c5d-b4ndt,UID:886d3d1a-2897-11ea-a994-fa163e34d433,ResourceVersion:16225876,Generation:0,CreationTimestamp:2019-12-27 10:56:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8853418a-2897-11ea-a994-fa163e34d433 0xc001d9bfb7 0xc001d9bfb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4q7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4q7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4q7kb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002322020} {node.kubernetes.io/unreachable Exists NoExecute 0xc002322040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:56:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:04 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:04 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:56:28 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2019-12-27 10:56:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-27 10:57:03 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://4714e37500a04b2d5d3a55f48406fe095fed5572733dcfa7bd7e0108a378d3e4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 27 10:57:20.059: INFO: Pod "nginx-deployment-85ddf47c5d-c9s4x" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-c9s4x,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t6nfh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t6nfh/pods/nginx-deployment-85ddf47c5d-c9s4x,UID:88713786-2897-11ea-a994-fa163e34d433,ResourceVersion:16225879,Generation:0,CreationTimestamp:2019-12-27 10:56:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8853418a-2897-11ea-a994-fa163e34d433 0xc002322107 0xc002322108}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4q7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4q7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4q7kb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002322170} {node.kubernetes.io/unreachable Exists NoExecute 0xc002322190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:56:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:04 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:04 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:56:28 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2019-12-27 10:56:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-27 10:57:03 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://73a8efd7632c6516ee834e7ae67707afbf72faab8ea37bb2d41c568a48bde057}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 27 10:57:20.059: INFO: Pod "nginx-deployment-85ddf47c5d-dr92z" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dr92z,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t6nfh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t6nfh/pods/nginx-deployment-85ddf47c5d-dr92z,UID:8870d20b-2897-11ea-a994-fa163e34d433,ResourceVersion:16225857,Generation:0,CreationTimestamp:2019-12-27 10:56:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8853418a-2897-11ea-a994-fa163e34d433 0xc002322257 0xc002322258}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4q7kb {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-4q7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4q7kb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023222c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023222e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:56:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:03 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:56:28 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2019-12-27 10:56:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-27 10:57:03 +0000 UTC,} 
nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://596a98d33483237a626d8ba6393bacd28068eb1e6f8c9e24122b2ff46abfb688}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 27 10:57:20.059: INFO: Pod "nginx-deployment-85ddf47c5d-hdtl6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hdtl6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t6nfh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t6nfh/pods/nginx-deployment-85ddf47c5d-hdtl6,UID:a70db915-2897-11ea-a994-fa163e34d433,ResourceVersion:16225969,Generation:0,CreationTimestamp:2019-12-27 10:57:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8853418a-2897-11ea-a994-fa163e34d433 0xc0023223a7 0xc0023223a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4q7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4q7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4q7kb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002322410} {node.kubernetes.io/unreachable Exists NoExecute 0xc002322430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:19 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 27 10:57:20.060: INFO: Pod "nginx-deployment-85ddf47c5d-ljffp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ljffp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t6nfh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t6nfh/pods/nginx-deployment-85ddf47c5d-ljffp,UID:a720ede9-2897-11ea-a994-fa163e34d433,ResourceVersion:16225965,Generation:0,CreationTimestamp:2019-12-27 10:57:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8853418a-2897-11ea-a994-fa163e34d433 0xc0023224a7 0xc0023224a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4q7kb {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-4q7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4q7kb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002322510} {node.kubernetes.io/unreachable Exists NoExecute 0xc002322530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 27 10:57:20.060: INFO: Pod "nginx-deployment-85ddf47c5d-lk6dd" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lk6dd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t6nfh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t6nfh/pods/nginx-deployment-85ddf47c5d-lk6dd,UID:88710139-2897-11ea-a994-fa163e34d433,ResourceVersion:16225868,Generation:0,CreationTimestamp:2019-12-27 10:56:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8853418a-2897-11ea-a994-fa163e34d433 0xc002322590 0xc002322591}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4q7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4q7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4q7kb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc0023225f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002322610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:56:30 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:04 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:04 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:56:28 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2019-12-27 10:56:30 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-27 10:57:03 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c70deb292fe02ec0c3b1a0daa01f66826c7e7a01c3d6a50ad84b76c4869376c4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 27 10:57:20.060: INFO: Pod "nginx-deployment-85ddf47c5d-pbq88" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pbq88,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t6nfh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t6nfh/pods/nginx-deployment-85ddf47c5d-pbq88,UID:8871446e-2897-11ea-a994-fa163e34d433,ResourceVersion:16225863,Generation:0,CreationTimestamp:2019-12-27 10:56:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8853418a-2897-11ea-a994-fa163e34d433 0xc0023226d7 0xc0023226d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4q7kb {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-4q7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4q7kb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002322740} {node.kubernetes.io/unreachable Exists NoExecute 0xc002322760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:56:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:03 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:56:28 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2019-12-27 10:56:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-27 10:57:02 +0000 UTC,} 
nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2c2508bcc9c7a1d0aa5f3e37f69dcc5d82d3a93e6080bdf40ea0508baf52908e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 27 10:57:20.060: INFO: Pod "nginx-deployment-85ddf47c5d-r4lfl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-r4lfl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t6nfh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t6nfh/pods/nginx-deployment-85ddf47c5d-r4lfl,UID:a7224af8-2897-11ea-a994-fa163e34d433,ResourceVersion:16225968,Generation:0,CreationTimestamp:2019-12-27 10:57:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8853418a-2897-11ea-a994-fa163e34d433 0xc002322827 0xc002322828}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4q7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4q7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4q7kb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002322890} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023228b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 27 10:57:20.060: INFO: Pod "nginx-deployment-85ddf47c5d-wkddh" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wkddh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t6nfh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t6nfh/pods/nginx-deployment-85ddf47c5d-wkddh,UID:885fe0bf-2897-11ea-a994-fa163e34d433,ResourceVersion:16225835,Generation:0,CreationTimestamp:2019-12-27 10:56:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8853418a-2897-11ea-a994-fa163e34d433 0xc002322910 0xc002322911}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4q7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4q7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4q7kb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002322970} {node.kubernetes.io/unreachable Exists NoExecute 0xc002322990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:56:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:56:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:56:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:56:28 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-27 10:56:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-27 10:56:55 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine 
docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f3899e112d95372a7020472fade2c22861171deac29fc1b770e5bd1e4b26aafa}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 27 10:57:20.061: INFO: Pod "nginx-deployment-85ddf47c5d-z5fqk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-z5fqk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t6nfh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t6nfh/pods/nginx-deployment-85ddf47c5d-z5fqk,UID:887a90d5-2897-11ea-a994-fa163e34d433,ResourceVersion:16225869,Generation:0,CreationTimestamp:2019-12-27 10:56:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8853418a-2897-11ea-a994-fa163e34d433 0xc002322a57 0xc002322a58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4q7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4q7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4q7kb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002322ac0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002322ae0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:56:31 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:04 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:57:04 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:56:28 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2019-12-27 10:56:31 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-27 10:57:03 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://bfae68b72b35b638546ea213616f7d46d67b6a124ce545152d77de8ba8e319f3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 10:57:20.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-t6nfh" for this suite.
Dec 27 10:58:42.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 10:58:45.457: INFO: namespace: e2e-tests-deployment-t6nfh, resource: bindings, ignored listing per whitelist
Dec 27 10:58:45.744: INFO: namespace e2e-tests-deployment-t6nfh deletion completed in 1m24.809771309s

• [SLOW TEST:138.181 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 10:58:45.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-dbdec096-2897-11ea-bad5-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 27 10:58:48.696: INFO: Waiting up to 5m0s for pod "pod-secrets-dc0a7adb-2897-11ea-bad5-0242ac110005" in namespace "e2e-tests-secrets-czfzc" to be "success or failure"
Dec 27 10:58:48.733: INFO: Pod "pod-secrets-dc0a7adb-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 36.615861ms
Dec 27 10:58:50.766: INFO: Pod "pod-secrets-dc0a7adb-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069194823s
Dec 27 10:58:52.785: INFO: Pod "pod-secrets-dc0a7adb-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088811721s
Dec 27 10:58:55.027: INFO: Pod "pod-secrets-dc0a7adb-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.330375973s
Dec 27 10:58:57.048: INFO: Pod "pod-secrets-dc0a7adb-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.351468647s
Dec 27 10:59:00.520: INFO: Pod "pod-secrets-dc0a7adb-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.823522044s
Dec 27 10:59:02.549: INFO: Pod "pod-secrets-dc0a7adb-2897-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.852439061s
Dec 27 10:59:04.587: INFO: Pod "pod-secrets-dc0a7adb-2897-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.890674498s
STEP: Saw pod success
Dec 27 10:59:04.587: INFO: Pod "pod-secrets-dc0a7adb-2897-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 10:59:04.603: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-dc0a7adb-2897-11ea-bad5-0242ac110005 container secret-volume-test:
STEP: delete the pod
Dec 27 10:59:05.954: INFO: Waiting for pod pod-secrets-dc0a7adb-2897-11ea-bad5-0242ac110005 to disappear
Dec 27 10:59:05.961: INFO: Pod pod-secrets-dc0a7adb-2897-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 10:59:05.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-czfzc" for this suite.
Dec 27 10:59:12.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 10:59:12.257: INFO: namespace: e2e-tests-secrets-czfzc, resource: bindings, ignored listing per whitelist
Dec 27 10:59:12.309: INFO: namespace e2e-tests-secrets-czfzc deletion completed in 6.30527137s

• [SLOW TEST:26.565 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 10:59:12.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 27 10:59:12.455: INFO: Creating deployment "test-recreate-deployment"
Dec 27 10:59:12.517: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Dec 27 10:59:12.578: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Dec 27 10:59:15.206: INFO: Waiting deployment "test-recreate-deployment" to complete
Dec 27 10:59:15.218: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713041152, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713041152, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713041152, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713041152, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 27 10:59:17.226: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713041152, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713041152, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713041152, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713041152, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 27 10:59:19.644: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713041152, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713041152, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713041152, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713041152, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 27 10:59:21.230: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713041152, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713041152, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713041152, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713041152, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 27 10:59:23.231: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Dec 27
10:59:23.259: INFO: Updating deployment test-recreate-deployment Dec 27 10:59:23.259: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Dec 27 10:59:24.032: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-5vp4k,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-5vp4k/deployments/test-recreate-deployment,UID:ea513568-2897-11ea-a994-fa163e34d433,ResourceVersion:16226416,Generation:2,CreationTimestamp:2019-12-27 10:59:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-27 10:59:23 +0000 UTC 2019-12-27 10:59:23 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-27 10:59:23 +0000 UTC 2019-12-27 10:59:12 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Dec 27 10:59:24.135: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-5vp4k,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-5vp4k/replicasets/test-recreate-deployment-589c4bfd,UID:f0fc147a-2897-11ea-a994-fa163e34d433,ResourceVersion:16226413,Generation:1,CreationTimestamp:2019-12-27 10:59:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment ea513568-2897-11ea-a994-fa163e34d433 0xc00180911f 0xc001809130}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 27 10:59:24.135: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Dec 27 10:59:24.135: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-5vp4k,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-5vp4k/replicasets/test-recreate-deployment-5bf7f65dc,UID:ea641bd5-2897-11ea-a994-fa163e34d433,ResourceVersion:16226404,Generation:2,CreationTimestamp:2019-12-27 10:59:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment ea513568-2897-11ea-a994-fa163e34d433 0xc0018092e0 0xc0018092e1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 27 10:59:24.169: INFO: Pod "test-recreate-deployment-589c4bfd-22pm6" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-22pm6,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-5vp4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5vp4k/pods/test-recreate-deployment-589c4bfd-22pm6,UID:f10a7ee0-2897-11ea-a994-fa163e34d433,ResourceVersion:16226417,Generation:0,CreationTimestamp:2019-12-27 10:59:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd f0fc147a-2897-11ea-a994-fa163e34d433 0xc0009df35f 0xc0009df370}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xktz6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xktz6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-xktz6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0009df3d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0009df3f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:59:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:59:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:59:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:59:23 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-27 10:59:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 10:59:24.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-5vp4k" for this suite. 
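[Editor's note] The Recreate rollout exercised in this test (the old ReplicaSet is scaled to zero before the new one comes up, so old and new pods never run together) can be reproduced with a minimal manifest. This is a hedged sketch, not the test's source: the name, labels, grace period, and nginx image are taken from the log above; everything else is assumed.

```yaml
# Sketch of the scenario in the log above: a Deployment with the Recreate
# strategy, so old pods are deleted before any new pods are created.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate          # no RollingUpdate block: old ReplicaSet -> 0 first
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

Updating `.spec.template` (the log shows the test switching the pod template from a redis test image to nginx) is what triggers the rollout: with `type: Recreate`, the controller scales the old ReplicaSet down to 0 and only then scales the new ReplicaSet up, which is why the log briefly reports `UnavailableReplicas:1` with `MinimumReplicasUnavailable`.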
Dec 27 10:59:36.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 10:59:36.541: INFO: namespace: e2e-tests-deployment-5vp4k, resource: bindings, ignored listing per whitelist Dec 27 10:59:36.561: INFO: namespace e2e-tests-deployment-5vp4k deletion completed in 12.372465357s • [SLOW TEST:24.251 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 10:59:36.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 27 10:59:37.007: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"f8d7db36-2897-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001770f62), BlockOwnerDeletion:(*bool)(0xc001770f63)}} Dec 27 10:59:37.042: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"f8d4c87c-2897-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0017710fa), BlockOwnerDeletion:(*bool)(0xc0017710fb)}} Dec 27 10:59:37.300: INFO: 
pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"f8d641dd-2897-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00193969a), BlockOwnerDeletion:(*bool)(0xc00193969b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 10:59:42.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-f5rk6" for this suite. Dec 27 10:59:50.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 10:59:50.533: INFO: namespace: e2e-tests-gc-f5rk6, resource: bindings, ignored listing per whitelist Dec 27 10:59:50.611: INFO: namespace e2e-tests-gc-f5rk6 deletion completed in 8.270179218s • [SLOW TEST:14.049 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 10:59:50.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 27 10:59:50.826: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 10:59:52.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-9b4qc" for this suite. Dec 27 10:59:58.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 10:59:58.123: INFO: namespace: e2e-tests-custom-resource-definition-9b4qc, resource: bindings, ignored listing per whitelist Dec 27 10:59:58.179: INFO: namespace e2e-tests-custom-resource-definition-9b4qc deletion completed in 6.153467693s • [SLOW TEST:7.568 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 10:59:58.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should 
allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Dec 27 10:59:59.207: INFO: created pod pod-service-account-defaultsa Dec 27 10:59:59.207: INFO: pod pod-service-account-defaultsa service account token volume mount: true Dec 27 10:59:59.241: INFO: created pod pod-service-account-mountsa Dec 27 10:59:59.241: INFO: pod pod-service-account-mountsa service account token volume mount: true Dec 27 10:59:59.265: INFO: created pod pod-service-account-nomountsa Dec 27 10:59:59.265: INFO: pod pod-service-account-nomountsa service account token volume mount: false Dec 27 10:59:59.399: INFO: created pod pod-service-account-defaultsa-mountspec Dec 27 10:59:59.399: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Dec 27 10:59:59.527: INFO: created pod pod-service-account-mountsa-mountspec Dec 27 10:59:59.527: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Dec 27 10:59:59.605: INFO: created pod pod-service-account-nomountsa-mountspec Dec 27 10:59:59.605: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Dec 27 10:59:59.664: INFO: created pod pod-service-account-defaultsa-nomountspec Dec 27 10:59:59.664: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Dec 27 10:59:59.686: INFO: created pod pod-service-account-mountsa-nomountspec Dec 27 10:59:59.686: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Dec 27 11:00:00.804: INFO: created pod pod-service-account-nomountsa-nomountspec Dec 27 11:00:00.804: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 
11:00:00.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-fc66k" for this suite. Dec 27 11:00:30.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:00:31.030: INFO: namespace: e2e-tests-svcaccounts-fc66k, resource: bindings, ignored listing per whitelist Dec 27 11:00:31.062: INFO: namespace e2e-tests-svcaccounts-fc66k deletion completed in 28.44073077s • [SLOW TEST:32.882 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:00:31.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Dec 27 11:00:31.218: INFO: Waiting up to 5m0s for pod "pod-1940903b-2898-11ea-bad5-0242ac110005" in namespace "e2e-tests-emptydir-724xp" to be "success or failure" Dec 27 11:00:31.297: INFO: Pod "pod-1940903b-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 78.802891ms Dec 27 11:00:33.542: INFO: Pod "pod-1940903b-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324114545s Dec 27 11:00:35.562: INFO: Pod "pod-1940903b-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.343496891s Dec 27 11:00:37.572: INFO: Pod "pod-1940903b-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.353807682s Dec 27 11:00:39.598: INFO: Pod "pod-1940903b-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.379915989s Dec 27 11:00:41.605: INFO: Pod "pod-1940903b-2898-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.386508445s STEP: Saw pod success Dec 27 11:00:41.605: INFO: Pod "pod-1940903b-2898-11ea-bad5-0242ac110005" satisfied condition "success or failure" Dec 27 11:00:41.608: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-1940903b-2898-11ea-bad5-0242ac110005 container test-container: STEP: delete the pod Dec 27 11:00:42.247: INFO: Waiting for pod pod-1940903b-2898-11ea-bad5-0242ac110005 to disappear Dec 27 11:00:42.535: INFO: Pod pod-1940903b-2898-11ea-bad5-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:00:42.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-724xp" for this suite. 
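[Editor's note] The "(non-root,0644,tmpfs)" variant above exercises a memory-backed emptyDir mounted into a container running as a non-root user. The following is a hedged sketch of such a pod, assuming details the log does not show: the pod name, image, UID, and the written file path are all hypothetical; only the container name `test-container` and the tmpfs/0644/non-root combination come from the log.

```yaml
# Sketch (assumed details marked): a tmpfs-backed emptyDir written with
# mode 0644 by a non-root container, then inspected.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo      # hypothetical name
spec:
  securityContext:
    runAsUser: 1001              # assumed non-root UID
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox               # assumed image
    command: ["sh", "-c", "touch /mnt/test && chmod 0644 /mnt/test && ls -l /mnt"]
    volumeMounts:
    - name: tmp
      mountPath: /mnt
  volumes:
  - name: tmp
    emptyDir:
      medium: Memory             # tmpfs-backed, as in the (0644,tmpfs) variant
```

Like the e2e test, a pod of this shape runs to completion and is checked for the `Succeeded` phase ("success or failure" in the log) rather than staying up as a service.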
Dec 27 11:00:48.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:00:49.064: INFO: namespace: e2e-tests-emptydir-724xp, resource: bindings, ignored listing per whitelist Dec 27 11:00:49.190: INFO: namespace e2e-tests-emptydir-724xp deletion completed in 6.590428357s • [SLOW TEST:18.128 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:00:49.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Dec 27 11:00:49.451: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-cjkz9,SelfLink:/api/v1/namespaces/e2e-tests-watch-cjkz9/configmaps/e2e-watch-test-resource-version,UID:2416890a-2898-11ea-a994-fa163e34d433,ResourceVersion:16226723,Generation:0,CreationTimestamp:2019-12-27 11:00:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Dec 27 11:00:49.451: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-cjkz9,SelfLink:/api/v1/namespaces/e2e-tests-watch-cjkz9/configmaps/e2e-watch-test-resource-version,UID:2416890a-2898-11ea-a994-fa163e34d433,ResourceVersion:16226724,Generation:0,CreationTimestamp:2019-12-27 11:00:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:00:49.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-cjkz9" for this suite. 
Dec 27 11:00:55.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 11:00:55.532: INFO: namespace: e2e-tests-watch-cjkz9, resource: bindings, ignored listing per whitelist
Dec 27 11:00:55.608: INFO: namespace e2e-tests-watch-cjkz9 deletion completed in 6.15214796s
• [SLOW TEST:6.418 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 11:00:55.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Dec 27 11:00:55.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-vxfm6'
Dec 27 11:00:58.301: INFO: stderr: ""
Dec 27 11:00:58.301: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Dec 27 11:00:59.637: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 11:00:59.637: INFO: Found 0 / 1
Dec 27 11:01:00.327: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 11:01:00.327: INFO: Found 0 / 1
Dec 27 11:01:01.318: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 11:01:01.318: INFO: Found 0 / 1
Dec 27 11:01:02.315: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 11:01:02.315: INFO: Found 0 / 1
Dec 27 11:01:03.916: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 11:01:03.916: INFO: Found 0 / 1
Dec 27 11:01:04.367: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 11:01:04.367: INFO: Found 0 / 1
Dec 27 11:01:05.333: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 11:01:05.333: INFO: Found 0 / 1
Dec 27 11:01:06.316: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 11:01:06.316: INFO: Found 1 / 1
Dec 27 11:01:06.316: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Dec 27 11:01:06.321: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 11:01:06.321: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings
Dec 27 11:01:06.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-4zg6j redis-master --namespace=e2e-tests-kubectl-vxfm6'
Dec 27 11:01:06.592: INFO: stderr: ""
Dec 27 11:01:06.592: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 27 Dec 11:01:05.680 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 27 Dec 11:01:05.680 # Server started, Redis version 3.2.12\n1:M 27 Dec 11:01:05.680 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 27 Dec 11:01:05.680 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Dec 27 11:01:06.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-4zg6j redis-master --namespace=e2e-tests-kubectl-vxfm6 --tail=1'
Dec 27 11:01:06.702: INFO: stderr: ""
Dec 27 11:01:06.702: INFO: stdout: "1:M 27 Dec 11:01:05.680 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Dec 27 11:01:06.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-4zg6j redis-master --namespace=e2e-tests-kubectl-vxfm6 --limit-bytes=1'
Dec 27 11:01:06.810: INFO: stderr: ""
Dec 27 11:01:06.810: INFO: stdout: " "
STEP: exposing timestamps
Dec 27 11:01:06.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-4zg6j redis-master --namespace=e2e-tests-kubectl-vxfm6 --tail=1 --timestamps'
Dec 27 11:01:06.922: INFO: stderr: ""
Dec 27 11:01:06.922: INFO: stdout: "2019-12-27T11:01:05.681905788Z 1:M 27 Dec 11:01:05.680 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Dec 27 11:01:09.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-4zg6j redis-master --namespace=e2e-tests-kubectl-vxfm6 --since=1s'
Dec 27 11:01:09.594: INFO: stderr: ""
Dec 27 11:01:09.594: INFO: stdout: ""
Dec 27 11:01:09.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-4zg6j redis-master --namespace=e2e-tests-kubectl-vxfm6 --since=24h'
Dec 27 11:01:09.720: INFO: stderr: ""
Dec 27 11:01:09.720: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 27 Dec 11:01:05.680 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 27 Dec 11:01:05.680 # Server started, Redis version 3.2.12\n1:M 27 Dec 11:01:05.680 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 27 Dec 11:01:05.680 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Dec 27 11:01:09.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-vxfm6'
Dec 27 11:01:09.839: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 27 11:01:09.839: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Dec 27 11:01:09.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-vxfm6'
Dec 27 11:01:09.960: INFO: stderr: "No resources found.\n"
Dec 27 11:01:09.960: INFO: stdout: ""
Dec 27 11:01:09.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-vxfm6 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 27 11:01:10.086: INFO: stderr: ""
Dec 27 11:01:10.086: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 11:01:10.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vxfm6" for this suite.
Dec 27 11:01:16.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 11:01:16.387: INFO: namespace: e2e-tests-kubectl-vxfm6, resource: bindings, ignored listing per whitelist
Dec 27 11:01:16.468: INFO: namespace e2e-tests-kubectl-vxfm6 deletion completed in 6.360277046s
• [SLOW TEST:20.859 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 11:01:16.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Dec 27 11:01:16.715: INFO: Pod name pod-release: Found 0 pods out of 1
Dec 27 11:01:21.747: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 11:01:23.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-4wqpf" for this suite.
Dec 27 11:01:35.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 11:01:36.567: INFO: namespace: e2e-tests-replication-controller-4wqpf, resource: bindings, ignored listing per whitelist
Dec 27 11:01:36.931: INFO: namespace e2e-tests-replication-controller-4wqpf deletion completed in 13.616012796s
• [SLOW TEST:20.463 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 11:01:36.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 11:01:48.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-cr6v4" for this suite.
Dec 27 11:02:12.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 11:02:12.708: INFO: namespace: e2e-tests-replication-controller-cr6v4, resource: bindings, ignored listing per whitelist
Dec 27 11:02:12.731: INFO: namespace e2e-tests-replication-controller-cr6v4 deletion completed in 24.423372773s
• [SLOW TEST:35.800 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 11:02:12.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1227 11:02:26.814100 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 27 11:02:26.814: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 11:02:26.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-rjhrb" for this suite.
Dec 27 11:02:42.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 11:02:42.996: INFO: namespace: e2e-tests-gc-rjhrb, resource: bindings, ignored listing per whitelist
Dec 27 11:02:43.100: INFO: namespace e2e-tests-gc-rjhrb deletion completed in 16.258709373s
• [SLOW TEST:30.368 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 11:02:43.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-694b81ab-2898-11ea-bad5-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 27 11:02:45.569: INFO: Waiting up to 5m0s for pod "pod-secrets-694df97b-2898-11ea-bad5-0242ac110005" in namespace "e2e-tests-secrets-j9qpx" to be "success or failure"
Dec 27 11:02:45.991: INFO: Pod "pod-secrets-694df97b-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 421.902671ms
Dec 27 11:02:49.760: INFO: Pod "pod-secrets-694df97b-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190325053s
Dec 27 11:02:51.772: INFO: Pod "pod-secrets-694df97b-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.202195199s
Dec 27 11:02:53.794: INFO: Pod "pod-secrets-694df97b-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.224978565s
Dec 27 11:02:56.283: INFO: Pod "pod-secrets-694df97b-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.713464509s
Dec 27 11:02:58.295: INFO: Pod "pod-secrets-694df97b-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.725782798s
Dec 27 11:03:00.389: INFO: Pod "pod-secrets-694df97b-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.819733115s
Dec 27 11:03:02.401: INFO: Pod "pod-secrets-694df97b-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.831159251s
Dec 27 11:03:04.434: INFO: Pod "pod-secrets-694df97b-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.864853847s
Dec 27 11:03:06.454: INFO: Pod "pod-secrets-694df97b-2898-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.8843385s
STEP: Saw pod success
Dec 27 11:03:06.454: INFO: Pod "pod-secrets-694df97b-2898-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 11:03:06.459: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-694df97b-2898-11ea-bad5-0242ac110005 container secret-volume-test:
STEP: delete the pod
Dec 27 11:03:06.984: INFO: Waiting for pod pod-secrets-694df97b-2898-11ea-bad5-0242ac110005 to disappear
Dec 27 11:03:07.015: INFO: Pod pod-secrets-694df97b-2898-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 11:03:07.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-j9qpx" for this suite.
Dec 27 11:03:13.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 11:03:13.174: INFO: namespace: e2e-tests-secrets-j9qpx, resource: bindings, ignored listing per whitelist
Dec 27 11:03:13.334: INFO: namespace e2e-tests-secrets-j9qpx deletion completed in 6.289352034s
• [SLOW TEST:30.234 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 11:03:13.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-7a120c78-2898-11ea-bad5-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 27 11:03:13.684: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7a150b02-2898-11ea-bad5-0242ac110005" in namespace "e2e-tests-projected-kfhcq" to be "success or failure"
Dec 27 11:03:13.700: INFO: Pod "pod-projected-secrets-7a150b02-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.164403ms
Dec 27 11:03:15.785: INFO: Pod "pod-projected-secrets-7a150b02-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100950593s
Dec 27 11:03:17.812: INFO: Pod "pod-projected-secrets-7a150b02-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.127913933s
Dec 27 11:03:20.023: INFO: Pod "pod-projected-secrets-7a150b02-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.33962883s
Dec 27 11:03:22.079: INFO: Pod "pod-projected-secrets-7a150b02-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.395531904s
Dec 27 11:03:24.416: INFO: Pod "pod-projected-secrets-7a150b02-2898-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.732221378s
STEP: Saw pod success
Dec 27 11:03:24.416: INFO: Pod "pod-projected-secrets-7a150b02-2898-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 11:03:24.690: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-7a150b02-2898-11ea-bad5-0242ac110005 container projected-secret-volume-test:
STEP: delete the pod
Dec 27 11:03:24.928: INFO: Waiting for pod pod-projected-secrets-7a150b02-2898-11ea-bad5-0242ac110005 to disappear
Dec 27 11:03:24.947: INFO: Pod pod-projected-secrets-7a150b02-2898-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 11:03:24.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kfhcq" for this suite.
Dec 27 11:03:30.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 11:03:31.084: INFO: namespace: e2e-tests-projected-kfhcq, resource: bindings, ignored listing per whitelist
Dec 27 11:03:31.134: INFO: namespace e2e-tests-projected-kfhcq deletion completed in 6.175035868s
• [SLOW TEST:17.800 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 11:03:31.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-84a21927-2898-11ea-bad5-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 27 11:03:31.456: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-84abb9e4-2898-11ea-bad5-0242ac110005" in namespace "e2e-tests-projected-nm8gd" to be "success or failure"
Dec 27 11:03:31.505: INFO: Pod "pod-projected-secrets-84abb9e4-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 48.199055ms
Dec 27 11:03:33.523: INFO: Pod "pod-projected-secrets-84abb9e4-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066256008s
Dec 27 11:03:35.536: INFO: Pod "pod-projected-secrets-84abb9e4-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079870971s
Dec 27 11:03:37.709: INFO: Pod "pod-projected-secrets-84abb9e4-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.252979725s
Dec 27 11:03:39.947: INFO: Pod "pod-projected-secrets-84abb9e4-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.491153028s
Dec 27 11:03:41.995: INFO: Pod "pod-projected-secrets-84abb9e4-2898-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.538590239s
STEP: Saw pod success
Dec 27 11:03:41.995: INFO: Pod "pod-projected-secrets-84abb9e4-2898-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 11:03:42.001: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-84abb9e4-2898-11ea-bad5-0242ac110005 container projected-secret-volume-test:
STEP: delete the pod
Dec 27 11:03:42.557: INFO: Waiting for pod pod-projected-secrets-84abb9e4-2898-11ea-bad5-0242ac110005 to disappear
Dec 27 11:03:42.581: INFO: Pod pod-projected-secrets-84abb9e4-2898-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 11:03:42.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nm8gd" for this suite.
Dec 27 11:03:50.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 11:03:50.663: INFO: namespace: e2e-tests-projected-nm8gd, resource: bindings, ignored listing per whitelist
Dec 27 11:03:50.747: INFO: namespace e2e-tests-projected-nm8gd deletion completed in 8.155549812s
• [SLOW TEST:19.613 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 11:03:50.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 27 11:03:50.916: INFO: Waiting up to 5m0s for pod "pod-9048fbec-2898-11ea-bad5-0242ac110005" in namespace "e2e-tests-emptydir-gqppc" to be "success or failure"
Dec 27 11:03:51.015: INFO: Pod "pod-9048fbec-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 99.530782ms
Dec 27 11:03:53.036: INFO: Pod "pod-9048fbec-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119832113s
Dec 27 11:03:55.051: INFO: Pod "pod-9048fbec-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135309127s
Dec 27 11:03:57.515: INFO: Pod "pod-9048fbec-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.598972408s
Dec 27 11:03:59.532: INFO: Pod "pod-9048fbec-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.615811726s
Dec 27 11:04:01.544: INFO: Pod "pod-9048fbec-2898-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.62846232s
STEP: Saw pod success
Dec 27 11:04:01.544: INFO: Pod "pod-9048fbec-2898-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 11:04:01.548: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-9048fbec-2898-11ea-bad5-0242ac110005 container test-container:
STEP: delete the pod
Dec 27 11:04:01.933: INFO: Waiting for pod pod-9048fbec-2898-11ea-bad5-0242ac110005 to disappear
Dec 27 11:04:01.944: INFO: Pod pod-9048fbec-2898-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 11:04:01.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-gqppc" for this suite.
Dec 27 11:04:10.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 11:04:10.159: INFO: namespace: e2e-tests-emptydir-gqppc, resource: bindings, ignored listing per whitelist
Dec 27 11:04:10.212: INFO: namespace e2e-tests-emptydir-gqppc deletion completed in 8.262199921s
• [SLOW TEST:19.465 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 11:04:10.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Dec 27 11:04:11.033: INFO: Waiting up to 5m0s for pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-gr6wx" in namespace "e2e-tests-svcaccounts-x2w9m" to be "success or failure"
Dec 27 11:04:11.057: INFO: Pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-gr6wx": Phase="Pending", Reason="", readiness=false. Elapsed: 24.331423ms
Dec 27 11:04:13.295: INFO: Pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-gr6wx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.261561809s
Dec 27 11:04:15.336: INFO: Pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-gr6wx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.303467533s
Dec 27 11:04:17.348: INFO: Pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-gr6wx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.31493156s
Dec 27 11:04:19.705: INFO: Pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-gr6wx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.671672016s
Dec 27 11:04:21.953: INFO: Pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-gr6wx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.919787863s
Dec 27 11:04:24.275: INFO: Pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-gr6wx": Phase="Pending", Reason="", readiness=false. Elapsed: 13.241804909s
Dec 27 11:04:26.284: INFO: Pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-gr6wx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.251426505s
STEP: Saw pod success
Dec 27 11:04:26.284: INFO: Pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-gr6wx" satisfied condition "success or failure"
Dec 27 11:04:26.288: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-gr6wx container token-test:
STEP: delete the pod
Dec 27 11:04:27.084: INFO: Waiting for pod pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-gr6wx to disappear
Dec 27 11:04:27.101: INFO: Pod pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-gr6wx no longer exists
STEP: Creating a pod to test consume service account root CA
Dec 27 11:04:27.115: INFO: Waiting up to 5m0s for pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-k4bqd" in namespace "e2e-tests-svcaccounts-x2w9m" to be "success or failure"
Dec 27 11:04:27.127: INFO: Pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-k4bqd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.275263ms
Dec 27 11:04:29.305: INFO: Pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-k4bqd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.189958176s
Dec 27 11:04:31.317: INFO: Pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-k4bqd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.202381058s
Dec 27 11:04:33.332: INFO: Pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-k4bqd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.21744088s
Dec 27 11:04:35.348: INFO: Pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-k4bqd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.233850032s
Dec 27 11:04:37.363: INFO: Pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-k4bqd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.248321894s
Dec 27 11:04:39.383: INFO: Pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-k4bqd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.26857059s
Dec 27 11:04:41.406: INFO: Pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-k4bqd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.291825828s
Dec 27 11:04:43.418: INFO: Pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-k4bqd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.303396229s
STEP: Saw pod success
Dec 27 11:04:43.418: INFO: Pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-k4bqd" satisfied condition "success or failure"
Dec 27 11:04:43.425: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-k4bqd container root-ca-test:
STEP: delete the pod
Dec 27 11:04:43.633: INFO: Waiting for pod pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-k4bqd to disappear
Dec 27 11:04:43.713: INFO: Pod pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-k4bqd no longer exists
STEP: Creating a pod to test consume service account namespace
Dec 27 11:04:43.755: INFO: Waiting up to 5m0s for pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-rf66c" in namespace "e2e-tests-svcaccounts-x2w9m" to be "success or failure"
Dec 27 11:04:43.771: INFO: Pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-rf66c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.351153ms
Dec 27 11:04:45.816: INFO: Pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-rf66c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06079681s
Dec 27 11:04:47.837: INFO: Pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-rf66c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082111495s
Dec 27 11:04:49.942: INFO: Pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-rf66c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.187031778s
Dec 27 11:04:52.055: INFO: Pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-rf66c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.299436111s
Dec 27 11:04:54.316: INFO: Pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-rf66c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.560678684s
Dec 27 11:04:56.547: INFO: Pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-rf66c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.791401192s
Dec 27 11:04:58.575: INFO: Pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-rf66c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.81954343s
Dec 27 11:05:00.588: INFO: Pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-rf66c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.832907565s
STEP: Saw pod success
Dec 27 11:05:00.588: INFO: Pod "pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-rf66c" satisfied condition "success or failure"
Dec 27 11:05:00.619: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-rf66c container namespace-test:
STEP: delete the pod
Dec 27 11:05:00.882: INFO: Waiting for pod pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-rf66c to disappear
Dec 27 11:05:00.900: INFO: Pod pod-service-account-9c45afd5-2898-11ea-bad5-0242ac110005-rf66c no longer exists
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 11:05:00.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-x2w9m" for this suite.
Dec 27 11:05:09.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 11:05:09.091: INFO: namespace: e2e-tests-svcaccounts-x2w9m, resource: bindings, ignored listing per whitelist
Dec 27 11:05:09.178: INFO: namespace e2e-tests-svcaccounts-x2w9m deletion completed in 8.263023732s
• [SLOW TEST:58.965 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 11:05:09.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 27 11:05:09.413: INFO: Waiting up to 5m0s for pod "pod-bf124abd-2898-11ea-bad5-0242ac110005" in namespace "e2e-tests-emptydir-dfl8g" to be "success or failure"
Dec 27 11:05:09.481: INFO: Pod "pod-bf124abd-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 68.36487ms
Dec 27 11:05:11.499: INFO: Pod "pod-bf124abd-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086203309s
Dec 27 11:05:13.515: INFO: Pod "pod-bf124abd-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102354666s
Dec 27 11:05:15.617: INFO: Pod "pod-bf124abd-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.20409366s
Dec 27 11:05:17.635: INFO: Pod "pod-bf124abd-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.222271207s
Dec 27 11:05:19.652: INFO: Pod "pod-bf124abd-2898-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.239258598s
STEP: Saw pod success
Dec 27 11:05:19.652: INFO: Pod "pod-bf124abd-2898-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 11:05:19.660: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-bf124abd-2898-11ea-bad5-0242ac110005 container test-container: 
STEP: delete the pod
Dec 27 11:05:20.145: INFO: Waiting for pod pod-bf124abd-2898-11ea-bad5-0242ac110005 to disappear
Dec 27 11:05:20.401: INFO: Pod pod-bf124abd-2898-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 11:05:20.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-dfl8g" for this suite.
Dec 27 11:05:26.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 11:05:26.814: INFO: namespace: e2e-tests-emptydir-dfl8g, resource: bindings, ignored listing per whitelist
Dec 27 11:05:26.867: INFO: namespace e2e-tests-emptydir-dfl8g deletion completed in 6.436558987s
• [SLOW TEST:17.689 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 11:05:26.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Dec 27 11:05:37.428: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 11:06:03.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-l2pl9" for this suite.
Dec 27 11:06:09.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 11:06:09.903: INFO: namespace: e2e-tests-namespaces-l2pl9, resource: bindings, ignored listing per whitelist
Dec 27 11:06:09.950: INFO: namespace e2e-tests-namespaces-l2pl9 deletion completed in 6.173995122s
STEP: Destroying namespace "e2e-tests-nsdeletetest-nw2mh" for this suite.
Dec 27 11:06:09.952: INFO: Namespace e2e-tests-nsdeletetest-nw2mh was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-wgx97" for this suite.
Dec 27 11:06:16.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 11:06:16.150: INFO: namespace: e2e-tests-nsdeletetest-wgx97, resource: bindings, ignored listing per whitelist
Dec 27 11:06:16.154: INFO: namespace e2e-tests-nsdeletetest-wgx97 deletion completed in 6.201533231s
• [SLOW TEST:49.286 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 11:06:16.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 27 11:06:16.318: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e6ed0868-2898-11ea-bad5-0242ac110005" in namespace "e2e-tests-downward-api-d4vfl" to be "success or failure"
Dec 27 11:06:16.339: INFO: Pod "downwardapi-volume-e6ed0868-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.810265ms
Dec 27 11:06:18.874: INFO: Pod "downwardapi-volume-e6ed0868-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.555577638s
Dec 27 11:06:20.906: INFO: Pod "downwardapi-volume-e6ed0868-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.587436185s
Dec 27 11:06:23.073: INFO: Pod "downwardapi-volume-e6ed0868-2898-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.754964667s
Dec 27 11:06:25.082: INFO: Pod "downwardapi-volume-e6ed0868-2898-11ea-bad5-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.763633845s
Dec 27 11:06:27.249: INFO: Pod "downwardapi-volume-e6ed0868-2898-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.930753192s
STEP: Saw pod success
Dec 27 11:06:27.249: INFO: Pod "downwardapi-volume-e6ed0868-2898-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 11:06:27.259: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e6ed0868-2898-11ea-bad5-0242ac110005 container client-container: 
STEP: delete the pod
Dec 27 11:06:27.462: INFO: Waiting for pod downwardapi-volume-e6ed0868-2898-11ea-bad5-0242ac110005 to disappear
Dec 27 11:06:27.500: INFO: Pod downwardapi-volume-e6ed0868-2898-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 11:06:27.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-d4vfl" for this suite.
Dec 27 11:06:33.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 11:06:33.656: INFO: namespace: e2e-tests-downward-api-d4vfl, resource: bindings, ignored listing per whitelist
Dec 27 11:06:33.781: INFO: namespace e2e-tests-downward-api-d4vfl deletion completed in 6.270669468s
• [SLOW TEST:17.627 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 11:06:33.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 11:07:30.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-hth2w" for this suite.
Dec 27 11:07:36.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 11:07:36.237: INFO: namespace: e2e-tests-container-runtime-hth2w, resource: bindings, ignored listing per whitelist
Dec 27 11:07:36.281: INFO: namespace e2e-tests-container-runtime-hth2w deletion completed in 6.17093273s
• [SLOW TEST:62.500 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
when starting a container that exits
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 11:07:36.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 27 11:07:36.610: INFO: Waiting up to 5m0s for pod "downwardapi-volume-16c56525-2899-11ea-bad5-0242ac110005" in namespace "e2e-tests-downward-api-kqskk" to be "success or failure"
Dec 27 11:07:36.702: INFO: Pod "downwardapi-volume-16c56525-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 91.425069ms
Dec 27 11:07:38.767: INFO: Pod "downwardapi-volume-16c56525-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156975891s
Dec 27 11:07:40.793: INFO: Pod "downwardapi-volume-16c56525-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.182765421s
Dec 27 11:07:43.006: INFO: Pod "downwardapi-volume-16c56525-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.39592422s
Dec 27 11:07:45.024: INFO: Pod "downwardapi-volume-16c56525-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.4142793s
Dec 27 11:07:47.047: INFO: Pod "downwardapi-volume-16c56525-2899-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.436323792s
STEP: Saw pod success
Dec 27 11:07:47.047: INFO: Pod "downwardapi-volume-16c56525-2899-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 11:07:47.057: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-16c56525-2899-11ea-bad5-0242ac110005 container client-container: 
STEP: delete the pod
Dec 27 11:07:47.276: INFO: Waiting for pod downwardapi-volume-16c56525-2899-11ea-bad5-0242ac110005 to disappear
Dec 27 11:07:47.295: INFO: Pod downwardapi-volume-16c56525-2899-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 11:07:47.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-kqskk" for this suite.
Dec 27 11:07:53.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 11:07:53.467: INFO: namespace: e2e-tests-downward-api-kqskk, resource: bindings, ignored listing per whitelist
Dec 27 11:07:53.524: INFO: namespace e2e-tests-downward-api-kqskk deletion completed in 6.214945298s
• [SLOW TEST:17.242 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 11:07:53.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 11:08:04.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-tzgqj" for this suite.
Dec 27 11:08:46.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 11:08:46.260: INFO: namespace: e2e-tests-kubelet-test-tzgqj, resource: bindings, ignored listing per whitelist
Dec 27 11:08:46.336: INFO: namespace e2e-tests-kubelet-test-tzgqj deletion completed in 42.260254893s
• [SLOW TEST:52.811 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when scheduling a read only busybox container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
should not write to root filesystem [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 11:08:46.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 27 11:08:46.649: INFO: Waiting up to 5m0s for pod "pod-408c6730-2899-11ea-bad5-0242ac110005" in namespace "e2e-tests-emptydir-qxm6f" to be "success or failure"
Dec 27 11:08:46.694: INFO: Pod "pod-408c6730-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 45.486445ms
Dec 27 11:08:48.739: INFO: Pod "pod-408c6730-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090441209s
Dec 27 11:08:50.810: INFO: Pod "pod-408c6730-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161464555s
Dec 27 11:08:52.843: INFO: Pod "pod-408c6730-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.19415715s
Dec 27 11:08:54.875: INFO: Pod "pod-408c6730-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.226062793s
Dec 27 11:08:56.891: INFO: Pod "pod-408c6730-2899-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.242133071s
STEP: Saw pod success
Dec 27 11:08:56.891: INFO: Pod "pod-408c6730-2899-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 11:08:56.898: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-408c6730-2899-11ea-bad5-0242ac110005 container test-container: 
STEP: delete the pod
Dec 27 11:08:57.228: INFO: Waiting for pod pod-408c6730-2899-11ea-bad5-0242ac110005 to disappear
Dec 27 11:08:57.241: INFO: Pod pod-408c6730-2899-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 11:08:57.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-qxm6f" for this suite.
Dec 27 11:09:03.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:09:03.518: INFO: namespace: e2e-tests-emptydir-qxm6f, resource: bindings, ignored listing per whitelist Dec 27 11:09:03.701: INFO: namespace e2e-tests-emptydir-qxm6f deletion completed in 6.400971512s • [SLOW TEST:17.365 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:09:03.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Dec 27 11:09:14.373: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-4aead305-2899-11ea-bad5-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-qjs8f", 
SelfLink:"/api/v1/namespaces/e2e-tests-pods-qjs8f/pods/pod-submit-remove-4aead305-2899-11ea-bad5-0242ac110005", UID:"4b00df93-2899-11ea-a994-fa163e34d433", ResourceVersion:"16227977", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713041744, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"time":"23235055", "name":"foo"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-7ccqb", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001a0a3c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), 
Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7ccqb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002136c88), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001606420), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002136fe0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002137020)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002137028), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00213702c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713041744, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713041752, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713041752, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713041744, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc0009d0b40), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0009d0b60), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", 
ContainerID:"docker://fb67bf04be7bb4f7ce4a2ed49dd11304411880b09be3a46aca156f18915ba334"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:09:20.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-qjs8f" for this suite. Dec 27 11:09:27.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:09:27.083: INFO: namespace: e2e-tests-pods-qjs8f, resource: bindings, ignored listing per whitelist Dec 27 11:09:27.184: INFO: namespace e2e-tests-pods-qjs8f deletion completed in 6.196314625s • [SLOW TEST:23.483 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:09:27.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-dxhd STEP: Creating a pod to test atomic-volume-subpath Dec 27 11:09:27.422: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-dxhd" in namespace "e2e-tests-subpath-ts747" to be "success or failure" Dec 27 11:09:27.505: INFO: Pod "pod-subpath-test-projected-dxhd": Phase="Pending", Reason="", readiness=false. Elapsed: 82.630413ms Dec 27 11:09:29.559: INFO: Pod "pod-subpath-test-projected-dxhd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137110927s Dec 27 11:09:31.571: INFO: Pod "pod-subpath-test-projected-dxhd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148636388s Dec 27 11:09:33.836: INFO: Pod "pod-subpath-test-projected-dxhd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.413188455s Dec 27 11:09:35.862: INFO: Pod "pod-subpath-test-projected-dxhd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.439338279s Dec 27 11:09:37.880: INFO: Pod "pod-subpath-test-projected-dxhd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.457659516s Dec 27 11:09:39.913: INFO: Pod "pod-subpath-test-projected-dxhd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.490301681s Dec 27 11:09:41.937: INFO: Pod "pod-subpath-test-projected-dxhd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.514364451s Dec 27 11:09:44.026: INFO: Pod "pod-subpath-test-projected-dxhd": Phase="Running", Reason="", readiness=true. Elapsed: 16.603651902s Dec 27 11:09:46.039: INFO: Pod "pod-subpath-test-projected-dxhd": Phase="Running", Reason="", readiness=false. Elapsed: 18.616985144s Dec 27 11:09:48.053: INFO: Pod "pod-subpath-test-projected-dxhd": Phase="Running", Reason="", readiness=false. Elapsed: 20.630574689s Dec 27 11:09:50.067: INFO: Pod "pod-subpath-test-projected-dxhd": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.644287076s Dec 27 11:09:52.083: INFO: Pod "pod-subpath-test-projected-dxhd": Phase="Running", Reason="", readiness=false. Elapsed: 24.660544576s Dec 27 11:09:54.209: INFO: Pod "pod-subpath-test-projected-dxhd": Phase="Running", Reason="", readiness=false. Elapsed: 26.786188221s Dec 27 11:09:56.225: INFO: Pod "pod-subpath-test-projected-dxhd": Phase="Running", Reason="", readiness=false. Elapsed: 28.802631247s Dec 27 11:09:58.251: INFO: Pod "pod-subpath-test-projected-dxhd": Phase="Running", Reason="", readiness=false. Elapsed: 30.828535142s Dec 27 11:10:00.415: INFO: Pod "pod-subpath-test-projected-dxhd": Phase="Running", Reason="", readiness=false. Elapsed: 32.992642671s Dec 27 11:10:02.449: INFO: Pod "pod-subpath-test-projected-dxhd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.026403679s STEP: Saw pod success Dec 27 11:10:02.449: INFO: Pod "pod-subpath-test-projected-dxhd" satisfied condition "success or failure" Dec 27 11:10:02.468: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-dxhd container test-container-subpath-projected-dxhd: STEP: delete the pod Dec 27 11:10:02.632: INFO: Waiting for pod pod-subpath-test-projected-dxhd to disappear Dec 27 11:10:02.641: INFO: Pod pod-subpath-test-projected-dxhd no longer exists STEP: Deleting pod pod-subpath-test-projected-dxhd Dec 27 11:10:02.641: INFO: Deleting pod "pod-subpath-test-projected-dxhd" in namespace "e2e-tests-subpath-ts747" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:10:02.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-ts747" for this suite. 
Dec 27 11:10:08.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:10:08.801: INFO: namespace: e2e-tests-subpath-ts747, resource: bindings, ignored listing per whitelist Dec 27 11:10:08.902: INFO: namespace e2e-tests-subpath-ts747 deletion completed in 6.245878243s • [SLOW TEST:41.717 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:10:08.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-5xp99/configmap-test-71c711dc-2899-11ea-bad5-0242ac110005 STEP: Creating a pod to test consume configMaps Dec 27 11:10:09.248: INFO: Waiting up to 5m0s for pod "pod-configmaps-71c8a405-2899-11ea-bad5-0242ac110005" in namespace "e2e-tests-configmap-5xp99" to be "success or failure" Dec 27 11:10:09.265: INFO: Pod "pod-configmaps-71c8a405-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", 
readiness=false. Elapsed: 17.118508ms Dec 27 11:10:11.536: INFO: Pod "pod-configmaps-71c8a405-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288311655s Dec 27 11:10:14.072: INFO: Pod "pod-configmaps-71c8a405-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.824237347s Dec 27 11:10:16.088: INFO: Pod "pod-configmaps-71c8a405-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.840269815s Dec 27 11:10:18.105: INFO: Pod "pod-configmaps-71c8a405-2899-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.857231964s STEP: Saw pod success Dec 27 11:10:18.105: INFO: Pod "pod-configmaps-71c8a405-2899-11ea-bad5-0242ac110005" satisfied condition "success or failure" Dec 27 11:10:18.111: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-71c8a405-2899-11ea-bad5-0242ac110005 container env-test: STEP: delete the pod Dec 27 11:10:18.370: INFO: Waiting for pod pod-configmaps-71c8a405-2899-11ea-bad5-0242ac110005 to disappear Dec 27 11:10:18.379: INFO: Pod pod-configmaps-71c8a405-2899-11ea-bad5-0242ac110005 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:10:18.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-5xp99" for this suite. 
Dec 27 11:10:24.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:10:24.670: INFO: namespace: e2e-tests-configmap-5xp99, resource: bindings, ignored listing per whitelist Dec 27 11:10:24.755: INFO: namespace e2e-tests-configmap-5xp99 deletion completed in 6.370574837s • [SLOW TEST:15.852 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:10:24.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Dec 27 11:10:24.975: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b271583-2899-11ea-bad5-0242ac110005" in namespace "e2e-tests-downward-api-9xw5f" to be "success or failure" Dec 27 11:10:25.133: INFO: Pod "downwardapi-volume-7b271583-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 157.837302ms Dec 27 11:10:27.147: INFO: Pod "downwardapi-volume-7b271583-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.171524542s Dec 27 11:10:29.158: INFO: Pod "downwardapi-volume-7b271583-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.182367205s Dec 27 11:10:31.753: INFO: Pod "downwardapi-volume-7b271583-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.777483606s Dec 27 11:10:33.770: INFO: Pod "downwardapi-volume-7b271583-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.794619799s Dec 27 11:10:35.785: INFO: Pod "downwardapi-volume-7b271583-2899-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.809362486s STEP: Saw pod success Dec 27 11:10:35.785: INFO: Pod "downwardapi-volume-7b271583-2899-11ea-bad5-0242ac110005" satisfied condition "success or failure" Dec 27 11:10:35.791: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-7b271583-2899-11ea-bad5-0242ac110005 container client-container: STEP: delete the pod Dec 27 11:10:36.008: INFO: Waiting for pod downwardapi-volume-7b271583-2899-11ea-bad5-0242ac110005 to disappear Dec 27 11:10:36.054: INFO: Pod downwardapi-volume-7b271583-2899-11ea-bad5-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:10:36.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-9xw5f" for this suite. 
Dec 27 11:10:42.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:10:42.238: INFO: namespace: e2e-tests-downward-api-9xw5f, resource: bindings, ignored listing per whitelist Dec 27 11:10:42.268: INFO: namespace e2e-tests-downward-api-9xw5f deletion completed in 6.203239648s • [SLOW TEST:17.513 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:10:42.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Dec 27 11:10:42.457: INFO: Waiting up to 5m0s for pod "downwardapi-volume-858b28e8-2899-11ea-bad5-0242ac110005" in namespace "e2e-tests-projected-z5qf4" to be "success or failure" Dec 
27 11:10:42.478: INFO: Pod "downwardapi-volume-858b28e8-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.386734ms Dec 27 11:10:44.523: INFO: Pod "downwardapi-volume-858b28e8-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065877548s Dec 27 11:10:46.562: INFO: Pod "downwardapi-volume-858b28e8-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104817844s Dec 27 11:10:48.899: INFO: Pod "downwardapi-volume-858b28e8-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.442271096s Dec 27 11:10:51.034: INFO: Pod "downwardapi-volume-858b28e8-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.577583834s Dec 27 11:10:53.047: INFO: Pod "downwardapi-volume-858b28e8-2899-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.589801789s STEP: Saw pod success Dec 27 11:10:53.047: INFO: Pod "downwardapi-volume-858b28e8-2899-11ea-bad5-0242ac110005" satisfied condition "success or failure" Dec 27 11:10:53.052: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-858b28e8-2899-11ea-bad5-0242ac110005 container client-container: STEP: delete the pod Dec 27 11:10:53.194: INFO: Waiting for pod downwardapi-volume-858b28e8-2899-11ea-bad5-0242ac110005 to disappear Dec 27 11:10:53.213: INFO: Pod downwardapi-volume-858b28e8-2899-11ea-bad5-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:10:53.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-z5qf4" for this suite. 
Dec 27 11:10:59.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:10:59.517: INFO: namespace: e2e-tests-projected-z5qf4, resource: bindings, ignored listing per whitelist Dec 27 11:10:59.592: INFO: namespace e2e-tests-projected-z5qf4 deletion completed in 6.36942012s • [SLOW TEST:17.324 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:10:59.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Dec 27 11:10:59.834: INFO: Waiting up to 5m0s for pod "downward-api-8fee568d-2899-11ea-bad5-0242ac110005" in namespace "e2e-tests-downward-api-75xkc" to be "success or failure" Dec 27 11:10:59.850: INFO: Pod "downward-api-8fee568d-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.162693ms Dec 27 11:11:01.879: INFO: Pod "downward-api-8fee568d-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04481288s Dec 27 11:11:03.930: INFO: Pod "downward-api-8fee568d-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095578777s Dec 27 11:11:05.951: INFO: Pod "downward-api-8fee568d-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116469797s Dec 27 11:11:08.439: INFO: Pod "downward-api-8fee568d-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.604635142s Dec 27 11:11:10.454: INFO: Pod "downward-api-8fee568d-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.620026657s Dec 27 11:11:12.474: INFO: Pod "downward-api-8fee568d-2899-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.639703703s STEP: Saw pod success Dec 27 11:11:12.474: INFO: Pod "downward-api-8fee568d-2899-11ea-bad5-0242ac110005" satisfied condition "success or failure" Dec 27 11:11:12.488: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-8fee568d-2899-11ea-bad5-0242ac110005 container dapi-container: STEP: delete the pod Dec 27 11:11:12.728: INFO: Waiting for pod downward-api-8fee568d-2899-11ea-bad5-0242ac110005 to disappear Dec 27 11:11:13.015: INFO: Pod downward-api-8fee568d-2899-11ea-bad5-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:11:13.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-75xkc" for this suite. 
Dec 27 11:11:19.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:11:19.274: INFO: namespace: e2e-tests-downward-api-75xkc, resource: bindings, ignored listing per whitelist Dec 27 11:11:19.277: INFO: namespace e2e-tests-downward-api-75xkc deletion completed in 6.248181971s • [SLOW TEST:19.684 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:11:19.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-9bcf838d-2899-11ea-bad5-0242ac110005 STEP: Creating a pod to test consume configMaps Dec 27 11:11:19.848: INFO: Waiting up to 5m0s for pod "pod-configmaps-9bd1a7ca-2899-11ea-bad5-0242ac110005" in namespace "e2e-tests-configmap-stj75" to be "success or failure" Dec 27 11:11:19.865: INFO: Pod "pod-configmaps-9bd1a7ca-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", 
readiness=false. Elapsed: 17.394792ms Dec 27 11:11:22.014: INFO: Pod "pod-configmaps-9bd1a7ca-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16648489s Dec 27 11:11:24.037: INFO: Pod "pod-configmaps-9bd1a7ca-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.18875727s Dec 27 11:11:26.053: INFO: Pod "pod-configmaps-9bd1a7ca-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.204916134s Dec 27 11:11:29.048: INFO: Pod "pod-configmaps-9bd1a7ca-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.19992762s Dec 27 11:11:31.076: INFO: Pod "pod-configmaps-9bd1a7ca-2899-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.227811152s STEP: Saw pod success Dec 27 11:11:31.076: INFO: Pod "pod-configmaps-9bd1a7ca-2899-11ea-bad5-0242ac110005" satisfied condition "success or failure" Dec 27 11:11:31.090: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-9bd1a7ca-2899-11ea-bad5-0242ac110005 container configmap-volume-test: STEP: delete the pod Dec 27 11:11:31.216: INFO: Waiting for pod pod-configmaps-9bd1a7ca-2899-11ea-bad5-0242ac110005 to disappear Dec 27 11:11:31.235: INFO: Pod pod-configmaps-9bd1a7ca-2899-11ea-bad5-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:11:31.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-stj75" for this suite. 
Dec 27 11:11:37.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:11:37.383: INFO: namespace: e2e-tests-configmap-stj75, resource: bindings, ignored listing per whitelist Dec 27 11:11:37.660: INFO: namespace e2e-tests-configmap-stj75 deletion completed in 6.416205641s • [SLOW TEST:18.383 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:11:37.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Dec 27 11:11:37.912: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a6a1dc55-2899-11ea-bad5-0242ac110005" in namespace "e2e-tests-downward-api-jm9gf" to be "success or failure" Dec 27 11:11:37.925: INFO: Pod "downwardapi-volume-a6a1dc55-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.838088ms Dec 27 11:11:40.274: INFO: Pod "downwardapi-volume-a6a1dc55-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.362036684s Dec 27 11:11:42.290: INFO: Pod "downwardapi-volume-a6a1dc55-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.378402355s Dec 27 11:11:44.305: INFO: Pod "downwardapi-volume-a6a1dc55-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.39351298s Dec 27 11:11:46.379: INFO: Pod "downwardapi-volume-a6a1dc55-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.466644389s Dec 27 11:11:48.393: INFO: Pod "downwardapi-volume-a6a1dc55-2899-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.480744384s STEP: Saw pod success Dec 27 11:11:48.393: INFO: Pod "downwardapi-volume-a6a1dc55-2899-11ea-bad5-0242ac110005" satisfied condition "success or failure" Dec 27 11:11:48.429: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a6a1dc55-2899-11ea-bad5-0242ac110005 container client-container: STEP: delete the pod Dec 27 11:11:48.634: INFO: Waiting for pod downwardapi-volume-a6a1dc55-2899-11ea-bad5-0242ac110005 to disappear Dec 27 11:11:48.642: INFO: Pod downwardapi-volume-a6a1dc55-2899-11ea-bad5-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:11:48.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-jm9gf" for this suite. 
Dec 27 11:11:54.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:11:54.957: INFO: namespace: e2e-tests-downward-api-jm9gf, resource: bindings, ignored listing per whitelist Dec 27 11:11:55.042: INFO: namespace e2e-tests-downward-api-jm9gf deletion completed in 6.39249236s • [SLOW TEST:17.381 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:11:55.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-djgfq STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-djgfq to expose endpoints map[] Dec 27 11:11:55.696: INFO: Get endpoints failed (96.240117ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Dec 27 11:11:56.710: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-djgfq exposes 
endpoints map[] (1.11057396s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-djgfq STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-djgfq to expose endpoints map[pod1:[80]] Dec 27 11:12:01.472: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.739733522s elapsed, will retry) Dec 27 11:12:06.921: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-djgfq exposes endpoints map[pod1:[80]] (10.18944747s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-djgfq STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-djgfq to expose endpoints map[pod1:[80] pod2:[80]] Dec 27 11:12:12.019: INFO: Unexpected endpoints: found map[b1da02d7-2899-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (5.075198629s elapsed, will retry) Dec 27 11:12:16.364: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-djgfq exposes endpoints map[pod1:[80] pod2:[80]] (9.420323053s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-djgfq STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-djgfq to expose endpoints map[pod2:[80]] Dec 27 11:12:17.593: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-djgfq exposes endpoints map[pod2:[80]] (1.221470141s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-djgfq STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-djgfq to expose endpoints map[] Dec 27 11:12:17.806: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-djgfq exposes endpoints map[] (197.358477ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:12:17.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "e2e-tests-services-djgfq" for this suite. Dec 27 11:12:42.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:12:42.091: INFO: namespace: e2e-tests-services-djgfq, resource: bindings, ignored listing per whitelist Dec 27 11:12:42.175: INFO: namespace e2e-tests-services-djgfq deletion completed in 24.173352319s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:47.133 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:12:42.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-cd0e5d34-2899-11ea-bad5-0242ac110005 STEP: Creating a pod to test consume secrets Dec 27 11:12:42.375: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cd0f8129-2899-11ea-bad5-0242ac110005" in namespace "e2e-tests-projected-8cqpl" to be "success or failure" Dec 27 11:12:42.387: INFO: Pod 
"pod-projected-secrets-cd0f8129-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.193223ms Dec 27 11:12:44.482: INFO: Pod "pod-projected-secrets-cd0f8129-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106274123s Dec 27 11:12:46.527: INFO: Pod "pod-projected-secrets-cd0f8129-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152120571s Dec 27 11:12:48.932: INFO: Pod "pod-projected-secrets-cd0f8129-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.556985733s Dec 27 11:12:50.955: INFO: Pod "pod-projected-secrets-cd0f8129-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.57944007s Dec 27 11:12:52.972: INFO: Pod "pod-projected-secrets-cd0f8129-2899-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.596610987s STEP: Saw pod success Dec 27 11:12:52.972: INFO: Pod "pod-projected-secrets-cd0f8129-2899-11ea-bad5-0242ac110005" satisfied condition "success or failure" Dec 27 11:12:52.978: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-cd0f8129-2899-11ea-bad5-0242ac110005 container secret-volume-test: STEP: delete the pod Dec 27 11:12:53.091: INFO: Waiting for pod pod-projected-secrets-cd0f8129-2899-11ea-bad5-0242ac110005 to disappear Dec 27 11:12:53.171: INFO: Pod pod-projected-secrets-cd0f8129-2899-11ea-bad5-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:12:53.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8cqpl" for this suite. 
Dec 27 11:12:59.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:12:59.317: INFO: namespace: e2e-tests-projected-8cqpl, resource: bindings, ignored listing per whitelist Dec 27 11:12:59.380: INFO: namespace e2e-tests-projected-8cqpl deletion completed in 6.199020783s • [SLOW TEST:17.204 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:12:59.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-d7567d43-2899-11ea-bad5-0242ac110005 STEP: Creating a pod to test consume configMaps Dec 27 11:12:59.635: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d758507b-2899-11ea-bad5-0242ac110005" in namespace "e2e-tests-projected-65xg5" to be "success or failure" Dec 27 11:12:59.644: INFO: Pod "pod-projected-configmaps-d758507b-2899-11ea-bad5-0242ac110005": Phase="Pending", 
Reason="", readiness=false. Elapsed: 8.890705ms Dec 27 11:13:01.917: INFO: Pod "pod-projected-configmaps-d758507b-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.281509676s Dec 27 11:13:03.943: INFO: Pod "pod-projected-configmaps-d758507b-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308143435s Dec 27 11:13:05.956: INFO: Pod "pod-projected-configmaps-d758507b-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.32085144s Dec 27 11:13:08.510: INFO: Pod "pod-projected-configmaps-d758507b-2899-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.874592198s Dec 27 11:13:10.537: INFO: Pod "pod-projected-configmaps-d758507b-2899-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.902071468s STEP: Saw pod success Dec 27 11:13:10.537: INFO: Pod "pod-projected-configmaps-d758507b-2899-11ea-bad5-0242ac110005" satisfied condition "success or failure" Dec 27 11:13:10.549: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-d758507b-2899-11ea-bad5-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Dec 27 11:13:10.787: INFO: Waiting for pod pod-projected-configmaps-d758507b-2899-11ea-bad5-0242ac110005 to disappear Dec 27 11:13:10.831: INFO: Pod pod-projected-configmaps-d758507b-2899-11ea-bad5-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:13:10.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-65xg5" for this suite. 
Dec 27 11:13:16.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:13:17.159: INFO: namespace: e2e-tests-projected-65xg5, resource: bindings, ignored listing per whitelist Dec 27 11:13:17.218: INFO: namespace e2e-tests-projected-65xg5 deletion completed in 6.37934032s • [SLOW TEST:17.838 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:13:17.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Dec 27 11:13:17.412: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:13:32.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-472c4" for this suite. Dec 27 11:13:41.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:13:41.301: INFO: namespace: e2e-tests-init-container-472c4, resource: bindings, ignored listing per whitelist Dec 27 11:13:41.324: INFO: namespace e2e-tests-init-container-472c4 deletion completed in 8.261987261s • [SLOW TEST:24.105 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:13:41.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:13:51.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-sjgws" for this suite. Dec 27 11:14:47.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:14:47.758: INFO: namespace: e2e-tests-kubelet-test-sjgws, resource: bindings, ignored listing per whitelist Dec 27 11:14:47.930: INFO: namespace e2e-tests-kubelet-test-sjgws deletion completed in 56.290375437s • [SLOW TEST:66.606 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:14:47.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-7lh99 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Dec 27 11:14:48.400: INFO: Found 0 stateful pods, waiting for 3 Dec 27 11:14:58.419: INFO: Found 2 stateful pods, waiting for 3 Dec 27 11:15:08.418: INFO: Found 2 stateful pods, waiting for 3 Dec 27 11:15:18.441: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 27 11:15:18.441: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 27 11:15:18.441: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Dec 27 11:15:28.424: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 27 11:15:28.424: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 27 11:15:28.424: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Dec 27 11:15:28.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7lh99 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 27 11:15:29.179: INFO: stderr: "" Dec 27 11:15:29.179: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 27 11:15:29.179: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Dec 27 11:15:39.254: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal 
order Dec 27 11:15:49.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7lh99 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 27 11:15:50.030: INFO: stderr: "" Dec 27 11:15:50.030: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 27 11:15:50.030: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 27 11:16:00.139: INFO: Waiting for StatefulSet e2e-tests-statefulset-7lh99/ss2 to complete update Dec 27 11:16:00.139: INFO: Waiting for Pod e2e-tests-statefulset-7lh99/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 27 11:16:00.139: INFO: Waiting for Pod e2e-tests-statefulset-7lh99/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 27 11:16:10.736: INFO: Waiting for StatefulSet e2e-tests-statefulset-7lh99/ss2 to complete update Dec 27 11:16:10.736: INFO: Waiting for Pod e2e-tests-statefulset-7lh99/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 27 11:16:10.736: INFO: Waiting for Pod e2e-tests-statefulset-7lh99/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 27 11:16:20.552: INFO: Waiting for StatefulSet e2e-tests-statefulset-7lh99/ss2 to complete update Dec 27 11:16:20.552: INFO: Waiting for Pod e2e-tests-statefulset-7lh99/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 27 11:16:30.173: INFO: Waiting for StatefulSet e2e-tests-statefulset-7lh99/ss2 to complete update Dec 27 11:16:30.173: INFO: Waiting for Pod e2e-tests-statefulset-7lh99/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 27 11:16:40.162: INFO: Waiting for StatefulSet e2e-tests-statefulset-7lh99/ss2 to complete update STEP: Rolling back to a previous revision Dec 27 11:16:50.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-7lh99 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 27 11:16:50.940: INFO: stderr: "" Dec 27 11:16:50.940: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 27 11:16:50.940: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 27 11:17:01.064: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Dec 27 11:17:11.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7lh99 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 27 11:17:12.562: INFO: stderr: "" Dec 27 11:17:12.563: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 27 11:17:12.563: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 27 11:17:22.669: INFO: Waiting for StatefulSet e2e-tests-statefulset-7lh99/ss2 to complete update Dec 27 11:17:22.669: INFO: Waiting for Pod e2e-tests-statefulset-7lh99/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 27 11:17:22.669: INFO: Waiting for Pod e2e-tests-statefulset-7lh99/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 27 11:17:32.685: INFO: Waiting for StatefulSet e2e-tests-statefulset-7lh99/ss2 to complete update Dec 27 11:17:32.685: INFO: Waiting for Pod e2e-tests-statefulset-7lh99/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 27 11:17:32.685: INFO: Waiting for Pod e2e-tests-statefulset-7lh99/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 27 11:17:52.761: INFO: Waiting for StatefulSet e2e-tests-statefulset-7lh99/ss2 to complete update Dec 27 11:17:52.761: INFO: Waiting for Pod e2e-tests-statefulset-7lh99/ss2-0 to have revision ss2-7c9b54fd4c update revision 
ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Dec 27 11:18:12.688: INFO: Deleting all statefulset in ns e2e-tests-statefulset-7lh99 Dec 27 11:18:12.692: INFO: Scaling statefulset ss2 to 0 Dec 27 11:18:42.739: INFO: Waiting for statefulset status.replicas updated to 0 Dec 27 11:18:42.743: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:18:42.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-7lh99" for this suite. Dec 27 11:18:52.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:18:53.033: INFO: namespace: e2e-tests-statefulset-7lh99, resource: bindings, ignored listing per whitelist Dec 27 11:18:53.060: INFO: namespace e2e-tests-statefulset-7lh99 deletion completed in 10.214614216s • [SLOW TEST:245.129 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client Dec 27 11:18:53.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Dec 27 11:18:53.275: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aa20008b-289a-11ea-bad5-0242ac110005" in namespace "e2e-tests-projected-jnwks" to be "success or failure" Dec 27 11:18:53.567: INFO: Pod "downwardapi-volume-aa20008b-289a-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 292.112319ms Dec 27 11:18:55.577: INFO: Pod "downwardapi-volume-aa20008b-289a-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.301884914s Dec 27 11:18:57.604: INFO: Pod "downwardapi-volume-aa20008b-289a-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329117745s Dec 27 11:19:00.266: INFO: Pod "downwardapi-volume-aa20008b-289a-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.990646619s Dec 27 11:19:02.336: INFO: Pod "downwardapi-volume-aa20008b-289a-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.061057511s Dec 27 11:19:04.347: INFO: Pod "downwardapi-volume-aa20008b-289a-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.072178962s STEP: Saw pod success Dec 27 11:19:04.347: INFO: Pod "downwardapi-volume-aa20008b-289a-11ea-bad5-0242ac110005" satisfied condition "success or failure" Dec 27 11:19:04.351: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-aa20008b-289a-11ea-bad5-0242ac110005 container client-container: STEP: delete the pod Dec 27 11:19:04.635: INFO: Waiting for pod downwardapi-volume-aa20008b-289a-11ea-bad5-0242ac110005 to disappear Dec 27 11:19:04.718: INFO: Pod downwardapi-volume-aa20008b-289a-11ea-bad5-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:19:04.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-jnwks" for this suite. Dec 27 11:19:10.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:19:10.614: INFO: namespace: e2e-tests-projected-jnwks, resource: bindings, ignored listing per whitelist Dec 27 11:19:10.646: INFO: namespace e2e-tests-projected-jnwks deletion completed in 5.91695435s • [SLOW TEST:17.586 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 
11:19:10.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components Dec 27 11:19:10.817: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Dec 27 11:19:10.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-skh4m' Dec 27 11:19:13.119: INFO: stderr: "" Dec 27 11:19:13.119: INFO: stdout: "service/redis-slave created\n" Dec 27 11:19:13.119: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Dec 27 11:19:13.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-skh4m' Dec 27 11:19:13.546: INFO: stderr: "" Dec 27 11:19:13.546: INFO: stdout: "service/redis-master created\n" Dec 27 11:19:13.547: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Dec 27 11:19:13.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-skh4m' Dec 27 11:19:14.049: INFO: stderr: "" Dec 27 11:19:14.049: INFO: stdout: "service/frontend created\n" Dec 27 11:19:14.050: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Dec 27 11:19:14.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-skh4m' Dec 27 11:19:14.526: INFO: stderr: "" Dec 27 11:19:14.526: INFO: stdout: "deployment.extensions/frontend created\n" Dec 27 11:19:14.527: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Dec 27 11:19:14.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-skh4m' Dec 27 11:19:15.015: INFO: stderr: "" Dec 27 11:19:15.015: INFO: stdout: "deployment.extensions/redis-master created\n" Dec 27 11:19:15.016: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: 
gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Dec 27 11:19:15.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-skh4m' Dec 27 11:19:15.508: INFO: stderr: "" Dec 27 11:19:15.508: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app Dec 27 11:19:15.508: INFO: Waiting for all frontend pods to be Running. Dec 27 11:19:45.559: INFO: Waiting for frontend to serve content. Dec 27 11:19:48.304: INFO: Trying to add a new entry to the guestbook. Dec 27 11:19:48.330: INFO: Verifying that added entry can be retrieved. Dec 27 11:19:48.359: INFO: Failed to get response from guestbook. err: , response: {"data": ""} STEP: using delete to clean up resources Dec 27 11:19:53.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-skh4m' Dec 27 11:19:54.103: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 27 11:19:54.103: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Dec 27 11:19:54.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-skh4m' Dec 27 11:19:54.502: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Dec 27 11:19:54.502: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Dec 27 11:19:54.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-skh4m' Dec 27 11:19:54.769: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 27 11:19:54.769: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Dec 27 11:19:54.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-skh4m' Dec 27 11:19:54.910: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 27 11:19:54.910: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Dec 27 11:19:54.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-skh4m' Dec 27 11:19:57.840: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 27 11:19:57.840: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Dec 27 11:19:57.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-skh4m' Dec 27 11:19:58.157: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Dec 27 11:19:58.157: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:19:58.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-skh4m" for this suite. Dec 27 11:22:08.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:22:08.686: INFO: namespace: e2e-tests-kubectl-skh4m, resource: bindings, ignored listing per whitelist Dec 27 11:22:08.764: INFO: namespace e2e-tests-kubectl-skh4m deletion completed in 2m10.450894575s • [SLOW TEST:178.117 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:22:08.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 27 11:22:09.146: INFO: Pod name rollover-pod: Found 0 pods out of 1 Dec 27 11:22:14.157: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Dec 27 11:22:38.166: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Dec 27 11:22:40.954: INFO: Creating deployment "test-rollover-deployment" Dec 27 11:22:44.026: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Dec 27 11:23:17.046: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Dec 27 11:23:19.794: INFO: Ensure that both replica sets have 1 created replica Dec 27 11:23:20.835: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Dec 27 11:23:20.883: INFO: Updating deployment test-rollover-deployment Dec 27 11:23:20.884: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Dec 27 11:24:11.570: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Dec 27 11:24:11.576: INFO: Make sure deployment "test-rollover-deployment" is complete Dec 27 11:24:11.581: INFO: all replica sets need to contain the pod-template-hash label Dec 27 11:24:11.581: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63713042650, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 27 11:24:13.599: INFO: all replica sets need to contain the pod-template-hash label Dec 27 11:24:13.599: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042650, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 27 11:24:15.595: INFO: all replica sets need to contain the pod-template-hash label Dec 27 11:24:15.595: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042650, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 27 11:24:17.994: INFO: all replica sets need to contain the pod-template-hash label Dec 27 11:24:17.994: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042650, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 27 11:24:20.174: INFO: all replica sets need to contain the pod-template-hash label Dec 27 11:24:20.174: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, 
loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042650, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 27 11:24:21.602: INFO: all replica sets need to contain the pod-template-hash label Dec 27 11:24:21.602: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042650, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 27 11:24:26.046: INFO: all replica sets need to contain the pod-template-hash label Dec 27 11:24:26.046: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, 
loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042650, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 27 11:24:27.595: INFO: all replica sets need to contain the pod-template-hash label Dec 27 11:24:27.595: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042666, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 27 11:24:29.596: INFO: all replica sets need to contain the pod-template-hash label Dec 27 11:24:29.596: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042666, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 27 11:24:33.168: INFO: all replica sets need to contain the pod-template-hash label Dec 27 11:24:33.168: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042666, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 27 11:24:33.606: INFO: all replica sets need to contain the pod-template-hash label Dec 27 11:24:33.606: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042666, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713042564, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 27 11:24:36.156: INFO: Dec 27 11:24:36.156: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Dec 27 11:24:36.560: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-gpcrs,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gpcrs/deployments/test-rollover-deployment,UID:31d9cc5d-289b-11ea-a994-fa163e34d433,ResourceVersion:16229913,Generation:2,CreationTimestamp:2019-12-27 11:22:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-27 11:22:44 +0000 UTC 2019-12-27 11:22:44 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} 
{Progressing True 2019-12-27 11:24:34 +0000 UTC 2019-12-27 11:22:44 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Dec 27 11:24:36.581: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-gpcrs,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gpcrs/replicasets/test-rollover-deployment-5b8479fdb6,UID:66a139f9-289b-11ea-a994-fa163e34d433,ResourceVersion:16229904,Generation:2,CreationTimestamp:2019-12-27 11:24:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 31d9cc5d-289b-11ea-a994-fa163e34d433 0xc00157c777 0xc00157c778}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log 
File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Dec 27 11:24:36.581: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Dec 27 11:24:36.582: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-gpcrs,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gpcrs/replicasets/test-rollover-controller,UID:1ed14ccb-289b-11ea-a994-fa163e34d433,ResourceVersion:16229912,Generation:2,CreationTimestamp:2019-12-27 11:22:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 31d9cc5d-289b-11ea-a994-fa163e34d433 0xc00157c217 
0xc00157c218}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 27 11:24:36.582: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-gpcrs,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gpcrs/replicasets/test-rollover-deployment-58494b7559,UID:33be2b18-289b-11ea-a994-fa163e34d433,ResourceVersion:16229863,Generation:2,CreationTimestamp:2019-12-27 11:22:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 31d9cc5d-289b-11ea-a994-fa163e34d433 0xc00157c3c7 0xc00157c3c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 27 11:24:36.602: INFO: Pod "test-rollover-deployment-5b8479fdb6-m8lls" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-m8lls,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-gpcrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gpcrs/pods/test-rollover-deployment-5b8479fdb6-m8lls,UID:66db0846-289b-11ea-a994-fa163e34d433,ResourceVersion:16229890,Generation:0,CreationTimestamp:2019-12-27 11:24:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 66a139f9-289b-11ea-a994-fa163e34d433 0xc001809977 0xc001809978}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ksc9q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ksc9q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-ksc9q true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001809a60} {node.kubernetes.io/unreachable Exists NoExecute 0xc001809a80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 11:24:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 11:24:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 11:24:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 11:24:09 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-27 11:24:10 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-27 11:24:21 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://9ae79bbcf38591434e552878c832fbc1fb4896e460a4987588cbb370a429e85f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:24:36.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-gpcrs" for this suite. Dec 27 11:26:10.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:26:10.852: INFO: namespace: e2e-tests-deployment-gpcrs, resource: bindings, ignored listing per whitelist Dec 27 11:26:10.867: INFO: namespace e2e-tests-deployment-gpcrs deletion completed in 1m34.152868381s • [SLOW TEST:242.103 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:26:10.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Dec 27 11:26:11.183: INFO: Waiting up to 5m0s for pod "pod-af24ca0a-289b-11ea-bad5-0242ac110005" in namespace "e2e-tests-emptydir-848f9" to be "success or failure" Dec 27 11:26:11.199: INFO: Pod "pod-af24ca0a-289b-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.877293ms Dec 27 11:26:13.219: INFO: Pod "pod-af24ca0a-289b-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036534647s Dec 27 11:26:15.234: INFO: Pod "pod-af24ca0a-289b-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051737288s Dec 27 11:26:17.279: INFO: Pod "pod-af24ca0a-289b-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096570711s Dec 27 11:26:19.315: INFO: Pod "pod-af24ca0a-289b-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.132336958s Dec 27 11:26:21.450: INFO: Pod "pod-af24ca0a-289b-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.267577443s Dec 27 11:26:23.810: INFO: Pod "pod-af24ca0a-289b-11ea-bad5-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 12.627034041s Dec 27 11:26:27.569: INFO: Pod "pod-af24ca0a-289b-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.385871342s STEP: Saw pod success Dec 27 11:26:27.569: INFO: Pod "pod-af24ca0a-289b-11ea-bad5-0242ac110005" satisfied condition "success or failure" Dec 27 11:26:28.048: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-af24ca0a-289b-11ea-bad5-0242ac110005 container test-container: STEP: delete the pod Dec 27 11:26:28.197: INFO: Waiting for pod pod-af24ca0a-289b-11ea-bad5-0242ac110005 to disappear Dec 27 11:26:28.212: INFO: Pod pod-af24ca0a-289b-11ea-bad5-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:26:28.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-848f9" for this suite. Dec 27 11:26:34.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:26:34.372: INFO: namespace: e2e-tests-emptydir-848f9, resource: bindings, ignored listing per whitelist Dec 27 11:26:34.538: INFO: namespace e2e-tests-emptydir-848f9 deletion completed in 6.321305868s • [SLOW TEST:23.671 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:26:34.539: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 27 11:27:11.711: INFO: Container started at 2019-12-27 11:26:48 +0000 UTC, pod became ready at 2019-12-27 11:27:09 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:27:11.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-5j7dw" for this suite. Dec 27 11:27:39.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:27:39.780: INFO: namespace: e2e-tests-container-probe-5j7dw, resource: bindings, ignored listing per whitelist Dec 27 11:27:39.794: INFO: namespace e2e-tests-container-probe-5j7dw deletion completed in 26.746776367s • [SLOW TEST:65.256 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:27:39.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 27 11:27:40.269: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Dec 27 11:27:40.299: INFO: Number of nodes with available pods: 0 Dec 27 11:27:40.299: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Dec 27 11:27:40.697: INFO: Number of nodes with available pods: 0 Dec 27 11:27:40.697: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:27:41.715: INFO: Number of nodes with available pods: 0 Dec 27 11:27:41.715: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:27:43.452: INFO: Number of nodes with available pods: 0 Dec 27 11:27:43.453: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:27:44.294: INFO: Number of nodes with available pods: 0 Dec 27 11:27:44.294: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:27:44.781: INFO: Number of nodes with available pods: 0 Dec 27 11:27:44.781: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:27:45.707: INFO: Number of nodes with available pods: 0 Dec 27 11:27:45.707: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:27:46.735: INFO: Number of nodes with 
available pods: 0 Dec 27 11:27:46.735: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:27:47.725: INFO: Number of nodes with available pods: 0 Dec 27 11:27:47.725: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:27:53.464: INFO: Number of nodes with available pods: 0 Dec 27 11:27:53.464: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:27:54.100: INFO: Number of nodes with available pods: 0 Dec 27 11:27:54.100: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:27:54.719: INFO: Number of nodes with available pods: 0 Dec 27 11:27:54.719: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:27:55.732: INFO: Number of nodes with available pods: 0 Dec 27 11:27:55.732: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:27:56.729: INFO: Number of nodes with available pods: 0 Dec 27 11:27:56.729: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:27:59.048: INFO: Number of nodes with available pods: 0 Dec 27 11:27:59.048: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:28:00.433: INFO: Number of nodes with available pods: 0 Dec 27 11:28:00.433: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:28:00.975: INFO: Number of nodes with available pods: 0 Dec 27 11:28:00.975: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:28:01.717: INFO: Number of nodes with available pods: 1 Dec 27 11:28:01.717: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Dec 27 11:28:01.830: INFO: Number of nodes with available pods: 1 Dec 27 11:28:01.830: INFO: Number of running nodes: 0, number of available pods: 1 Dec 27 11:28:02.863: INFO: Number of nodes with available 
pods: 0 Dec 27 11:28:02.863: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Dec 27 11:28:03.092: INFO: Number of nodes with available pods: 0 Dec 27 11:28:03.092: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:28:04.629: INFO: Number of nodes with available pods: 0 Dec 27 11:28:04.629: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:28:05.107: INFO: Number of nodes with available pods: 0 Dec 27 11:28:05.107: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:28:06.570: INFO: Number of nodes with available pods: 0 Dec 27 11:28:06.570: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:28:09.268: INFO: Number of nodes with available pods: 0 Dec 27 11:28:09.268: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:28:12.491: INFO: Number of nodes with available pods: 0 Dec 27 11:28:12.491: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:28:13.106: INFO: Number of nodes with available pods: 0 Dec 27 11:28:13.106: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:28:15.479: INFO: Number of nodes with available pods: 0 Dec 27 11:28:15.479: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:28:16.250: INFO: Number of nodes with available pods: 0 Dec 27 11:28:16.251: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:28:17.221: INFO: Number of nodes with available pods: 0 Dec 27 11:28:17.221: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:28:18.405: INFO: Number of nodes with available pods: 0 Dec 27 11:28:18.405: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:28:19.102: INFO: Number of nodes 
with available pods: 0 Dec 27 11:28:19.102: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:28:27.939: INFO: Number of nodes with available pods: 0 Dec 27 11:28:27.939: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:28:37.118: INFO: Number of nodes with available pods: 0 Dec 27 11:28:37.118: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:28:38.261: INFO: Number of nodes with available pods: 0 Dec 27 11:28:38.261: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:28:39.137: INFO: Number of nodes with available pods: 0 Dec 27 11:28:39.137: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:28:40.102: INFO: Number of nodes with available pods: 0 Dec 27 11:28:40.102: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:28:42.246: INFO: Number of nodes with available pods: 0 Dec 27 11:28:42.246: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:28:43.106: INFO: Number of nodes with available pods: 0 Dec 27 11:28:43.106: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:28:44.101: INFO: Number of nodes with available pods: 0 Dec 27 11:28:44.101: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:28:46.230: INFO: Number of nodes with available pods: 0 Dec 27 11:28:46.230: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:28:47.143: INFO: Number of nodes with available pods: 0 Dec 27 11:28:47.143: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:28:48.103: INFO: Number of nodes with available pods: 1 Dec 27 11:28:48.103: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-22kqb, will wait for the garbage collector to delete the pods Dec 27 11:28:48.179: INFO: Deleting DaemonSet.extensions daemon-set took: 17.303172ms Dec 27 11:30:06.279: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1m18.100377878s Dec 27 11:30:19.495: INFO: Number of nodes with available pods: 0 Dec 27 11:30:19.495: INFO: Number of running nodes: 0, number of available pods: 0 Dec 27 11:30:19.504: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-22kqb/daemonsets","resourceVersion":"16230325"},"items":null} Dec 27 11:30:19.511: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-22kqb/pods","resourceVersion":"16230325"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:30:19.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-22kqb" for this suite. 
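The "complex daemon" run above exercises label-driven scheduling: the DaemonSet carries a node selector, so its pod is only scheduled once a node is labeled to match, and is unscheduled again when the node label flips from blue to green. A minimal sketch of a DaemonSet shaped like the one this test creates (the label key `color`, the pod labels, and the image are illustrative assumptions, not taken from the e2e source):

```yaml
# Sketch only: a DaemonSet gated by a node selector, as in the test above.
# Label key/value and image are assumptions.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      name: daemon-set
  updateStrategy:
    type: RollingUpdate            # the test switches to this strategy mid-run
  template:
    metadata:
      labels:
        name: daemon-set
    spec:
      nodeSelector:
        color: blue                # relabeling the node (blue -> green)
                                   # schedules or evicts the daemon pod
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
```

Labeling a node to match (`kubectl label node <node> color=blue --overwrite`) makes the daemon pod appear there; changing the label value afterwards removes it, which is the launch/unschedule cycle visible in the polling output above.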
Dec 27 11:30:27.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:30:27.368: INFO: namespace: e2e-tests-daemonsets-22kqb, resource: bindings, ignored listing per whitelist Dec 27 11:30:27.570: INFO: namespace e2e-tests-daemonsets-22kqb deletion completed in 7.763738608s • [SLOW TEST:167.776 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:30:27.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-86tf9 Dec 27 11:30:54.117: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-86tf9 STEP: checking the pod's current state and verifying that restartCount is present Dec 27 11:30:54.139: INFO: Initial restart count of pod liveness-http is 0 Dec 27 11:31:40.774: INFO: Restart count of pod 
e2e-tests-container-probe-86tf9/liveness-http is now 1 (46.634189419s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:31:40.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-86tf9" for this suite. Dec 27 11:33:51.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:33:51.209: INFO: namespace: e2e-tests-container-probe-86tf9, resource: bindings, ignored listing per whitelist Dec 27 11:33:51.209: INFO: namespace e2e-tests-container-probe-86tf9 deletion completed in 2m10.182629466s • [SLOW TEST:203.639 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:33:51.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Dec 27 11:33:54.733: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c1ec81ad-289c-11ea-bad5-0242ac110005" in namespace "e2e-tests-downward-api-k222b" to be "success or failure" Dec 27 11:33:54.760: INFO: Pod "downwardapi-volume-c1ec81ad-289c-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.229088ms Dec 27 11:33:57.515: INFO: Pod "downwardapi-volume-c1ec81ad-289c-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.781589932s Dec 27 11:33:59.562: INFO: Pod "downwardapi-volume-c1ec81ad-289c-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.828747956s Dec 27 11:34:03.207: INFO: Pod "downwardapi-volume-c1ec81ad-289c-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.473883392s Dec 27 11:34:05.244: INFO: Pod "downwardapi-volume-c1ec81ad-289c-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.510753563s Dec 27 11:34:07.284: INFO: Pod "downwardapi-volume-c1ec81ad-289c-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.551299361s Dec 27 11:34:15.680: INFO: Pod "downwardapi-volume-c1ec81ad-289c-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.947559514s Dec 27 11:34:18.174: INFO: Pod "downwardapi-volume-c1ec81ad-289c-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.441423639s Dec 27 11:34:20.193: INFO: Pod "downwardapi-volume-c1ec81ad-289c-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.460450868s Dec 27 11:34:22.218: INFO: Pod "downwardapi-volume-c1ec81ad-289c-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 27.485350006s Dec 27 11:34:27.701: INFO: Pod "downwardapi-volume-c1ec81ad-289c-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.968410527s Dec 27 11:34:29.716: INFO: Pod "downwardapi-volume-c1ec81ad-289c-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.982621839s STEP: Saw pod success Dec 27 11:34:29.716: INFO: Pod "downwardapi-volume-c1ec81ad-289c-11ea-bad5-0242ac110005" satisfied condition "success or failure" Dec 27 11:34:29.724: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c1ec81ad-289c-11ea-bad5-0242ac110005 container client-container: STEP: delete the pod Dec 27 11:34:29.984: INFO: Waiting for pod downwardapi-volume-c1ec81ad-289c-11ea-bad5-0242ac110005 to disappear Dec 27 11:34:30.014: INFO: Pod downwardapi-volume-c1ec81ad-289c-11ea-bad5-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:34:30.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-k222b" for this suite. 
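The Downward API volume test above verifies that a container can read its own CPU limit from a projected file. A minimal sketch of the pod shape involved (pod name, image, command, and the concrete limit value are illustrative assumptions, not taken from the e2e source):

```yaml
# Sketch only: expose limits.cpu to the container via a downwardAPI volume.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29  # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"                        # assumed value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
      readOnly: true
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
```

With the default divisor of `1`, CPU quantities are rounded up to whole cores before being written, so a 500m limit would be read back from the file as `1`.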
Dec 27 11:34:40.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:34:40.209: INFO: namespace: e2e-tests-downward-api-k222b, resource: bindings, ignored listing per whitelist Dec 27 11:34:40.231: INFO: namespace e2e-tests-downward-api-k222b deletion completed in 10.195116807s • [SLOW TEST:49.022 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:34:40.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Dec 27 11:34:40.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine 
--labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-hzwkt' Dec 27 11:34:44.758: INFO: stderr: "" Dec 27 11:34:44.758: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Dec 27 11:35:04.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-hzwkt -o json' Dec 27 11:35:06.277: INFO: stderr: "" Dec 27 11:35:06.277: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-12-27T11:34:44Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-hzwkt\",\n \"resourceVersion\": \"16230585\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-hzwkt/pods/e2e-test-nginx-pod\",\n \"uid\": \"e13d204c-289c-11ea-a994-fa163e34d433\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-8qljn\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": 
\"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-8qljn\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-8qljn\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-27T11:34:45Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-27T11:35:00Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-27T11:35:00Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-27T11:34:44Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://806bee8a1f530d86f54f4686a5e7e6bd00ba48951bc77bd5c7603fbfe9637be6\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2019-12-27T11:34:59Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.1.240\",\n \"phase\": \"Running\",\n \"podIP\": \"10.32.0.4\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2019-12-27T11:34:45Z\"\n }\n}\n" STEP: replace the image in the pod Dec 27 11:35:06.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-hzwkt' Dec 27 11:35:06.849: INFO: stderr: "" Dec 27 11:35:06.849: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Dec 27 11:35:06.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-hzwkt' Dec 27 11:35:19.970: INFO: stderr: "" Dec 27 11:35:19.970: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:35:19.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-hzwkt" for this suite. Dec 27 11:35:26.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:35:26.268: INFO: namespace: e2e-tests-kubectl-hzwkt, resource: bindings, ignored listing per whitelist Dec 27 11:35:26.307: INFO: namespace e2e-tests-kubectl-hzwkt deletion completed in 6.319404661s • [SLOW TEST:46.076 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:35:26.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: 
Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W1227 11:35:30.582363 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Dec 27 11:35:30.582: INFO:
For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:35:30.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "e2e-tests-gc-wjthd" for this suite. Dec 27 11:35:38.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:35:38.735: INFO: namespace: e2e-tests-gc-wjthd, resource: bindings, ignored listing per whitelist Dec 27 11:35:38.899: INFO: namespace e2e-tests-gc-wjthd deletion completed in 8.27567261s • [SLOW TEST:12.591 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:35:38.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-01ae6fcc-289d-11ea-bad5-0242ac110005 STEP: Creating a pod to test consume configMaps Dec 27 11:35:39.211: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-01b77be7-289d-11ea-bad5-0242ac110005" in namespace "e2e-tests-projected-t9vxc" to be "success or failure" Dec 27 11:35:39.229: INFO: Pod 
"pod-projected-configmaps-01b77be7-289d-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.902045ms Dec 27 11:35:41.610: INFO: Pod "pod-projected-configmaps-01b77be7-289d-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.399035662s Dec 27 11:35:43.669: INFO: Pod "pod-projected-configmaps-01b77be7-289d-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.457677309s Dec 27 11:35:46.002: INFO: Pod "pod-projected-configmaps-01b77be7-289d-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.790379905s Dec 27 11:35:48.250: INFO: Pod "pod-projected-configmaps-01b77be7-289d-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.038506064s Dec 27 11:35:50.268: INFO: Pod "pod-projected-configmaps-01b77be7-289d-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.056806717s Dec 27 11:35:52.460: INFO: Pod "pod-projected-configmaps-01b77be7-289d-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.249184186s STEP: Saw pod success Dec 27 11:35:52.460: INFO: Pod "pod-projected-configmaps-01b77be7-289d-11ea-bad5-0242ac110005" satisfied condition "success or failure" Dec 27 11:35:52.485: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-01b77be7-289d-11ea-bad5-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Dec 27 11:35:54.092: INFO: Waiting for pod pod-projected-configmaps-01b77be7-289d-11ea-bad5-0242ac110005 to disappear Dec 27 11:35:54.284: INFO: Pod pod-projected-configmaps-01b77be7-289d-11ea-bad5-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:35:54.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-t9vxc" for this suite. Dec 27 11:36:02.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:36:03.329: INFO: namespace: e2e-tests-projected-t9vxc, resource: bindings, ignored listing per whitelist Dec 27 11:36:04.509: INFO: namespace e2e-tests-projected-t9vxc deletion completed in 10.210755054s • [SLOW TEST:25.610 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes 
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 11:36:04.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 27 11:36:04.986: INFO: Waiting up to 5m0s for pod "pod-10f8c6bd-289d-11ea-bad5-0242ac110005" in namespace "e2e-tests-emptydir-6vjj9" to be "success or failure"
Dec 27 11:36:05.008: INFO: Pod "pod-10f8c6bd-289d-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.846763ms
Dec 27 11:36:07.367: INFO: Pod "pod-10f8c6bd-289d-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.380780565s
Dec 27 11:36:11.425: INFO: Pod "pod-10f8c6bd-289d-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.439424267s
Dec 27 11:36:13.444: INFO: Pod "pod-10f8c6bd-289d-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.457780451s
Dec 27 11:36:16.969: INFO: Pod "pod-10f8c6bd-289d-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.983246122s
Dec 27 11:36:18.986: INFO: Pod "pod-10f8c6bd-289d-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.999812072s
Dec 27 11:36:20.996: INFO: Pod "pod-10f8c6bd-289d-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.009763043s
Dec 27 11:36:23.175: INFO: Pod "pod-10f8c6bd-289d-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.189559142s
STEP: Saw pod success
Dec 27 11:36:23.175: INFO: Pod "pod-10f8c6bd-289d-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 11:36:23.189: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-10f8c6bd-289d-11ea-bad5-0242ac110005 container test-container:
STEP: delete the pod
Dec 27 11:36:24.379: INFO: Waiting for pod pod-10f8c6bd-289d-11ea-bad5-0242ac110005 to disappear
Dec 27 11:36:24.735: INFO: Pod pod-10f8c6bd-289d-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 11:36:24.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6vjj9" for this suite.
Dec 27 11:36:32.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 11:36:32.993: INFO: namespace: e2e-tests-emptydir-6vjj9, resource: bindings, ignored listing per whitelist
Dec 27 11:36:33.063: INFO: namespace e2e-tests-emptydir-6vjj9 deletion completed in 8.301832815s
• [SLOW TEST:28.553 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 11:36:33.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a
default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 27 11:36:33.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Dec 27 11:36:33.395: INFO: stderr: ""
Dec 27 11:36:33.395: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 11:36:33.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-475r7" for this suite.
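Aside: the `kubectl version` stdout captured above is a Go struct dump (`version.Info{...}`), not JSON, so scripts that consume it usually fall back to pattern matching. A minimal sketch of that, assuming a hypothetical helper name `git_versions` (not part of the e2e framework):

```python
import re

# Hypothetical helper: pull the GitVersion values out of the Go-struct-style
# dump that `kubectl version` prints (client line first, then server line).
def git_versions(stdout: str) -> list:
    return re.findall(r'GitVersion:"([^"]+)"', stdout)

# Trimmed sample of the stdout captured in the log above.
sample = (
    'Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.12"}\n'
    'Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.8"}\n'
)
```

In practice `kubectl version -o json` avoids the parsing entirely; the regex is only for logs like this one where the default format was used.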
Dec 27 11:36:39.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 11:36:39.610: INFO: namespace: e2e-tests-kubectl-475r7, resource: bindings, ignored listing per whitelist
Dec 27 11:36:39.610: INFO: namespace e2e-tests-kubectl-475r7 deletion completed in 6.201578779s
• [SLOW TEST:6.547 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 11:36:39.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 27 11:36:39.955: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Dec 27 11:36:39.961: INFO: daemonset:
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-h67c6/daemonsets","resourceVersion":"16230810"},"items":null}
Dec 27 11:36:39.964: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-h67c6/pods","resourceVersion":"16230810"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 11:36:40.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-h67c6" for this suite.
Dec 27 11:36:46.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 11:36:46.343: INFO: namespace: e2e-tests-daemonsets-h67c6, resource: bindings, ignored listing per whitelist
Dec 27 11:36:46.416: INFO: namespace e2e-tests-daemonsets-h67c6 deletion completed in 6.319742657s
S [SKIPPING] [6.805 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
  Dec 27 11:36:39.955: Requires at least 2 nodes (not -1)
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 11:36:46.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename
kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Dec 27 11:36:46.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:36:47.236: INFO: stderr: ""
Dec 27 11:36:47.236: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 27 11:36:47.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:36:47.389: INFO: stderr: ""
Dec 27 11:36:47.389: INFO: stdout: "update-demo-nautilus-cfdsw update-demo-nautilus-wzrff "
Dec 27 11:36:47.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cfdsw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:36:47.630: INFO: stderr: ""
Dec 27 11:36:47.630: INFO: stdout: ""
Dec 27 11:36:47.630: INFO: update-demo-nautilus-cfdsw is created but not running
Dec 27 11:36:52.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:36:52.715: INFO: stderr: ""
Dec 27 11:36:52.715: INFO: stdout: "update-demo-nautilus-cfdsw update-demo-nautilus-wzrff "
Dec 27 11:36:52.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cfdsw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:36:52.823: INFO: stderr: ""
Dec 27 11:36:52.823: INFO: stdout: ""
Dec 27 11:36:52.823: INFO: update-demo-nautilus-cfdsw is created but not running
Dec 27 11:36:57.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:36:58.172: INFO: stderr: ""
Dec 27 11:36:58.172: INFO: stdout: "update-demo-nautilus-cfdsw update-demo-nautilus-wzrff "
Dec 27 11:36:58.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cfdsw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists .
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:36:58.297: INFO: stderr: ""
Dec 27 11:36:58.297: INFO: stdout: ""
Dec 27 11:36:58.297: INFO: update-demo-nautilus-cfdsw is created but not running
Dec 27 11:37:03.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:03.753: INFO: stderr: ""
Dec 27 11:37:03.753: INFO: stdout: "update-demo-nautilus-cfdsw update-demo-nautilus-wzrff "
Dec 27 11:37:03.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cfdsw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:03.951: INFO: stderr: ""
Dec 27 11:37:03.951: INFO: stdout: ""
Dec 27 11:37:03.951: INFO: update-demo-nautilus-cfdsw is created but not running
Dec 27 11:37:08.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:09.096: INFO: stderr: ""
Dec 27 11:37:09.096: INFO: stdout: "update-demo-nautilus-cfdsw update-demo-nautilus-wzrff "
Dec 27 11:37:09.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cfdsw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:09.255: INFO: stderr: ""
Dec 27 11:37:09.255: INFO: stdout: "true"
Dec 27 11:37:09.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cfdsw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:09.500: INFO: stderr: ""
Dec 27 11:37:09.500: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 27 11:37:09.500: INFO: validating pod update-demo-nautilus-cfdsw
Dec 27 11:37:09.580: INFO: got data: {
  "image": "nautilus.jpg"
}
Dec 27 11:37:09.580: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 27 11:37:09.580: INFO: update-demo-nautilus-cfdsw is verified up and running
Dec 27 11:37:09.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wzrff -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:09.680: INFO: stderr: ""
Dec 27 11:37:09.681: INFO: stdout: "true"
Dec 27 11:37:09.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wzrff -o template --template={{if (exists .
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:09.871: INFO: stderr: ""
Dec 27 11:37:09.872: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 27 11:37:09.872: INFO: validating pod update-demo-nautilus-wzrff
Dec 27 11:37:09.929: INFO: got data: {
  "image": "nautilus.jpg"
}
Dec 27 11:37:09.929: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 27 11:37:09.929: INFO: update-demo-nautilus-wzrff is verified up and running
STEP: scaling down the replication controller
Dec 27 11:37:10.020: INFO: scanned /root for discovery docs:
Dec 27 11:37:10.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:11.319: INFO: stderr: ""
Dec 27 11:37:11.319: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
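After the `kubectl scale` above, the test re-lists the labelled pods every few seconds and compares the expected replica count against the names actually returned (the `Replicas for name=update-demo: expected=1 actual=2` lines that follow). A minimal sketch of that loop, where `list_pods` and `wait_for_replicas` are hypothetical stand-ins for the `kubectl get pods -l name=update-demo` call and the framework's wait, respectively:

```python
import time

# Minimal sketch of the test's polling loop: re-list pods until the observed
# name count matches the expected replica count, retrying on a fixed delay.
def wait_for_replicas(list_pods, expected, attempts=60, delay=5, sleep=time.sleep):
    for _ in range(attempts):
        names = list_pods()
        if len(names) == expected:
            return names
        # Mirrors: STEP: Replicas for name=update-demo: expected=N actual=M
        sleep(delay)
    raise TimeoutError("replica count never converged")
```

The real framework additionally verifies each surviving pod is running and serving the expected image before declaring success, as the subsequent log lines show.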
Dec 27 11:37:11.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:11.440: INFO: stderr: ""
Dec 27 11:37:11.440: INFO: stdout: "update-demo-nautilus-cfdsw update-demo-nautilus-wzrff "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 27 11:37:16.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:16.865: INFO: stderr: ""
Dec 27 11:37:16.865: INFO: stdout: "update-demo-nautilus-cfdsw update-demo-nautilus-wzrff "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 27 11:37:21.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:21.980: INFO: stderr: ""
Dec 27 11:37:21.980: INFO: stdout: "update-demo-nautilus-wzrff "
Dec 27 11:37:21.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wzrff -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:22.114: INFO: stderr: ""
Dec 27 11:37:22.114: INFO: stdout: "true"
Dec 27 11:37:22.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wzrff -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:22.265: INFO: stderr: ""
Dec 27 11:37:22.265: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 27 11:37:22.265: INFO: validating pod update-demo-nautilus-wzrff
Dec 27 11:37:22.285: INFO: got data: {
  "image": "nautilus.jpg"
}
Dec 27 11:37:22.285: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 27 11:37:22.285: INFO: update-demo-nautilus-wzrff is verified up and running
STEP: scaling up the replication controller
Dec 27 11:37:22.290: INFO: scanned /root for discovery docs:
Dec 27 11:37:22.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:23.667: INFO: stderr: ""
Dec 27 11:37:23.667: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 27 11:37:23.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:23.816: INFO: stderr: ""
Dec 27 11:37:23.816: INFO: stdout: "update-demo-nautilus-wzrff update-demo-nautilus-xdjp5 "
Dec 27 11:37:23.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wzrff -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists .
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:23.943: INFO: stderr: ""
Dec 27 11:37:23.943: INFO: stdout: "true"
Dec 27 11:37:23.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wzrff -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:24.049: INFO: stderr: ""
Dec 27 11:37:24.049: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 27 11:37:24.049: INFO: validating pod update-demo-nautilus-wzrff
Dec 27 11:37:24.059: INFO: got data: {
  "image": "nautilus.jpg"
}
Dec 27 11:37:24.059: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 27 11:37:24.059: INFO: update-demo-nautilus-wzrff is verified up and running
Dec 27 11:37:24.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xdjp5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:24.143: INFO: stderr: ""
Dec 27 11:37:24.143: INFO: stdout: ""
Dec 27 11:37:24.143: INFO: update-demo-nautilus-xdjp5 is created but not running
Dec 27 11:37:29.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:29.288: INFO: stderr: ""
Dec 27 11:37:29.288: INFO: stdout: "update-demo-nautilus-wzrff update-demo-nautilus-xdjp5 "
Dec 27 11:37:29.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wzrff -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:29.425: INFO: stderr: ""
Dec 27 11:37:29.425: INFO: stdout: "true"
Dec 27 11:37:29.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wzrff -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:29.546: INFO: stderr: ""
Dec 27 11:37:29.546: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 27 11:37:29.546: INFO: validating pod update-demo-nautilus-wzrff
Dec 27 11:37:29.568: INFO: got data: {
  "image": "nautilus.jpg"
}
Dec 27 11:37:29.568: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 27 11:37:29.568: INFO: update-demo-nautilus-wzrff is verified up and running
Dec 27 11:37:29.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xdjp5 -o template --template={{if (exists .
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:29.694: INFO: stderr: ""
Dec 27 11:37:29.694: INFO: stdout: ""
Dec 27 11:37:29.694: INFO: update-demo-nautilus-xdjp5 is created but not running
Dec 27 11:37:34.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:34.794: INFO: stderr: ""
Dec 27 11:37:34.794: INFO: stdout: "update-demo-nautilus-wzrff update-demo-nautilus-xdjp5 "
Dec 27 11:37:34.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wzrff -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:34.887: INFO: stderr: ""
Dec 27 11:37:34.887: INFO: stdout: "true"
Dec 27 11:37:34.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wzrff -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:34.987: INFO: stderr: ""
Dec 27 11:37:34.987: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 27 11:37:34.987: INFO: validating pod update-demo-nautilus-wzrff
Dec 27 11:37:35.005: INFO: got data: {
  "image": "nautilus.jpg"
}
Dec 27 11:37:35.005: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 27 11:37:35.005: INFO: update-demo-nautilus-wzrff is verified up and running
Dec 27 11:37:35.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xdjp5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:35.131: INFO: stderr: ""
Dec 27 11:37:35.131: INFO: stdout: ""
Dec 27 11:37:35.131: INFO: update-demo-nautilus-xdjp5 is created but not running
Dec 27 11:37:40.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:40.280: INFO: stderr: ""
Dec 27 11:37:40.280: INFO: stdout: "update-demo-nautilus-wzrff update-demo-nautilus-xdjp5 "
Dec 27 11:37:40.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wzrff -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:40.611: INFO: stderr: ""
Dec 27 11:37:40.611: INFO: stdout: "true"
Dec 27 11:37:40.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wzrff -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:40.761: INFO: stderr: ""
Dec 27 11:37:40.761: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 27 11:37:40.761: INFO: validating pod update-demo-nautilus-wzrff
Dec 27 11:37:40.779: INFO: got data: {
  "image": "nautilus.jpg"
}
Dec 27 11:37:40.779: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
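The `--template={{if (exists . "status" "containerStatuses")}}…` check repeated above prints `true` only once the named container reports a running state; empty stdout means "retry". A Python rendering of that go-template logic (the function name `running_check` is ours; the dict fields follow the Pod API):

```python
# Python rendering of the go-template check used repeatedly above: emit "true"
# only when a containerStatus named `container` carries a running state.
# Hypothetical helper, not part of the e2e framework.
def running_check(pod, container="update-demo"):
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == container and "running" in status.get("state", {}):
            return "true"
    return ""  # empty template output, so the caller waits and retries
```

Note how the `exists` guards in the template map onto `.get(...)` with defaults: a pod with no `containerStatuses` yet (like `xdjp5` right after scale-up) simply yields empty output rather than an error.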
Dec 27 11:37:40.779: INFO: update-demo-nautilus-wzrff is verified up and running
Dec 27 11:37:40.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xdjp5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:40.898: INFO: stderr: ""
Dec 27 11:37:40.898: INFO: stdout: "true"
Dec 27 11:37:40.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xdjp5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:41.021: INFO: stderr: ""
Dec 27 11:37:41.022: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 27 11:37:41.022: INFO: validating pod update-demo-nautilus-xdjp5
Dec 27 11:37:41.034: INFO: got data: {
  "image": "nautilus.jpg"
}
Dec 27 11:37:41.034: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 27 11:37:41.034: INFO: update-demo-nautilus-xdjp5 is verified up and running
STEP: using delete to clean up resources
Dec 27 11:37:41.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:41.124: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 27 11:37:41.124: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 27 11:37:41.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-z5ztg'
Dec 27 11:37:41.422: INFO: stderr: "No resources found.\n"
Dec 27 11:37:41.423: INFO: stdout: ""
Dec 27 11:37:41.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-z5ztg -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 27 11:37:41.765: INFO: stderr: ""
Dec 27 11:37:41.765: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 11:37:41.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-z5ztg" for this suite.
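A pattern that recurs throughout this run (the projected-configMap, emptydir, and downward-API tests all show it) is the "success or failure" wait: poll the pod phase and log the elapsed time on each attempt until the pod is terminal. A hedged sketch of that loop, with `get_phase` standing in for a GET of the Pod and an injectable clock so it can be exercised without a cluster; the names are hypothetical, not the framework's:

```python
import time

# Hedged sketch of the "success or failure" wait seen throughout this log:
# poll the pod phase until it is terminal, on a fixed poll interval, giving
# up after `timeout` seconds. `get_phase` is a stand-in for a Pod GET.
def wait_for_pod_terminal(get_phase, timeout=300.0, poll=2.0,
                          clock=time.monotonic, sleep=time.sleep):
    start = clock()
    while clock() - start < timeout:
        phase = get_phase()  # e.g. Phase="Pending" ... Elapsed: 2.399035662s
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(poll)
    raise TimeoutError("pod never reached a terminal phase")
```

Injecting `clock` and `sleep` is what keeps a loop like this testable; the real framework's wait also distinguishes which terminal phase satisfies the condition.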
Dec 27 11:38:09.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 11:38:09.689: INFO: namespace: e2e-tests-kubectl-z5ztg, resource: bindings, ignored listing per whitelist
Dec 27 11:38:09.819: INFO: namespace e2e-tests-kubectl-z5ztg deletion completed in 28.01971617s
• [SLOW TEST:83.404 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 11:38:09.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-68g2r
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 27 11:38:10.265: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 27 11:39:02.874: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-68g2r PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 27 11:39:02.874: INFO: >>> kubeConfig: /root/.kube/config
Dec 27 11:39:03.541: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 11:39:03.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-68g2r" for this suite.
Dec 27 11:39:37.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 11:39:37.746: INFO: namespace: e2e-tests-pod-network-test-68g2r, resource: bindings, ignored listing per whitelist
Dec 27 11:39:37.752: INFO: namespace e2e-tests-pod-network-test-68g2r deletion completed in 34.197289484s
• [SLOW TEST:87.931 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 11:39:37.752: INFO: >>> kubeConfig: /root/.kube/config
STEP:
Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Dec 27 11:39:38.158: INFO: Waiting up to 5m0s for pod "downwardapi-volume-90206e39-289d-11ea-bad5-0242ac110005" in namespace "e2e-tests-projected-gjfwn" to be "success or failure" Dec 27 11:39:38.207: INFO: Pod "downwardapi-volume-90206e39-289d-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 49.010493ms Dec 27 11:39:40.243: INFO: Pod "downwardapi-volume-90206e39-289d-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085000158s Dec 27 11:39:42.313: INFO: Pod "downwardapi-volume-90206e39-289d-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155705407s Dec 27 11:39:44.322: INFO: Pod "downwardapi-volume-90206e39-289d-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.164606244s Dec 27 11:39:46.343: INFO: Pod "downwardapi-volume-90206e39-289d-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.185563473s Dec 27 11:39:48.418: INFO: Pod "downwardapi-volume-90206e39-289d-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.260699977s Dec 27 11:39:50.594: INFO: Pod "downwardapi-volume-90206e39-289d-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.436824313s Dec 27 11:39:53.556: INFO: Pod "downwardapi-volume-90206e39-289d-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 15.398294187s STEP: Saw pod success Dec 27 11:39:53.556: INFO: Pod "downwardapi-volume-90206e39-289d-11ea-bad5-0242ac110005" satisfied condition "success or failure" Dec 27 11:39:53.581: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-90206e39-289d-11ea-bad5-0242ac110005 container client-container: STEP: delete the pod Dec 27 11:39:53.932: INFO: Waiting for pod downwardapi-volume-90206e39-289d-11ea-bad5-0242ac110005 to disappear Dec 27 11:39:54.134: INFO: Pod downwardapi-volume-90206e39-289d-11ea-bad5-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:39:54.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gjfwn" for this suite. Dec 27 11:40:02.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:40:02.347: INFO: namespace: e2e-tests-projected-gjfwn, resource: bindings, ignored listing per whitelist Dec 27 11:40:02.396: INFO: namespace e2e-tests-projected-gjfwn deletion completed in 8.224745169s • [SLOW TEST:24.644 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 
27 11:40:02.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Dec 27 11:40:17.215: INFO: Successfully updated pod "pod-update-activedeadlineseconds-9ebe78ef-289d-11ea-bad5-0242ac110005" Dec 27 11:40:17.215: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-9ebe78ef-289d-11ea-bad5-0242ac110005" in namespace "e2e-tests-pods-lnnp8" to be "terminated due to deadline exceeded" Dec 27 11:40:17.233: INFO: Pod "pod-update-activedeadlineseconds-9ebe78ef-289d-11ea-bad5-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 17.577314ms Dec 27 11:40:19.434: INFO: Pod "pod-update-activedeadlineseconds-9ebe78ef-289d-11ea-bad5-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.218414474s Dec 27 11:40:19.434: INFO: Pod "pod-update-activedeadlineseconds-9ebe78ef-289d-11ea-bad5-0242ac110005" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:40:19.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-lnnp8" for this suite. 
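[Editor's note] The activeDeadlineSeconds update exercised in the test above can be reproduced by hand with `kubectl patch pod <pod-name> -p "$(cat patch.json)"`. This is a sketch of an equivalent strategic-merge patch, not the test's actual request body; the deadline value is illustrative:

```json
{
  "spec": {
    "activeDeadlineSeconds": 5
  }
}
```

Once the deadline elapses, the kubelet terminates the pod and it transitions to Phase=Failed with Reason=DeadlineExceeded, which is exactly the condition the log above waits for.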
Dec 27 11:40:27.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:40:27.962: INFO: namespace: e2e-tests-pods-lnnp8, resource: bindings, ignored listing per whitelist Dec 27 11:40:28.045: INFO: namespace e2e-tests-pods-lnnp8 deletion completed in 8.586014521s • [SLOW TEST:25.648 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:40:28.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 27 11:40:29.484: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Dec 27 11:40:29.504: INFO: Number of nodes with available pods: 0 Dec 27 11:40:29.504: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:40:30.524: INFO: Number of nodes with available pods: 0 Dec 27 11:40:30.524: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:40:32.328: INFO: Number of nodes with available pods: 0 Dec 27 11:40:32.328: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:40:32.732: INFO: Number of nodes with available pods: 0 Dec 27 11:40:32.732: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:40:33.562: INFO: Number of nodes with available pods: 0 Dec 27 11:40:33.562: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:40:34.941: INFO: Number of nodes with available pods: 0 Dec 27 11:40:34.941: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:40:37.377: INFO: Number of nodes with available pods: 0 Dec 27 11:40:37.378: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:40:37.532: INFO: Number of nodes with available pods: 0 Dec 27 11:40:37.532: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:40:38.565: INFO: Number of nodes with available pods: 0 Dec 27 11:40:38.565: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:40:39.522: INFO: Number of nodes with available pods: 0 Dec 27 11:40:39.522: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:40:40.554: INFO: Number of nodes with available pods: 0 Dec 27 11:40:40.554: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:40:41.540: INFO: Number of nodes with available pods: 1 Dec 27 11:40:41.540: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update daemon pods image. 
STEP: Check that daemon pods images are updated. Dec 27 11:40:41.607: INFO: Wrong image for pod: daemon-set-r6hhv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 27 11:40:42.654: INFO: Wrong image for pod: daemon-set-r6hhv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 27 11:40:43.659: INFO: Wrong image for pod: daemon-set-r6hhv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 27 11:40:44.718: INFO: Wrong image for pod: daemon-set-r6hhv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 27 11:40:45.660: INFO: Wrong image for pod: daemon-set-r6hhv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 27 11:40:46.654: INFO: Wrong image for pod: daemon-set-r6hhv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 27 11:40:47.652: INFO: Wrong image for pod: daemon-set-r6hhv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 27 11:40:48.655: INFO: Wrong image for pod: daemon-set-r6hhv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 27 11:40:48.655: INFO: Pod daemon-set-r6hhv is not available Dec 27 11:40:49.656: INFO: Pod daemon-set-9szsj is not available STEP: Check that daemon pods are still running on every node of the cluster. 
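[Editor's note] The rolling image replacement being verified here is driven by the DaemonSet's update strategy. A minimal manifest fragment enabling it might look like the following; the DaemonSet name and image match this run, but the labels and container name are illustrative:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate        # pods are deleted and recreated when the template changes
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
```

With `type: RollingUpdate`, changing `spec.template.spec.containers[0].image` (as the test does) causes the controller to replace each pod, which is why the log shows the old pod (daemon-set-r6hhv) terminating and a new one (daemon-set-9szsj) appearing.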
Dec 27 11:40:49.716: INFO: Number of nodes with available pods: 0 Dec 27 11:40:49.716: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:40:50.736: INFO: Number of nodes with available pods: 0 Dec 27 11:40:50.736: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:40:51.727: INFO: Number of nodes with available pods: 0 Dec 27 11:40:51.727: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:40:52.736: INFO: Number of nodes with available pods: 0 Dec 27 11:40:52.736: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:40:53.766: INFO: Number of nodes with available pods: 0 Dec 27 11:40:53.766: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:40:54.765: INFO: Number of nodes with available pods: 0 Dec 27 11:40:54.765: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:40:56.294: INFO: Number of nodes with available pods: 0 Dec 27 11:40:56.294: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:40:57.998: INFO: Number of nodes with available pods: 0 Dec 27 11:40:57.998: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:40:58.975: INFO: Number of nodes with available pods: 0 Dec 27 11:40:58.975: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:40:59.733: INFO: Number of nodes with available pods: 0 Dec 27 11:40:59.733: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 27 11:41:00.775: INFO: Number of nodes with available pods: 1 Dec 27 11:41:00.775: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace 
e2e-tests-daemonsets-s6bvf, will wait for the garbage collector to delete the pods Dec 27 11:41:00.868: INFO: Deleting DaemonSet.extensions daemon-set took: 16.907272ms Dec 27 11:41:00.968: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.252122ms Dec 27 11:41:09.149: INFO: Number of nodes with available pods: 0 Dec 27 11:41:09.149: INFO: Number of running nodes: 0, number of available pods: 0 Dec 27 11:41:09.156: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-s6bvf/daemonsets","resourceVersion":"16231331"},"items":null} Dec 27 11:41:09.162: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-s6bvf/pods","resourceVersion":"16231331"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:41:09.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-s6bvf" for this suite. 
Dec 27 11:41:17.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:41:17.334: INFO: namespace: e2e-tests-daemonsets-s6bvf, resource: bindings, ignored listing per whitelist Dec 27 11:41:17.336: INFO: namespace e2e-tests-daemonsets-s6bvf deletion completed in 8.15778672s • [SLOW TEST:49.291 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:41:17.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W1227 11:41:48.358375 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Dec 27 11:41:48.358: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:41:48.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-29f56" for this suite. 
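[Editor's note] The orphaning behavior checked above corresponds to sending the delete with `propagationPolicy: Orphan` in the DeleteOptions body, so the garbage collector leaves the Deployment's ReplicaSet in place. A sketch of the equivalent raw API request body:

```json
{
  "kind": "DeleteOptions",
  "apiVersion": "v1",
  "propagationPolicy": "Orphan"
}
```

With kubectl of this vintage (v1.13), `kubectl delete deployment <name> --cascade=false` requests the same orphaning semantics.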
Dec 27 11:41:56.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:41:56.493: INFO: namespace: e2e-tests-gc-29f56, resource: bindings, ignored listing per whitelist Dec 27 11:41:56.869: INFO: namespace e2e-tests-gc-29f56 deletion completed in 8.496889663s • [SLOW TEST:39.533 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:41:56.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-e2f6481f-289d-11ea-bad5-0242ac110005 STEP: Creating secret with name secret-projected-all-test-volume-e2f647e9-289d-11ea-bad5-0242ac110005 STEP: Creating a pod to test Check all projections for projected volume plugin Dec 27 11:41:57.261: INFO: Waiting up to 5m0s for pod "projected-volume-e2f64749-289d-11ea-bad5-0242ac110005" in namespace 
"e2e-tests-projected-5v2s2" to be "success or failure" Dec 27 11:41:57.315: INFO: Pod "projected-volume-e2f64749-289d-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 53.193847ms Dec 27 11:42:00.348: INFO: Pod "projected-volume-e2f64749-289d-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.086439241s Dec 27 11:42:02.362: INFO: Pod "projected-volume-e2f64749-289d-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.100941087s Dec 27 11:42:04.370: INFO: Pod "projected-volume-e2f64749-289d-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.108392544s Dec 27 11:42:06.602: INFO: Pod "projected-volume-e2f64749-289d-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.340882511s Dec 27 11:42:08.675: INFO: Pod "projected-volume-e2f64749-289d-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.413839671s Dec 27 11:42:10.706: INFO: Pod "projected-volume-e2f64749-289d-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.44467351s STEP: Saw pod success Dec 27 11:42:10.706: INFO: Pod "projected-volume-e2f64749-289d-11ea-bad5-0242ac110005" satisfied condition "success or failure" Dec 27 11:42:10.732: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-e2f64749-289d-11ea-bad5-0242ac110005 container projected-all-volume-test: STEP: delete the pod Dec 27 11:42:10.975: INFO: Waiting for pod projected-volume-e2f64749-289d-11ea-bad5-0242ac110005 to disappear Dec 27 11:42:10.995: INFO: Pod projected-volume-e2f64749-289d-11ea-bad5-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:42:10.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-5v2s2" for this suite. 
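[Editor's note] The "all projections" volume exercised by this test combines downward API, ConfigMap, and Secret sources under one projected volume. A minimal sketch follows; the ConfigMap/Secret names, keys, and paths are illustrative, not the generated names from this run:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-demo
spec:
  containers:
  - name: projected-all-volume-test
    image: busybox
    command: ["sh", "-c", "cat /all/podname /all/cm-data /all/secret-data"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: my-configmap        # illustrative name
          items:
          - key: data
            path: cm-data
      - secret:
          name: my-secret           # illustrative name
          items:
          - key: data
            path: secret-data
```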
Dec 27 11:42:20.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:42:20.783: INFO: namespace: e2e-tests-projected-5v2s2, resource: bindings, ignored listing per whitelist Dec 27 11:42:20.991: INFO: namespace e2e-tests-projected-5v2s2 deletion completed in 9.985267824s • [SLOW TEST:24.121 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:42:20.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-mpnfm [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing 
watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-mpnfm STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-mpnfm Dec 27 11:42:21.657: INFO: Found 0 stateful pods, waiting for 1 Dec 27 11:42:31.798: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Dec 27 11:42:41.664: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Dec 27 11:42:41.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mpnfm ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 27 11:42:42.852: INFO: stderr: "" Dec 27 11:42:42.852: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 27 11:42:42.852: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 27 11:42:42.892: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Dec 27 11:42:52.951: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 27 11:42:52.951: INFO: Waiting for statefulset status.replicas updated to 0 Dec 27 11:42:53.120: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999274s Dec 27 11:42:54.273: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.988998414s Dec 27 11:42:56.979: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.83697517s Dec 27 11:42:57.995: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.130371374s Dec 27 11:42:59.012: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.115141725s Dec 27 11:43:00.026: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.097336147s Dec 27 11:43:01.032: INFO: 
Verifying statefulset ss doesn't scale past 1 for another 2.08430203s Dec 27 11:43:02.630: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.077839784s STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-mpnfm Dec 27 11:43:03.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mpnfm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 27 11:43:04.554: INFO: stderr: "" Dec 27 11:43:04.554: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 27 11:43:04.554: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 27 11:43:04.592: INFO: Found 1 stateful pods, waiting for 3 Dec 27 11:43:15.428: INFO: Found 2 stateful pods, waiting for 3 Dec 27 11:43:25.550: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 27 11:43:25.550: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 27 11:43:25.550: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Dec 27 11:43:34.733: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 27 11:43:34.733: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 27 11:43:34.733: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Dec 27 11:43:34.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mpnfm ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 27 11:43:35.868: INFO: stderr: "" Dec 27 11:43:35.868: INFO: stdout: 
"'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 27 11:43:35.868: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 27 11:43:35.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mpnfm ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 27 11:43:36.713: INFO: stderr: "" Dec 27 11:43:36.713: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 27 11:43:36.713: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 27 11:43:36.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mpnfm ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 27 11:43:37.300: INFO: stderr: "" Dec 27 11:43:37.300: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 27 11:43:37.300: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 27 11:43:37.300: INFO: Waiting for statefulset status.replicas updated to 0 Dec 27 11:43:37.652: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Dec 27 11:43:47.674: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 27 11:43:47.674: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Dec 27 11:43:47.674: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Dec 27 11:43:48.015: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999589s Dec 27 11:43:49.101: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.685073258s Dec 27 11:43:50.110: INFO: Verifying statefulset ss doesn't scale past 3 for 
another 7.59896042s Dec 27 11:43:51.126: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.589417777s Dec 27 11:43:52.140: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.573345557s Dec 27 11:43:53.147: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.559570766s Dec 27 11:43:54.157: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.552676218s Dec 27 11:43:55.198: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.543008254s Dec 27 11:43:56.209: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.501452404s Dec 27 11:43:57.855: INFO: Verifying statefulset ss doesn't scale past 3 for another 490.31101ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-mpnfm Dec 27 11:43:58.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mpnfm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 27 11:43:59.553: INFO: stderr: "" Dec 27 11:43:59.553: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 27 11:43:59.553: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 27 11:43:59.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mpnfm ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 27 11:44:00.001: INFO: stderr: "" Dec 27 11:44:00.001: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 27 11:44:00.001: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 27 11:44:00.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mpnfm ss-2 -- /bin/sh -c mv -v
/tmp/index.html /usr/share/nginx/html/ || true' Dec 27 11:44:00.652: INFO: stderr: "" Dec 27 11:44:00.652: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 27 11:44:00.652: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 27 11:44:00.652: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Dec 27 11:44:20.718: INFO: Deleting all statefulset in ns e2e-tests-statefulset-mpnfm Dec 27 11:44:20.729: INFO: Scaling statefulset ss to 0 Dec 27 11:44:20.753: INFO: Waiting for statefulset status.replicas updated to 0 Dec 27 11:44:20.759: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:44:20.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-mpnfm" for this suite. 
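[Editor's note] The ordering guarantee verified above (scale-up by ascending ordinal, scale-down by descending ordinal) can be sketched without a cluster; this just generates the pod names in the order a StatefulSet controller acts on them:

```shell
# StatefulSet pods are named <name>-<ordinal>; creation proceeds 0..N-1
# and deletion proceeds N-1..0, which is what the test asserts.
name=ss
replicas=3

echo "scale-up order:"
for i in $(seq 0 $((replicas - 1))); do
  echo "${name}-${i}"
done

echo "scale-down order:"
for i in $(seq $((replicas - 1)) -1 0); do
  echo "${name}-${i}"
done
```

The readiness-toggling via `mv index.html` in the log exists to make a pod unhealthy so the test can confirm scaling halts until the pod is Ready again.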
Dec 27 11:44:26.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:44:26.953: INFO: namespace: e2e-tests-statefulset-mpnfm, resource: bindings, ignored listing per whitelist Dec 27 11:44:27.053: INFO: namespace e2e-tests-statefulset-mpnfm deletion completed in 6.223597926s • [SLOW TEST:126.062 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:44:27.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Dec 27 11:44:27.490: INFO: Waiting up to 5m0s for pod "pod-3c890941-289e-11ea-bad5-0242ac110005" in namespace "e2e-tests-emptydir-64zpv" to be "success or failure" Dec 27 11:44:27.503: INFO: Pod "pod-3c890941-289e-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.006263ms Dec 27 11:44:29.830: INFO: Pod "pod-3c890941-289e-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.339977492s Dec 27 11:44:31.839: INFO: Pod "pod-3c890941-289e-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.349195651s Dec 27 11:44:33.967: INFO: Pod "pod-3c890941-289e-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.476447537s Dec 27 11:44:35.990: INFO: Pod "pod-3c890941-289e-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.500079506s Dec 27 11:44:38.038: INFO: Pod "pod-3c890941-289e-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.547748798s STEP: Saw pod success Dec 27 11:44:38.038: INFO: Pod "pod-3c890941-289e-11ea-bad5-0242ac110005" satisfied condition "success or failure" Dec 27 11:44:38.050: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-3c890941-289e-11ea-bad5-0242ac110005 container test-container: STEP: delete the pod Dec 27 11:44:38.237: INFO: Waiting for pod pod-3c890941-289e-11ea-bad5-0242ac110005 to disappear Dec 27 11:44:38.295: INFO: Pod pod-3c890941-289e-11ea-bad5-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:44:38.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-64zpv" for this suite. 
Dec 27 11:44:46.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:44:46.650: INFO: namespace: e2e-tests-emptydir-64zpv, resource: bindings, ignored listing per whitelist Dec 27 11:44:46.698: INFO: namespace e2e-tests-emptydir-64zpv deletion completed in 8.322228485s • [SLOW TEST:19.645 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:44:46.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-tdfs5 Dec 27 11:44:57.138: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-tdfs5 STEP: checking the pod's current state and verifying that restartCount is present Dec 27 11:44:57.143: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing 
container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:48:58.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-tdfs5" for this suite. Dec 27 11:49:05.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:49:05.836: INFO: namespace: e2e-tests-container-probe-tdfs5, resource: bindings, ignored listing per whitelist Dec 27 11:49:05.995: INFO: namespace e2e-tests-container-probe-tdfs5 deletion completed in 7.29100423s • [SLOW TEST:259.296 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:49:05.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-e2e2344a-289e-11ea-bad5-0242ac110005 STEP: Creating a pod to test consume configMaps 
Dec 27 11:49:06.568: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e2ed20d6-289e-11ea-bad5-0242ac110005" in namespace "e2e-tests-projected-spbvr" to be "success or failure" Dec 27 11:49:06.626: INFO: Pod "pod-projected-configmaps-e2ed20d6-289e-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 57.420306ms Dec 27 11:49:08.659: INFO: Pod "pod-projected-configmaps-e2ed20d6-289e-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090832049s Dec 27 11:49:10.683: INFO: Pod "pod-projected-configmaps-e2ed20d6-289e-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114770378s Dec 27 11:49:12.700: INFO: Pod "pod-projected-configmaps-e2ed20d6-289e-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131139702s Dec 27 11:49:14.721: INFO: Pod "pod-projected-configmaps-e2ed20d6-289e-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.15294809s Dec 27 11:49:16.736: INFO: Pod "pod-projected-configmaps-e2ed20d6-289e-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.167115561s STEP: Saw pod success Dec 27 11:49:16.736: INFO: Pod "pod-projected-configmaps-e2ed20d6-289e-11ea-bad5-0242ac110005" satisfied condition "success or failure" Dec 27 11:49:16.739: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-e2ed20d6-289e-11ea-bad5-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Dec 27 11:49:16.889: INFO: Waiting for pod pod-projected-configmaps-e2ed20d6-289e-11ea-bad5-0242ac110005 to disappear Dec 27 11:49:16.898: INFO: Pod pod-projected-configmaps-e2ed20d6-289e-11ea-bad5-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:49:16.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-spbvr" for this suite. Dec 27 11:49:22.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:49:23.047: INFO: namespace: e2e-tests-projected-spbvr, resource: bindings, ignored listing per whitelist Dec 27 11:49:23.064: INFO: namespace e2e-tests-projected-spbvr deletion completed in 6.156748487s • [SLOW TEST:17.067 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:49:23.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-ecea979c-289e-11ea-bad5-0242ac110005 Dec 27 11:49:23.422: INFO: Pod name my-hostname-basic-ecea979c-289e-11ea-bad5-0242ac110005: Found 0 pods out of 1 Dec 27 11:49:28.987: INFO: Pod name my-hostname-basic-ecea979c-289e-11ea-bad5-0242ac110005: Found 1 pods out of 1 Dec 27 11:49:28.987: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-ecea979c-289e-11ea-bad5-0242ac110005" are running Dec 27 11:49:33.010: INFO: Pod "my-hostname-basic-ecea979c-289e-11ea-bad5-0242ac110005-6zxkf" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-27 11:49:23 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-27 11:49:23 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ecea979c-289e-11ea-bad5-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-27 11:49:23 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ecea979c-289e-11ea-bad5-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-27 11:49:23 +0000 UTC Reason: Message:}]) Dec 27 11:49:33.010: INFO: Trying to dial the pod Dec 27 11:49:38.054: INFO: Controller 
my-hostname-basic-ecea979c-289e-11ea-bad5-0242ac110005: Got expected result from replica 1 [my-hostname-basic-ecea979c-289e-11ea-bad5-0242ac110005-6zxkf]: "my-hostname-basic-ecea979c-289e-11ea-bad5-0242ac110005-6zxkf", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:49:38.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-cqrs2" for this suite. Dec 27 11:49:44.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:49:44.283: INFO: namespace: e2e-tests-replication-controller-cqrs2, resource: bindings, ignored listing per whitelist Dec 27 11:49:44.369: INFO: namespace e2e-tests-replication-controller-cqrs2 deletion completed in 6.304821576s • [SLOW TEST:21.305 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:49:44.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium Dec 27 11:49:44.667: INFO: Waiting up to 5m0s for pod "pod-f9a2b8b6-289e-11ea-bad5-0242ac110005" in namespace "e2e-tests-emptydir-g2q6p" to be "success or failure" Dec 27 11:49:44.818: INFO: Pod "pod-f9a2b8b6-289e-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 150.485005ms Dec 27 11:49:46.828: INFO: Pod "pod-f9a2b8b6-289e-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161133955s Dec 27 11:49:48.839: INFO: Pod "pod-f9a2b8b6-289e-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.171513795s Dec 27 11:49:50.861: INFO: Pod "pod-f9a2b8b6-289e-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.193389405s Dec 27 11:49:52.885: INFO: Pod "pod-f9a2b8b6-289e-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.218175614s Dec 27 11:49:54.924: INFO: Pod "pod-f9a2b8b6-289e-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.257161314s Dec 27 11:49:57.243: INFO: Pod "pod-f9a2b8b6-289e-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.575996936s STEP: Saw pod success Dec 27 11:49:57.243: INFO: Pod "pod-f9a2b8b6-289e-11ea-bad5-0242ac110005" satisfied condition "success or failure" Dec 27 11:49:57.650: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f9a2b8b6-289e-11ea-bad5-0242ac110005 container test-container: STEP: delete the pod Dec 27 11:49:57.752: INFO: Waiting for pod pod-f9a2b8b6-289e-11ea-bad5-0242ac110005 to disappear Dec 27 11:49:57.766: INFO: Pod pod-f9a2b8b6-289e-11ea-bad5-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:49:57.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-g2q6p" for this suite. Dec 27 11:50:03.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:50:03.974: INFO: namespace: e2e-tests-emptydir-g2q6p, resource: bindings, ignored listing per whitelist Dec 27 11:50:03.974: INFO: namespace e2e-tests-emptydir-g2q6p deletion completed in 6.200267432s • [SLOW TEST:19.604 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:50:03.974: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-05549731-289f-11ea-bad5-0242ac110005 STEP: Creating a pod to test consume configMaps Dec 27 11:50:04.278: INFO: Waiting up to 5m0s for pod "pod-configmaps-05556930-289f-11ea-bad5-0242ac110005" in namespace "e2e-tests-configmap-vzvrf" to be "success or failure" Dec 27 11:50:04.377: INFO: Pod "pod-configmaps-05556930-289f-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 99.17942ms Dec 27 11:50:06.965: INFO: Pod "pod-configmaps-05556930-289f-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.687207789s Dec 27 11:50:08.999: INFO: Pod "pod-configmaps-05556930-289f-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.721370188s Dec 27 11:50:11.011: INFO: Pod "pod-configmaps-05556930-289f-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.733892131s Dec 27 11:50:13.024: INFO: Pod "pod-configmaps-05556930-289f-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.74671804s Dec 27 11:50:15.041: INFO: Pod "pod-configmaps-05556930-289f-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.763681369s Dec 27 11:50:17.074: INFO: Pod "pod-configmaps-05556930-289f-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.796346837s STEP: Saw pod success Dec 27 11:50:17.074: INFO: Pod "pod-configmaps-05556930-289f-11ea-bad5-0242ac110005" satisfied condition "success or failure" Dec 27 11:50:17.089: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-05556930-289f-11ea-bad5-0242ac110005 container configmap-volume-test: STEP: delete the pod Dec 27 11:50:17.741: INFO: Waiting for pod pod-configmaps-05556930-289f-11ea-bad5-0242ac110005 to disappear Dec 27 11:50:17.962: INFO: Pod pod-configmaps-05556930-289f-11ea-bad5-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:50:17.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-vzvrf" for this suite. Dec 27 11:50:24.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:50:24.306: INFO: namespace: e2e-tests-configmap-vzvrf, resource: bindings, ignored listing per whitelist Dec 27 11:50:24.341: INFO: namespace e2e-tests-configmap-vzvrf deletion completed in 6.364353935s • [SLOW TEST:20.366 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating 
a kubernetes client Dec 27 11:50:24.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Dec 27 11:50:24.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Dec 27 11:50:24.911: INFO: stderr: "" Dec 27 11:50:24.911: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:50:24.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-p7gpx" for this suite. 
Dec 27 11:50:32.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:50:33.140: INFO: namespace: e2e-tests-kubectl-p7gpx, resource: bindings, ignored listing per whitelist Dec 27 11:50:33.149: INFO: namespace e2e-tests-kubectl-p7gpx deletion completed in 8.225314637s • [SLOW TEST:8.807 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:50:33.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-16aad21e-289f-11ea-bad5-0242ac110005 STEP: Creating secret with name s-test-opt-upd-16aad34d-289f-11ea-bad5-0242ac110005 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-16aad21e-289f-11ea-bad5-0242ac110005 STEP: Updating secret s-test-opt-upd-16aad34d-289f-11ea-bad5-0242ac110005 STEP: Creating secret with name 
s-test-opt-create-16aad3e2-289f-11ea-bad5-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:52:12.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-zvd6z" for this suite. Dec 27 11:52:36.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:52:36.315: INFO: namespace: e2e-tests-secrets-zvd6z, resource: bindings, ignored listing per whitelist Dec 27 11:52:36.332: INFO: namespace e2e-tests-secrets-zvd6z deletion completed in 24.28841511s • [SLOW TEST:123.182 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:52:36.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 27 11:52:46.905: INFO: Waiting up to 5m0s for pod 
"client-envvars-6641ec3f-289f-11ea-bad5-0242ac110005" in namespace "e2e-tests-pods-xdgxx" to be "success or failure" Dec 27 11:52:46.927: INFO: Pod "client-envvars-6641ec3f-289f-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.089698ms Dec 27 11:52:48.951: INFO: Pod "client-envvars-6641ec3f-289f-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045463279s Dec 27 11:52:50.964: INFO: Pod "client-envvars-6641ec3f-289f-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058971164s Dec 27 11:52:52.979: INFO: Pod "client-envvars-6641ec3f-289f-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073780982s Dec 27 11:52:55.021: INFO: Pod "client-envvars-6641ec3f-289f-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116054984s Dec 27 11:52:57.031: INFO: Pod "client-envvars-6641ec3f-289f-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.125247094s STEP: Saw pod success Dec 27 11:52:57.031: INFO: Pod "client-envvars-6641ec3f-289f-11ea-bad5-0242ac110005" satisfied condition "success or failure" Dec 27 11:52:57.035: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-6641ec3f-289f-11ea-bad5-0242ac110005 container env3cont: STEP: delete the pod Dec 27 11:52:57.547: INFO: Waiting for pod client-envvars-6641ec3f-289f-11ea-bad5-0242ac110005 to disappear Dec 27 11:52:57.777: INFO: Pod client-envvars-6641ec3f-289f-11ea-bad5-0242ac110005 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:52:57.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-xdgxx" for this suite. 
Dec 27 11:53:40.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:53:40.160: INFO: namespace: e2e-tests-pods-xdgxx, resource: bindings, ignored listing per whitelist Dec 27 11:53:40.240: INFO: namespace e2e-tests-pods-xdgxx deletion completed in 42.425458123s • [SLOW TEST:63.908 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:53:40.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Dec 27 11:53:51.198: INFO: Successfully updated pod "labelsupdate8642afe8-289f-11ea-bad5-0242ac110005" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:53:53.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "e2e-tests-downward-api-cwkrm" for this suite. Dec 27 11:54:17.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 11:54:17.636: INFO: namespace: e2e-tests-downward-api-cwkrm, resource: bindings, ignored listing per whitelist Dec 27 11:54:17.763: INFO: namespace e2e-tests-downward-api-cwkrm deletion completed in 24.25704445s • [SLOW TEST:37.523 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 11:54:17.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod Dec 27 11:54:28.182: INFO: Pod pod-hostip-9c8e6118-289f-11ea-bad5-0242ac110005 has hostIP: 10.96.1.240 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 11:54:28.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-mlktp" for this suite. 
Dec 27 11:54:52.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 11:54:52.510: INFO: namespace: e2e-tests-pods-mlktp, resource: bindings, ignored listing per whitelist
Dec 27 11:54:52.511: INFO: namespace e2e-tests-pods-mlktp deletion completed in 24.31999033s
• [SLOW TEST:34.746 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default
  should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 11:54:52.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 27 11:54:52.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-bkwrj'
Dec 27 11:54:55.079: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 27 11:54:55.080: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Dec 27 11:54:57.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-bkwrj'
Dec 27 11:54:57.659: INFO: stderr: ""
Dec 27 11:54:57.659: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 11:54:57.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bkwrj" for this suite.
Dec 27 11:55:03.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 11:55:04.128: INFO: namespace: e2e-tests-kubectl-bkwrj, resource: bindings, ignored listing per whitelist
Dec 27 11:55:04.196: INFO: namespace e2e-tests-kubectl-bkwrj deletion completed in 6.511501046s
• [SLOW TEST:11.685 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 11:55:04.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-8s5sn.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-8s5sn.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-8s5sn.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-8s5sn.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-8s5sn.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-8s5sn.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 27 11:55:20.840: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-8s5sn/dns-test-b83fb475-289f-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-b83fb475-289f-11ea-bad5-0242ac110005)
Dec 27 11:55:20.842: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-8s5sn/dns-test-b83fb475-289f-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-b83fb475-289f-11ea-bad5-0242ac110005)
Dec 27 11:55:20.853: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-8s5sn/dns-test-b83fb475-289f-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-b83fb475-289f-11ea-bad5-0242ac110005)
Dec 27 11:55:20.865: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-8s5sn/dns-test-b83fb475-289f-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-b83fb475-289f-11ea-bad5-0242ac110005)
Dec 27 11:55:20.869: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-8s5sn/dns-test-b83fb475-289f-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-b83fb475-289f-11ea-bad5-0242ac110005)
Dec 27 11:55:20.873: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-8s5sn/dns-test-b83fb475-289f-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-b83fb475-289f-11ea-bad5-0242ac110005)
Dec 27 11:55:20.883: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-8s5sn.svc.cluster.local from pod e2e-tests-dns-8s5sn/dns-test-b83fb475-289f-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-b83fb475-289f-11ea-bad5-0242ac110005)
Dec 27 11:55:20.892: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-8s5sn/dns-test-b83fb475-289f-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-b83fb475-289f-11ea-bad5-0242ac110005)
Dec 27 11:55:20.895: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-8s5sn/dns-test-b83fb475-289f-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-b83fb475-289f-11ea-bad5-0242ac110005)
Dec 27 11:55:20.900: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-8s5sn/dns-test-b83fb475-289f-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-b83fb475-289f-11ea-bad5-0242ac110005)
Dec 27 11:55:20.900: INFO: Lookups using e2e-tests-dns-8s5sn/dns-test-b83fb475-289f-11ea-bad5-0242ac110005 failed for: [jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-8s5sn.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]
Dec 27 11:55:26.686: INFO: DNS probes using e2e-tests-dns-8s5sn/dns-test-b83fb475-289f-11ea-bad5-0242ac110005 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 11:55:26.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-8s5sn" for this suite.
Dec 27 11:55:32.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 11:55:33.022: INFO: namespace: e2e-tests-dns-8s5sn, resource: bindings, ignored listing per whitelist
Dec 27 11:55:33.068: INFO: namespace e2e-tests-dns-8s5sn deletion completed in 6.152928013s
• [SLOW TEST:28.871 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 11:55:33.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Dec 27 11:55:33.300: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-n8bt2,SelfLink:/api/v1/namespaces/e2e-tests-watch-n8bt2/configmaps/e2e-watch-test-watch-closed,UID:c9728239-289f-11ea-a994-fa163e34d433,ResourceVersion:16232998,Generation:0,CreationTimestamp:2019-12-27 11:55:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 27 11:55:33.300: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-n8bt2,SelfLink:/api/v1/namespaces/e2e-tests-watch-n8bt2/configmaps/e2e-watch-test-watch-closed,UID:c9728239-289f-11ea-a994-fa163e34d433,ResourceVersion:16232999,Generation:0,CreationTimestamp:2019-12-27 11:55:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Dec 27 11:55:33.454: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-n8bt2,SelfLink:/api/v1/namespaces/e2e-tests-watch-n8bt2/configmaps/e2e-watch-test-watch-closed,UID:c9728239-289f-11ea-a994-fa163e34d433,ResourceVersion:16233000,Generation:0,CreationTimestamp:2019-12-27 11:55:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 27 11:55:33.455: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-n8bt2,SelfLink:/api/v1/namespaces/e2e-tests-watch-n8bt2/configmaps/e2e-watch-test-watch-closed,UID:c9728239-289f-11ea-a994-fa163e34d433,ResourceVersion:16233001,Generation:0,CreationTimestamp:2019-12-27 11:55:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 11:55:33.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-n8bt2" for this suite.
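The watch test above closes its watch after two notifications, then opens a new one from the last resourceVersion it observed and expects only the later MODIFIED and DELETED events. A toy replay of that filtering (in-memory events; integer versions are used for clarity, whereas real Kubernetes resourceVersions are opaque strings and must not be compared arithmetically):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    type: str              # ADDED / MODIFIED / DELETED
    resource_version: int  # opaque string in real Kubernetes; an int here for illustration

def resume_watch(events: List[Event], last_seen: int) -> List[Event]:
    """Return only events newer than the last resourceVersion the closed
    watch delivered, as a restarted watch would."""
    return [e for e in events if e.resource_version > last_seen]

history = [Event("ADDED", 16232998), Event("MODIFIED", 16232999),
           Event("MODIFIED", 16233000), Event("DELETED", 16233001)]
# The first watch closed after seeing 16232999.
print([e.type for e in resume_watch(history, 16232999)])  # -> ['MODIFIED', 'DELETED']
```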
Dec 27 11:55:39.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 11:55:39.555: INFO: namespace: e2e-tests-watch-n8bt2, resource: bindings, ignored listing per whitelist
Dec 27 11:55:39.693: INFO: namespace e2e-tests-watch-n8bt2 deletion completed in 6.221814645s
• [SLOW TEST:6.625 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 11:55:39.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 11:55:46.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-2knmx" for this suite.
Dec 27 11:55:52.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 11:55:52.895: INFO: namespace: e2e-tests-namespaces-2knmx, resource: bindings, ignored listing per whitelist
Dec 27 11:55:53.073: INFO: namespace e2e-tests-namespaces-2knmx deletion completed in 6.431544475s
STEP: Destroying namespace "e2e-tests-nsdeletetest-c4qjl" for this suite.
Dec 27 11:55:53.077: INFO: Namespace e2e-tests-nsdeletetest-c4qjl was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-2rvbp" for this suite.
Dec 27 11:55:59.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 11:55:59.277: INFO: namespace: e2e-tests-nsdeletetest-2rvbp, resource: bindings, ignored listing per whitelist
Dec 27 11:55:59.331: INFO: namespace e2e-tests-nsdeletetest-2rvbp deletion completed in 6.253743892s
• [SLOW TEST:19.637 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 11:55:59.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 11:55:59.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-4925p" for this suite.
Dec 27 11:56:23.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 11:56:24.056: INFO: namespace: e2e-tests-pods-4925p, resource: bindings, ignored listing per whitelist
Dec 27 11:56:24.120: INFO: namespace e2e-tests-pods-4925p deletion completed in 24.274681426s
• [SLOW TEST:24.789 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 11:56:24.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-nphpv
Dec 27 11:56:36.350: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-nphpv
STEP: checking the pod's current state and verifying that restartCount is present
Dec 27 11:56:36.364: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:00:37.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-nphpv" for this suite.
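The probe in this test is an exec liveness probe: the kubelet periodically runs `cat /tmp/health` in the container and treats a zero exit code as healthy, so the container is never restarted while the file exists. A simplified sketch of that pass/fail rule (not the kubelet's actual implementation), using a temporary file in place of `/tmp/health`:

```python
import os
import subprocess
import tempfile

def exec_probe(command):
    """An exec probe passes iff the command exits 0 (simplified kubelet rule)."""
    return subprocess.run(command, capture_output=True).returncode == 0

# Simulate /tmp/health with a temporary file.
health = tempfile.NamedTemporaryFile(delete=False)
health.close()
print(exec_probe(["cat", health.name]))  # file exists -> True (stays running)
os.unlink(health.name)
print(exec_probe(["cat", health.name]))  # file gone -> False (would trigger a restart)
```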
Dec 27 12:00:45.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:00:45.373: INFO: namespace: e2e-tests-container-probe-nphpv, resource: bindings, ignored listing per whitelist
Dec 27 12:00:45.642: INFO: namespace e2e-tests-container-probe-nphpv deletion completed in 8.370721508s
• [SLOW TEST:261.523 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:00:45.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-83c2c578-28a0-11ea-bad5-0242ac110005
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-83c2c578-28a0-11ea-bad5-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:02:05.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-n4n49" for this suite.
Dec 27 12:02:29.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:02:29.931: INFO: namespace: e2e-tests-projected-n4n49, resource: bindings, ignored listing per whitelist
Dec 27 12:02:30.223: INFO: namespace e2e-tests-projected-n4n49 deletion completed in 24.402076006s
• [SLOW TEST:104.580 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:02:30.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-c2376c3f-28a0-11ea-bad5-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 27 12:02:30.679: INFO: Waiting up to 5m0s for pod "pod-configmaps-c239dcc2-28a0-11ea-bad5-0242ac110005" in namespace "e2e-tests-configmap-ph2gv" to be "success or failure"
Dec 27 12:02:30.693: INFO: Pod "pod-configmaps-c239dcc2-28a0-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.32358ms
Dec 27 12:02:32.807: INFO: Pod "pod-configmaps-c239dcc2-28a0-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128653496s
Dec 27 12:02:34.828: INFO: Pod "pod-configmaps-c239dcc2-28a0-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149433579s
Dec 27 12:02:37.003: INFO: Pod "pod-configmaps-c239dcc2-28a0-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.323980317s
Dec 27 12:02:39.047: INFO: Pod "pod-configmaps-c239dcc2-28a0-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.368484123s
Dec 27 12:02:41.525: INFO: Pod "pod-configmaps-c239dcc2-28a0-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.8463285s
STEP: Saw pod success
Dec 27 12:02:41.525: INFO: Pod "pod-configmaps-c239dcc2-28a0-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 12:02:41.949: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c239dcc2-28a0-11ea-bad5-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Dec 27 12:02:42.141: INFO: Waiting for pod pod-configmaps-c239dcc2-28a0-11ea-bad5-0242ac110005 to disappear
Dec 27 12:02:42.175: INFO: Pod pod-configmaps-c239dcc2-28a0-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:02:42.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-ph2gv" for this suite.
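These volume tests wait up to 5m0s for the helper pod to reach "success or failure", polling its phase as the Pending/Succeeded lines above show. A condensed sketch of that wait loop (hypothetical `wait_for_phase`; the real framework uses a richer condition function and polling machinery):

```python
import time

def wait_for_phase(get_phase, timeout=300.0, interval=2.0):
    """Poll until the pod is Succeeded (True) or Failed (False);
    raise if the timeout expires first."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase == "Succeeded":
            return True
        if phase == "Failed":
            return False
        time.sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")

phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
print(wait_for_phase(lambda: next(phases), interval=0))  # -> True
```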
Dec 27 12:02:48.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:02:48.428: INFO: namespace: e2e-tests-configmap-ph2gv, resource: bindings, ignored listing per whitelist
Dec 27 12:02:48.566: INFO: namespace e2e-tests-configmap-ph2gv deletion completed in 6.378059946s
• [SLOW TEST:18.342 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:02:48.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-cd1db080-28a0-11ea-bad5-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 27 12:02:49.001: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cd1fb191-28a0-11ea-bad5-0242ac110005" in namespace "e2e-tests-projected-2kmgt" to be "success or failure"
Dec 27 12:02:49.012: INFO: Pod "pod-projected-configmaps-cd1fb191-28a0-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.476714ms
Dec 27 12:02:51.026: INFO: Pod "pod-projected-configmaps-cd1fb191-28a0-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024491446s
Dec 27 12:02:53.039: INFO: Pod "pod-projected-configmaps-cd1fb191-28a0-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037459345s
Dec 27 12:02:55.058: INFO: Pod "pod-projected-configmaps-cd1fb191-28a0-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05677983s
Dec 27 12:02:57.068: INFO: Pod "pod-projected-configmaps-cd1fb191-28a0-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066147991s
Dec 27 12:02:59.081: INFO: Pod "pod-projected-configmaps-cd1fb191-28a0-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.079317745s
STEP: Saw pod success
Dec 27 12:02:59.081: INFO: Pod "pod-projected-configmaps-cd1fb191-28a0-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 12:02:59.088: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-cd1fb191-28a0-11ea-bad5-0242ac110005 container projected-configmap-volume-test:
STEP: delete the pod
Dec 27 12:02:59.725: INFO: Waiting for pod pod-projected-configmaps-cd1fb191-28a0-11ea-bad5-0242ac110005 to disappear
Dec 27 12:03:00.071: INFO: Pod pod-projected-configmaps-cd1fb191-28a0-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:03:00.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2kmgt" for this suite.
Dec 27 12:03:06.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:03:06.416: INFO: namespace: e2e-tests-projected-2kmgt, resource: bindings, ignored listing per whitelist
Dec 27 12:03:06.425: INFO: namespace e2e-tests-projected-2kmgt deletion completed in 6.338122257s

• [SLOW TEST:17.858 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:03:06.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-d7d03fe3-28a0-11ea-bad5-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 27 12:03:06.918: INFO: Waiting up to 5m0s for pod "pod-configmaps-d7d1f1ef-28a0-11ea-bad5-0242ac110005" in namespace "e2e-tests-configmap-8b7lm" to be "success or failure"
Dec 27 12:03:06.960: INFO: Pod "pod-configmaps-d7d1f1ef-28a0-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 41.972981ms
Dec 27 12:03:09.212: INFO: Pod "pod-configmaps-d7d1f1ef-28a0-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.294006603s
Dec 27 12:03:11.228: INFO: Pod "pod-configmaps-d7d1f1ef-28a0-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.309632992s
Dec 27 12:03:13.578: INFO: Pod "pod-configmaps-d7d1f1ef-28a0-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.660342612s
Dec 27 12:03:15.594: INFO: Pod "pod-configmaps-d7d1f1ef-28a0-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.676405006s
Dec 27 12:03:17.609: INFO: Pod "pod-configmaps-d7d1f1ef-28a0-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.691512881s
STEP: Saw pod success
Dec 27 12:03:17.610: INFO: Pod "pod-configmaps-d7d1f1ef-28a0-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 12:03:17.614: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-d7d1f1ef-28a0-11ea-bad5-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Dec 27 12:03:18.509: INFO: Waiting for pod pod-configmaps-d7d1f1ef-28a0-11ea-bad5-0242ac110005 to disappear
Dec 27 12:03:18.784: INFO: Pod pod-configmaps-d7d1f1ef-28a0-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:03:18.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-8b7lm" for this suite.
Dec 27 12:03:24.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:03:24.935: INFO: namespace: e2e-tests-configmap-8b7lm, resource: bindings, ignored listing per whitelist
Dec 27 12:03:25.029: INFO: namespace e2e-tests-configmap-8b7lm deletion completed in 6.233235542s

• [SLOW TEST:18.603 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:03:25.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Dec 27 12:03:33.234: INFO: 8 pods remaining
Dec 27 12:03:33.234: INFO: 0 pods has nil DeletionTimestamp
Dec 27 12:03:33.234: INFO: 
STEP: Gathering metrics
W1227 12:03:33.988388       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 27 12:03:33.988: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:03:33.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-sp68d" for this suite.
Dec 27 12:03:48.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:03:48.186: INFO: namespace: e2e-tests-gc-sp68d, resource: bindings, ignored listing per whitelist
Dec 27 12:03:48.261: INFO: namespace e2e-tests-gc-sp68d deletion completed in 14.215707822s

• [SLOW TEST:23.232 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:03:48.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-q8rf
STEP: Creating a pod to test atomic-volume-subpath
Dec 27 12:03:48.702: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-q8rf" in namespace "e2e-tests-subpath-7zn8s" to be "success or failure"
Dec 27 12:03:48.742: INFO: Pod "pod-subpath-test-configmap-q8rf": Phase="Pending", Reason="", readiness=false. Elapsed: 39.655235ms
Dec 27 12:03:50.757: INFO: Pod "pod-subpath-test-configmap-q8rf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054644676s
Dec 27 12:03:52.801: INFO: Pod "pod-subpath-test-configmap-q8rf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099456199s
Dec 27 12:03:54.812: INFO: Pod "pod-subpath-test-configmap-q8rf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110069444s
Dec 27 12:03:56.829: INFO: Pod "pod-subpath-test-configmap-q8rf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.12686011s
Dec 27 12:03:58.850: INFO: Pod "pod-subpath-test-configmap-q8rf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.147922632s
Dec 27 12:04:01.101: INFO: Pod "pod-subpath-test-configmap-q8rf": Phase="Pending", Reason="", readiness=false. Elapsed: 12.399263631s
Dec 27 12:04:03.144: INFO: Pod "pod-subpath-test-configmap-q8rf": Phase="Pending", Reason="", readiness=false. Elapsed: 14.441826579s
Dec 27 12:04:05.179: INFO: Pod "pod-subpath-test-configmap-q8rf": Phase="Running", Reason="", readiness=false. Elapsed: 16.476873881s
Dec 27 12:04:07.195: INFO: Pod "pod-subpath-test-configmap-q8rf": Phase="Running", Reason="", readiness=false. Elapsed: 18.492992605s
Dec 27 12:04:09.209: INFO: Pod "pod-subpath-test-configmap-q8rf": Phase="Running", Reason="", readiness=false. Elapsed: 20.507220955s
Dec 27 12:04:11.222: INFO: Pod "pod-subpath-test-configmap-q8rf": Phase="Running", Reason="", readiness=false. Elapsed: 22.519608189s
Dec 27 12:04:13.244: INFO: Pod "pod-subpath-test-configmap-q8rf": Phase="Running", Reason="", readiness=false. Elapsed: 24.542294233s
Dec 27 12:04:15.260: INFO: Pod "pod-subpath-test-configmap-q8rf": Phase="Running", Reason="", readiness=false. Elapsed: 26.558220728s
Dec 27 12:04:17.277: INFO: Pod "pod-subpath-test-configmap-q8rf": Phase="Running", Reason="", readiness=false. Elapsed: 28.574965527s
Dec 27 12:04:19.291: INFO: Pod "pod-subpath-test-configmap-q8rf": Phase="Running", Reason="", readiness=false. Elapsed: 30.588645889s
Dec 27 12:04:21.656: INFO: Pod "pod-subpath-test-configmap-q8rf": Phase="Running", Reason="", readiness=false. Elapsed: 32.953810003s
Dec 27 12:04:23.668: INFO: Pod "pod-subpath-test-configmap-q8rf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.966148635s
STEP: Saw pod success
Dec 27 12:04:23.668: INFO: Pod "pod-subpath-test-configmap-q8rf" satisfied condition "success or failure"
Dec 27 12:04:23.679: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-q8rf container test-container-subpath-configmap-q8rf:
STEP: delete the pod
Dec 27 12:04:24.694: INFO: Waiting for pod pod-subpath-test-configmap-q8rf to disappear
Dec 27 12:04:24.716: INFO: Pod pod-subpath-test-configmap-q8rf no longer exists
STEP: Deleting pod pod-subpath-test-configmap-q8rf
Dec 27 12:04:24.716: INFO: Deleting pod "pod-subpath-test-configmap-q8rf" in namespace "e2e-tests-subpath-7zn8s"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:04:24.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-7zn8s" for this suite.
Dec 27 12:04:30.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:04:31.002: INFO: namespace: e2e-tests-subpath-7zn8s, resource: bindings, ignored listing per whitelist
Dec 27 12:04:31.065: INFO: namespace e2e-tests-subpath-7zn8s deletion completed in 6.311585426s

• [SLOW TEST:42.804 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server
  should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:04:31.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Dec 27 12:04:31.379: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix658735342/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:04:31.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-85xnd" for this suite.
Dec 27 12:04:37.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:04:37.685: INFO: namespace: e2e-tests-kubectl-85xnd, resource: bindings, ignored listing per whitelist
Dec 27 12:04:37.729: INFO: namespace e2e-tests-kubectl-85xnd deletion completed in 6.221387565s

• [SLOW TEST:6.664 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:04:37.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 27 12:05:00.065: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 27 12:05:00.151: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 27 12:05:02.151: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 27 12:05:02.170: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 27 12:05:04.151: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 27 12:05:04.165: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 27 12:05:06.152: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 27 12:05:06.179: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 27 12:05:08.151: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 27 12:05:08.166: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 27 12:05:10.152: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 27 12:05:10.172: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 27 12:05:12.151: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 27 12:05:12.165: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 27 12:05:14.151: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 27 12:05:14.187: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 27 12:05:16.151: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 27 12:05:16.164: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 27 12:05:18.152: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 27 12:05:18.473: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 27 12:05:20.151: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 27 12:05:20.174: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 27 12:05:22.152: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 27 12:05:22.174: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 27 12:05:24.151: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 27 12:05:24.167: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:05:24.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-w7ggl" for this suite.
Dec 27 12:05:48.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:05:48.354: INFO: namespace: e2e-tests-container-lifecycle-hook-w7ggl, resource: bindings, ignored listing per whitelist
Dec 27 12:05:48.622: INFO: namespace e2e-tests-container-lifecycle-hook-w7ggl deletion completed in 24.419698501s

• [SLOW TEST:70.892 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:05:48.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-38551bc5-28a1-11ea-bad5-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 27 12:05:48.864: INFO: Waiting up to 5m0s for pod "pod-configmaps-3857e5a5-28a1-11ea-bad5-0242ac110005" in namespace "e2e-tests-configmap-47tnf" to be "success or failure"
Dec 27 12:05:48.977: INFO: Pod "pod-configmaps-3857e5a5-28a1-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 113.219386ms
Dec 27 12:05:50.993: INFO: Pod "pod-configmaps-3857e5a5-28a1-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129648507s
Dec 27 12:05:53.004: INFO: Pod "pod-configmaps-3857e5a5-28a1-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.140496126s
Dec 27 12:05:55.243: INFO: Pod "pod-configmaps-3857e5a5-28a1-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.379451784s
Dec 27 12:05:57.249: INFO: Pod "pod-configmaps-3857e5a5-28a1-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.385370444s
Dec 27 12:05:59.273: INFO: Pod "pod-configmaps-3857e5a5-28a1-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.409273677s
STEP: Saw pod success
Dec 27 12:05:59.273: INFO: Pod "pod-configmaps-3857e5a5-28a1-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 12:05:59.297: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-3857e5a5-28a1-11ea-bad5-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Dec 27 12:05:59.545: INFO: Waiting for pod pod-configmaps-3857e5a5-28a1-11ea-bad5-0242ac110005 to disappear
Dec 27 12:05:59.551: INFO: Pod pod-configmaps-3857e5a5-28a1-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:05:59.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-47tnf" for this suite.
Dec 27 12:06:05.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:06:05.657: INFO: namespace: e2e-tests-configmap-47tnf, resource: bindings, ignored listing per whitelist
Dec 27 12:06:05.743: INFO: namespace e2e-tests-configmap-47tnf deletion completed in 6.186450975s

• [SLOW TEST:17.120 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:06:05.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-42c7c668-28a1-11ea-bad5-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-42c7c6d6-28a1-11ea-bad5-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-42c7c668-28a1-11ea-bad5-0242ac110005
STEP: Updating configmap cm-test-opt-upd-42c7c6d6-28a1-11ea-bad5-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-42c7c727-28a1-11ea-bad5-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:07:49.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-4nnqt" for this suite.
Dec 27 12:08:13.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:08:13.740: INFO: namespace: e2e-tests-configmap-4nnqt, resource: bindings, ignored listing per whitelist
Dec 27 12:08:13.804: INFO: namespace e2e-tests-configmap-4nnqt deletion completed in 24.344134106s

• [SLOW TEST:128.060 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:08:13.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Dec 27 12:08:14.124: INFO: Waiting up to 5m0s for pod "client-containers-8eef390b-28a1-11ea-bad5-0242ac110005" in namespace "e2e-tests-containers-dvrwr" to be "success or failure"
Dec 27 12:08:14.137: INFO: Pod "client-containers-8eef390b-28a1-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.422832ms
Dec 27 12:08:16.673: INFO: Pod "client-containers-8eef390b-28a1-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.549687874s
Dec 27 12:08:18.697: INFO: Pod "client-containers-8eef390b-28a1-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.573076303s
Dec 27 12:08:20.790: INFO: Pod "client-containers-8eef390b-28a1-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.66615294s
Dec 27 12:08:22.812: INFO: Pod "client-containers-8eef390b-28a1-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.688496591s
Dec 27 12:08:24.843: INFO: Pod "client-containers-8eef390b-28a1-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.719250926s
STEP: Saw pod success
Dec 27 12:08:24.843: INFO: Pod "client-containers-8eef390b-28a1-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 12:08:24.855: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-8eef390b-28a1-11ea-bad5-0242ac110005 container test-container:
STEP: delete the pod
Dec 27 12:08:25.016: INFO: Waiting for pod client-containers-8eef390b-28a1-11ea-bad5-0242ac110005 to disappear
Dec 27 12:08:25.114: INFO: Pod client-containers-8eef390b-28a1-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:08:25.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-dvrwr" for this suite.
Dec 27 12:08:31.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:08:31.238: INFO: namespace: e2e-tests-containers-dvrwr, resource: bindings, ignored listing per whitelist
Dec 27 12:08:31.371: INFO: namespace e2e-tests-containers-dvrwr deletion completed in 6.247064775s

• [SLOW TEST:17.567 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] Daemon set [Serial]
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:08:31.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 27 12:08:31.713: INFO: Number of nodes with available pods: 0
Dec 27 12:08:31.713: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:08:33.106: INFO: Number of nodes with available pods: 0
Dec 27 12:08:33.106: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:08:33.735: INFO: Number of nodes with available pods: 0
Dec 27 12:08:33.735: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:08:34.729: INFO: Number of nodes with available pods: 0
Dec 27 12:08:34.729: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:08:35.749: INFO: Number of nodes with available pods: 0
Dec 27 12:08:35.750: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:08:36.733: INFO: Number of nodes with available pods: 0
Dec 27 12:08:36.733: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:08:37.885: INFO: Number of nodes with available pods: 0
Dec 27 12:08:37.885: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:08:38.819: INFO: Number of nodes with available pods: 0
Dec 27 12:08:38.819: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:08:39.795: INFO: Number of nodes with available pods: 0
Dec 27 12:08:39.795: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:08:40.782: INFO: Number of nodes with available pods: 1
Dec 27 12:08:40.782: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Dec 27 12:08:40.975: INFO: Number of nodes with available pods: 1
Dec 27 12:08:40.975: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-p27br, will wait for the garbage collector to delete the pods
Dec 27 12:08:43.026: INFO: Deleting DaemonSet.extensions daemon-set took: 323.431744ms
Dec 27 12:08:43.126: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.339907ms
Dec 27 12:08:48.449: INFO: Number of nodes with available pods: 0
Dec 27 12:08:48.449: INFO: Number of running nodes: 0, number of available pods: 0
Dec 27 12:08:48.457: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-p27br/daemonsets","resourceVersion":"16234443"},"items":null}
Dec 27 12:08:48.462: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-p27br/pods","resourceVersion":"16234443"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:08:48.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-p27br" for this suite.
Dec 27 12:08:54.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:08:55.041: INFO: namespace: e2e-tests-daemonsets-p27br, resource: bindings, ignored listing per whitelist
Dec 27 12:08:55.044: INFO: namespace e2e-tests-daemonsets-p27br deletion completed in 6.39775825s

• [SLOW TEST:23.673 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:08:55.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-a76f9eaf-28a1-11ea-bad5-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 27 12:08:55.239: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a770cc96-28a1-11ea-bad5-0242ac110005" in namespace "e2e-tests-projected-2hxlb" to be "success or failure"
Dec 27 12:08:55.258: INFO: Pod "pod-projected-configmaps-a770cc96-28a1-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.935293ms
Dec 27 12:08:57.294: INFO: Pod "pod-projected-configmaps-a770cc96-28a1-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054563717s
Dec 27 12:08:59.311: INFO: Pod "pod-projected-configmaps-a770cc96-28a1-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071496632s
Dec 27 12:09:01.326: INFO: Pod "pod-projected-configmaps-a770cc96-28a1-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087055961s
Dec 27 12:09:03.361: INFO: Pod "pod-projected-configmaps-a770cc96-28a1-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.12150396s
Dec 27 12:09:05.375: INFO: Pod "pod-projected-configmaps-a770cc96-28a1-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.135500916s
STEP: Saw pod success
Dec 27 12:09:05.375: INFO: Pod "pod-projected-configmaps-a770cc96-28a1-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 12:09:05.383: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-a770cc96-28a1-11ea-bad5-0242ac110005 container projected-configmap-volume-test:
STEP: delete the pod
Dec 27 12:09:05.501: INFO: Waiting for pod pod-projected-configmaps-a770cc96-28a1-11ea-bad5-0242ac110005 to disappear
Dec 27 12:09:05.506: INFO: Pod pod-projected-configmaps-a770cc96-28a1-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:09:05.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2hxlb" for this suite.
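The test above mounts a ConfigMap into a pod through a projected volume while running the container as a non-root user, then waits for the pod to reach "success or failure". A hedged sketch of such a pod — the UID, image, command, key, and shortened names are illustrative assumptions, not values from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative name
spec:
  securityContext:
    runAsUser: 1000                        # non-root, as the test title requires
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                         # assumed image
    command: ["cat", "/etc/projected-configmap-volume/data-1"]   # assumed key path
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-example   # illustrative name
```

With `restartPolicy: Never`, the pod reaches Phase="Succeeded" once the container prints the mounted key and exits 0, which is the condition the Elapsed polling above is waiting on.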
Dec 27 12:09:11.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:09:11.904: INFO: namespace: e2e-tests-projected-2hxlb, resource: bindings, ignored listing per whitelist
Dec 27 12:09:11.985: INFO: namespace e2e-tests-projected-2hxlb deletion completed in 6.46438016s

• [SLOW TEST:16.940 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:09:11.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-b188b5df-28a1-11ea-bad5-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 27 12:09:12.190: INFO: Waiting up to 5m0s for pod "pod-secrets-b1898b39-28a1-11ea-bad5-0242ac110005" in namespace "e2e-tests-secrets-6xnsx" to be "success or failure"
Dec 27 12:09:12.201: INFO: Pod "pod-secrets-b1898b39-28a1-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.981973ms
Dec 27 12:09:14.326: INFO: Pod "pod-secrets-b1898b39-28a1-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136498496s
Dec 27 12:09:16.898: INFO: Pod "pod-secrets-b1898b39-28a1-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.708143136s
Dec 27 12:09:18.916: INFO: Pod "pod-secrets-b1898b39-28a1-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.726547642s
Dec 27 12:09:20.935: INFO: Pod "pod-secrets-b1898b39-28a1-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.744766836s
Dec 27 12:09:23.590: INFO: Pod "pod-secrets-b1898b39-28a1-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.399665691s
STEP: Saw pod success
Dec 27 12:09:23.590: INFO: Pod "pod-secrets-b1898b39-28a1-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 12:09:23.615: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-b1898b39-28a1-11ea-bad5-0242ac110005 container secret-env-test:
STEP: delete the pod
Dec 27 12:09:24.010: INFO: Waiting for pod pod-secrets-b1898b39-28a1-11ea-bad5-0242ac110005 to disappear
Dec 27 12:09:24.020: INFO: Pod pod-secrets-b1898b39-28a1-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:09:24.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-6xnsx" for this suite.
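This test exposes a Secret's key as a container environment variable and checks it shows up in the pod's environment. A hedged sketch of the pattern being exercised — the image, command, key, and shortened names are illustrative assumptions, not values from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox             # assumed image
    command: ["sh", "-c", "env"]   # assumed command: print the environment
    env:
    - name: SECRET_DATA        # illustrative variable name
      valueFrom:
        secretKeyRef:
          name: secret-test-example   # illustrative Secret name
          key: data-1                 # assumed key
```

The framework then reads the container's logs (the `secret-env-test:` line above) and asserts the expected value appears in the printed environment.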
Dec 27 12:09:30.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:09:30.158: INFO: namespace: e2e-tests-secrets-6xnsx, resource: bindings, ignored listing per whitelist
Dec 27 12:09:30.384: INFO: namespace e2e-tests-secrets-6xnsx deletion completed in 6.29553055s

• [SLOW TEST:18.399 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:09:30.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 27 12:09:41.483: INFO: Successfully updated pod "pod-update-bc9625cd-28a1-11ea-bad5-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Dec 27 12:09:41.508: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:09:41.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-2d5g4" for this suite.
Dec 27 12:10:05.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:10:05.698: INFO: namespace: e2e-tests-pods-2d5g4, resource: bindings, ignored listing per whitelist
Dec 27 12:10:05.723: INFO: namespace e2e-tests-pods-2d5g4 deletion completed in 24.203247298s

• [SLOW TEST:35.339 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:10:05.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:10:06.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-xhbls" for this suite.
Dec 27 12:10:12.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:10:12.582: INFO: namespace: e2e-tests-kubelet-test-xhbls, resource: bindings, ignored listing per whitelist
Dec 27 12:10:12.996: INFO: namespace e2e-tests-kubelet-test-xhbls deletion completed in 6.796600741s

• [SLOW TEST:7.272 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:10:12.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-mqcg6
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-mqcg6
STEP: Deleting pre-stop pod
Dec 27 12:10:38.466: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:10:38.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-mqcg6" for this suite.
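The `"prestop": 1` counter in the JSON above shows that deleting the tester pod ran its preStop lifecycle hook, which reported back to the server pod before the container was killed. A hedged sketch of a pod with such a hook — the image and the hook command are assumptions for illustration; only the `tester`/`server` pod names come from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tester    # matches the tester pod name in the log; the rest is illustrative
spec:
  containers:
  - name: tester
    image: busybox   # assumed image
    lifecycle:
      preStop:
        exec:
          # assumed command: notify the server pod so it can record the prestop call
          command: ["sh", "-c", "wget -qO- http://server:8080/prestop"]
```

The kubelet runs the preStop hook before sending SIGTERM to the container, and both must finish within the pod's termination grace period, which is why the server sees the hook fire during deletion.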
Dec 27 12:11:18.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:11:18.781: INFO: namespace: e2e-tests-prestop-mqcg6, resource: bindings, ignored listing per whitelist
Dec 27 12:11:18.906: INFO: namespace e2e-tests-prestop-mqcg6 deletion completed in 40.326141006s

• [SLOW TEST:65.910 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:11:18.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-x2ltv
I1227 12:11:19.081929 8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-x2ltv, replica count: 1
I1227 12:11:20.132542 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1227 12:11:21.132869 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1227 12:11:22.133344 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1227 12:11:23.133730 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1227 12:11:24.134085 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1227 12:11:25.134452 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1227 12:11:26.134702 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1227 12:11:27.134944 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1227 12:11:28.135202 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1227 12:11:29.135497 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Dec 27 12:11:29.293: INFO: Created: latency-svc-4r58l
Dec 27 12:11:29.342: INFO: Got endpoints: latency-svc-4r58l [107.18551ms]
Dec 27 12:11:29.590: INFO: Created: latency-svc-kzkbh
Dec 27 12:11:29.608: INFO: Got endpoints: latency-svc-kzkbh [263.668527ms]
Dec 27 12:11:29.689: INFO: Created: latency-svc-mjw66
Dec 27 12:11:29.707: INFO: Got endpoints: latency-svc-mjw66 [363.211015ms]
Dec 27 12:11:29.767: INFO: Created: latency-svc-l7445
Dec 27 12:11:29.915: INFO: Got endpoints: latency-svc-l7445 [570.706447ms]
Dec 27 12:11:29.938: INFO: Created: latency-svc-67mks
Dec 27 12:11:30.116: INFO: Created: latency-svc-gfbgb
Dec 27 12:11:30.127: INFO: Got endpoints: latency-svc-67mks [783.963979ms]
Dec 27 12:11:30.154: INFO: Got endpoints: latency-svc-gfbgb [809.656315ms]
Dec 27 12:11:30.342: INFO: Created: latency-svc-4jt4k
Dec 27 12:11:30.365: INFO: Got endpoints: latency-svc-4jt4k [1.021711239s]
Dec 27 12:11:30.584: INFO: Created: latency-svc-7ftdb
Dec 27 12:11:30.606: INFO: Got endpoints: latency-svc-7ftdb [1.261359003s]
Dec 27 12:11:30.668: INFO: Created: latency-svc-l5525
Dec 27 12:11:30.772: INFO: Got endpoints: latency-svc-l5525 [1.427623714s]
Dec 27 12:11:30.802: INFO: Created: latency-svc-gx7dq
Dec 27 12:11:30.811: INFO: Got endpoints: latency-svc-gx7dq [1.466868943s]
Dec 27 12:11:31.025: INFO: Created: latency-svc-7dwq4
Dec 27 12:11:31.062: INFO: Got endpoints: latency-svc-7dwq4 [1.717962143s]
Dec 27 12:11:31.099: INFO: Created: latency-svc-rch9t
Dec 27 12:11:31.212: INFO: Got endpoints: latency-svc-rch9t [1.868143754s]
Dec 27 12:11:31.269: INFO: Created: latency-svc-5rdpv
Dec 27 12:11:31.282: INFO: Got endpoints: latency-svc-5rdpv [1.937827721s]
Dec 27 12:11:31.423: INFO: Created: latency-svc-vz8fr
Dec 27 12:11:31.426: INFO: Got endpoints: latency-svc-vz8fr [2.083069909s]
Dec 27 12:11:31.466: INFO: Created: latency-svc-dgnkk
Dec 27 12:11:31.673: INFO: Got endpoints: latency-svc-dgnkk [2.329068544s]
Dec 27 12:11:31.698: INFO: Created: latency-svc-lrgc8
Dec 27 12:11:31.728: INFO: Got endpoints: latency-svc-lrgc8 [2.38456687s]
Dec 27 12:11:31.945: INFO: Created: latency-svc-w9lxt
Dec 27 12:11:31.953: INFO: Got endpoints: latency-svc-w9lxt [2.345738611s]
Dec 27 12:11:32.231: INFO: Created: latency-svc-8dqj7
Dec 27 12:11:32.378: INFO: Got endpoints: latency-svc-8dqj7 [2.671159418s]
Dec 27 12:11:32.385: INFO: Created: latency-svc-rbr5n
Dec 27 12:11:32.408: INFO: Got endpoints: latency-svc-rbr5n [2.492854902s]
Dec 27 12:11:32.494: INFO: Created: latency-svc-kjmxp
Dec 27 12:11:32.632: INFO: Got endpoints: latency-svc-kjmxp [2.50541159s]
Dec 27 12:11:32.680: INFO: Created: latency-svc-rg7ps
Dec 27 12:11:32.719: INFO: Got endpoints: latency-svc-rg7ps [2.564688626s]
Dec 27 12:11:32.892: INFO: Created: latency-svc-lqbzh
Dec 27 12:11:33.192: INFO: Got endpoints: latency-svc-lqbzh [2.826952184s]
Dec 27 12:11:33.208: INFO: Created: latency-svc-zjpn5
Dec 27 12:11:33.220: INFO: Got endpoints: latency-svc-zjpn5 [2.614473946s]
Dec 27 12:11:33.384: INFO: Created: latency-svc-sxwzm
Dec 27 12:11:33.410: INFO: Got endpoints: latency-svc-sxwzm [2.638342965s]
Dec 27 12:11:33.614: INFO: Created: latency-svc-l5kb2
Dec 27 12:11:33.647: INFO: Got endpoints: latency-svc-l5kb2 [2.835408075s]
Dec 27 12:11:33.855: INFO: Created: latency-svc-cl4nm
Dec 27 12:11:33.873: INFO: Got endpoints: latency-svc-cl4nm [2.810980185s]
Dec 27 12:11:34.035: INFO: Created: latency-svc-74pqv
Dec 27 12:11:34.260: INFO: Got endpoints: latency-svc-74pqv [3.048165486s]
Dec 27 12:11:34.275: INFO: Created: latency-svc-h9krh
Dec 27 12:11:34.313: INFO: Got endpoints: latency-svc-h9krh [3.03097035s]
Dec 27 12:11:34.495: INFO: Created: latency-svc-5xxkn
Dec 27 12:11:34.537: INFO: Got endpoints: latency-svc-5xxkn [3.111249847s]
Dec 27 12:11:34.673: INFO: Created: latency-svc-gzhpp
Dec 27 12:11:34.686: INFO: Got endpoints: latency-svc-gzhpp [3.013354371s]
Dec 27 12:11:34.749: INFO: Created: latency-svc-56psk
Dec 27 12:11:34.754: INFO: Got endpoints: latency-svc-56psk [3.026424186s]
Dec 27 12:11:34.920: INFO: Created: latency-svc-4mw7h
Dec 27 12:11:34.953: INFO: Got endpoints: latency-svc-4mw7h [2.99964616s]
Dec 27 12:11:35.152: INFO: Created: latency-svc-8v2vs
Dec 27 12:11:35.193: INFO: Got endpoints: latency-svc-8v2vs [2.814930039s]
Dec 27 12:11:35.426: INFO: Created: latency-svc-sgbq7
Dec 27 12:11:35.461: INFO: Got endpoints: latency-svc-sgbq7 [3.053065768s]
Dec 27 12:11:35.592: INFO: Created: latency-svc-vg2w8
Dec 27 12:11:35.603: INFO: Got endpoints: latency-svc-vg2w8 [2.970749142s]
Dec 27 12:11:35.653: INFO: Created: latency-svc-d7pw7
Dec 27 12:11:35.876: INFO: Got endpoints: latency-svc-d7pw7 [3.157316065s]
Dec 27 12:11:35.937: INFO: Created: latency-svc-vhs48
Dec 27 12:11:36.061: INFO: Got endpoints: latency-svc-vhs48 [2.869032879s]
Dec 27 12:11:36.085: INFO: Created: latency-svc-nf5gq
Dec 27 12:11:36.094: INFO: Got endpoints: latency-svc-nf5gq [2.873387307s]
Dec 27 12:11:36.255: INFO: Created: latency-svc-m4vsz
Dec 27 12:11:36.285: INFO: Got endpoints: latency-svc-m4vsz [2.874343109s]
Dec 27 12:11:36.329: INFO: Created: latency-svc-n42tq
Dec 27 12:11:36.441: INFO: Got endpoints: latency-svc-n42tq [2.793896491s]
Dec 27 12:11:36.476: INFO: Created: latency-svc-89hvl
Dec 27 12:11:36.501: INFO: Got endpoints: latency-svc-89hvl [2.627850215s]
Dec 27 12:11:36.829: INFO: Created: latency-svc-zg8hl
Dec 27 12:11:36.960: INFO: Got endpoints: latency-svc-zg8hl [2.6999719s]
Dec 27 12:11:37.013: INFO: Created: latency-svc-zl5vl
Dec 27 12:11:37.013: INFO: Got endpoints: latency-svc-zl5vl [2.700441719s]
Dec 27 12:11:37.064: INFO: Created: latency-svc-pj5kb
Dec 27 12:11:37.187: INFO: Got endpoints: latency-svc-pj5kb [2.649813995s]
Dec 27 12:11:37.201: INFO: Created: latency-svc-hzxh7
Dec 27 12:11:37.229: INFO: Got endpoints: latency-svc-hzxh7 [2.542474703s]
Dec 27 12:11:37.273: INFO: Created: latency-svc-kgk64
Dec 27 12:11:37.487: INFO: Got endpoints: latency-svc-kgk64 [2.732098165s]
Dec 27 12:11:37.523: INFO: Created: latency-svc-ndd8q
Dec 27 12:11:37.772: INFO: Got endpoints: latency-svc-ndd8q [2.818382338s]
Dec 27 12:11:37.789: INFO: Created: latency-svc-4l5b8
Dec 27 12:11:37.822: INFO: Got endpoints: latency-svc-4l5b8 [2.628253167s]
Dec 27 12:11:38.014: INFO: Created: latency-svc-896gr
Dec 27 12:11:38.047: INFO: Got endpoints: latency-svc-896gr [2.586065285s]
Dec 27 12:11:38.307: INFO: Created: latency-svc-pf2tp
Dec 27 12:11:38.340: INFO: Got endpoints: latency-svc-pf2tp [2.736546905s]
Dec 27 12:11:38.556: INFO: Created: latency-svc-k88bx
Dec 27 12:11:38.626: INFO: Got endpoints: latency-svc-k88bx [2.749257749s]
Dec 27 12:11:38.855: INFO: Created: latency-svc-gv4rd
Dec 27 12:11:39.011: INFO: Got endpoints: latency-svc-gv4rd [2.949687433s]
Dec 27 12:11:39.116: INFO: Created: latency-svc-4gxf5
Dec 27 12:11:39.231: INFO: Got endpoints: latency-svc-4gxf5 [3.136976174s]
Dec 27 12:11:39.296: INFO: Created: latency-svc-vm44f
Dec 27 12:11:39.296: INFO: Got endpoints: latency-svc-vm44f [3.010967519s]
Dec 27 12:11:39.438: INFO: Created: latency-svc-bsk49
Dec 27 12:11:39.642: INFO: Created: latency-svc-6kq5h
Dec 27 12:11:39.657: INFO: Got endpoints: latency-svc-6kq5h [3.155877559s]
Dec 27 12:11:39.683: INFO: Got endpoints: latency-svc-bsk49 [3.242339149s]
Dec 27 12:11:39.753: INFO: Created: latency-svc-8kgwk
Dec 27 12:11:39.833: INFO: Got endpoints: latency-svc-8kgwk [2.872828342s]
Dec 27 12:11:39.936: INFO: Created: latency-svc-zqc8b
Dec 27 12:11:40.049: INFO: Got endpoints: latency-svc-zqc8b [3.035916604s]
Dec 27 12:11:40.079: INFO: Created: latency-svc-cl2ct
Dec 27 12:11:40.109: INFO: Got endpoints: latency-svc-cl2ct [2.921973173s]
Dec 27 12:11:40.149: INFO: Created: latency-svc-x9nwx
Dec 27 12:11:40.238: INFO: Got endpoints: latency-svc-x9nwx [3.009011197s]
Dec 27 12:11:40.293: INFO: Created: latency-svc-45mv9
Dec 27 12:11:40.313: INFO: Got endpoints: latency-svc-45mv9 [2.825845432s]
Dec 27 12:11:40.493: INFO: Created: latency-svc-kmznw
Dec 27 12:11:40.547: INFO: Got endpoints: latency-svc-kmznw [2.774938798s]
Dec 27 12:11:40.673: INFO: Created: latency-svc-xnp85
Dec 27 12:11:40.698: INFO: Got endpoints: latency-svc-xnp85 [2.876006224s]
Dec 27 12:11:40.750: INFO: Created: latency-svc-qd9xw
Dec 27 12:11:40.841: INFO: Got endpoints: latency-svc-qd9xw [2.793290929s]
Dec 27 12:11:40.951: INFO: Created: latency-svc-rblhx
Dec 27 12:11:41.043: INFO: Got endpoints: latency-svc-rblhx [2.703098448s]
Dec 27 12:11:41.075: INFO: Created: latency-svc-ktqf5
Dec 27 12:11:41.230: INFO: Created: latency-svc-wznq6
Dec 27 12:11:41.233: INFO: Got endpoints: latency-svc-ktqf5 [2.606990024s]
Dec 27 12:11:41.249: INFO: Got endpoints: latency-svc-wznq6 [2.237206143s]
Dec 27 12:11:41.305: INFO: Created: latency-svc-v82nw
Dec 27 12:11:41.412: INFO: Got endpoints: latency-svc-v82nw [2.18099322s]
Dec 27 12:11:41.428: INFO: Created: latency-svc-5gb9n
Dec 27 12:11:41.437: INFO: Got endpoints: latency-svc-5gb9n [2.141328852s]
Dec 27 12:11:41.497: INFO: Created: latency-svc-dnv7v
Dec 27 12:11:41.506: INFO: Got endpoints: latency-svc-dnv7v [1.849055944s]
Dec 27 12:11:41.642: INFO: Created: latency-svc-z9hrv
Dec 27 12:11:41.678: INFO: Got endpoints: latency-svc-z9hrv [1.995038926s]
Dec 27 12:11:41.690: INFO: Created: latency-svc-pgzw2
Dec 27 12:11:41.705: INFO: Got endpoints: latency-svc-pgzw2 [1.87123847s]
Dec 27 12:11:41.813: INFO: Created: latency-svc-tg7dr
Dec 27 12:11:41.855: INFO: Got endpoints: latency-svc-tg7dr [1.805538697s]
Dec 27 12:11:41.860: INFO: Created: latency-svc-6cs7c
Dec 27 12:11:41.868: INFO: Got endpoints: latency-svc-6cs7c [1.758534639s]
Dec 27 12:11:42.012: INFO: Created: latency-svc-b62k5
Dec 27 12:11:42.029: INFO: Got endpoints: latency-svc-b62k5 [1.791713417s]
Dec 27 12:11:42.086: INFO: Created: latency-svc-gh284
Dec 27 12:11:42.096: INFO: Got endpoints: latency-svc-gh284 [1.783457937s]
Dec 27 12:11:42.280: INFO: Created: latency-svc-bnnqb
Dec 27 12:11:42.298: INFO: Got endpoints: latency-svc-bnnqb [1.750755629s]
Dec 27 12:11:42.469: INFO: Created: latency-svc-k6hlx
Dec 27 12:11:42.518: INFO: Got endpoints: latency-svc-k6hlx [1.82009548s]
Dec 27 12:11:42.663: INFO: Created: latency-svc-8qsp7
Dec 27 12:11:42.663: INFO: Got endpoints: latency-svc-8qsp7 [1.822514351s]
Dec 27 12:11:42.729: INFO: Created: latency-svc-5f2tv
Dec 27 12:11:42.799: INFO: Got endpoints: latency-svc-5f2tv [1.755609422s]
Dec 27 12:11:42.857: INFO: Created: latency-svc-8pt6f
Dec 27 12:11:42.992: INFO: Got endpoints: latency-svc-8pt6f [1.7596163s]
Dec 27 12:11:43.019: INFO: Created: latency-svc-vkzvk
Dec 27 12:11:43.049: INFO: Got endpoints: latency-svc-vkzvk [1.800628829s]
Dec 27 12:11:43.204: INFO: Created: latency-svc-f5hvs
Dec 27 12:11:43.220: INFO: Got endpoints: latency-svc-f5hvs [1.807870028s]
Dec 27 12:11:43.299: INFO: Created: latency-svc-qs8vc
Dec 27 12:11:43.408: INFO: Got endpoints: latency-svc-qs8vc [1.970459839s]
Dec 27 12:11:43.426: INFO: Created: latency-svc-cvx8t
Dec 27 12:11:43.449: INFO: Got endpoints: latency-svc-cvx8t [1.942525901s]
Dec 27 12:11:43.595: INFO: Created: latency-svc-lns9w
Dec 27 12:11:43.635: INFO: Got endpoints: latency-svc-lns9w [1.956571125s]
Dec 27 12:11:43.783: INFO: Created: latency-svc-q6mf4
Dec 27 12:11:43.789: INFO: Got endpoints: latency-svc-q6mf4 [2.084699042s]
Dec 27 12:11:44.687: INFO: Created: latency-svc-xsnmp
Dec 27 12:11:44.879: INFO: Got endpoints: latency-svc-xsnmp [3.024115409s]
Dec 27 12:11:45.177: INFO: Created: latency-svc-hfqq6
Dec 27 12:11:45.186: INFO: Got endpoints: latency-svc-hfqq6 [3.318237656s]
Dec 27 12:11:45.431: INFO: Created: latency-svc-cs4px
Dec 27 12:11:45.484: INFO: Got endpoints: latency-svc-cs4px [3.45398249s]
Dec 27 12:11:45.550: INFO: Created: latency-svc-9znzj
Dec 27 12:11:45.600: INFO: Got endpoints: latency-svc-9znzj [3.503272961s]
Dec 27 12:11:45.648: INFO: Created: latency-svc-2cw9k
Dec 27 12:11:45.708: INFO: Got endpoints: latency-svc-2cw9k [3.410300083s]
Dec 27 12:11:45.847: INFO: Created: latency-svc-h4nxs
Dec 27 12:11:45.866: INFO: Got endpoints: latency-svc-h4nxs [3.347765081s]
Dec 27 12:11:46.033: INFO: Created: latency-svc-jrwb2
Dec 27 12:11:46.038: INFO: Got endpoints: latency-svc-jrwb2 [3.374290469s]
Dec 27 12:11:46.100: INFO: Created: latency-svc-c6kxc
Dec 27 12:11:46.270: INFO: Got endpoints: latency-svc-c6kxc [3.471269381s]
Dec 27 12:11:46.292: INFO: Created: latency-svc-tdj2v
Dec 27 12:11:46.360: INFO: Got endpoints: latency-svc-tdj2v [3.367171925s]
Dec 27 12:11:46.520: INFO: Created: latency-svc-lxrsw
Dec 27 12:11:46.586: INFO: Created: latency-svc-wvgnm
Dec 27 12:11:46.590: INFO: Got endpoints: latency-svc-lxrsw [319.632029ms]
Dec 27 12:11:46.598: INFO: Got endpoints: latency-svc-wvgnm [3.548270219s]
Dec 27 12:11:46.722: INFO: Created: latency-svc-9sgcr
Dec 27 12:11:46.748: INFO: Got endpoints: latency-svc-9sgcr [3.527863859s]
Dec 27 12:11:46.913: INFO: Created: latency-svc-p448m
Dec 27 12:11:46.929: INFO: Got endpoints: latency-svc-p448m [3.520663769s]
Dec 27 12:11:46.974: INFO: Created: latency-svc-rmd72
Dec 27 12:11:47.093: INFO: Got endpoints: latency-svc-rmd72 [3.643976256s]
Dec 27 12:11:47.108: INFO: Created: latency-svc-p9tqc
Dec 27 12:11:47.136: INFO: Got endpoints: latency-svc-p9tqc [3.500802587s]
Dec 27 12:11:47.202: INFO: Created: latency-svc-qlsvz
Dec 27 12:11:47.279: INFO: Got endpoints: latency-svc-qlsvz [3.489017238s]
Dec 27 12:11:47.320: INFO: Created: latency-svc-jjxgh
Dec 27 12:11:47.323: INFO: Got endpoints: latency-svc-jjxgh [2.443676117s]
Dec 27 12:11:47.389: INFO: Created: latency-svc-trx76
Dec 27 12:11:47.575: INFO: Got endpoints: latency-svc-trx76 [2.38877313s]
Dec 27 12:11:47.583: INFO: Created: latency-svc-2rjtk
Dec 27 12:11:47.635: INFO: Got endpoints: latency-svc-2rjtk [2.151263058s]
Dec 27 12:11:47.643: INFO: Created: latency-svc-bv9pg
Dec 27 12:11:47.655: INFO: Got endpoints: latency-svc-bv9pg [2.055544205s]
Dec 27 12:11:47.814: INFO: Created: latency-svc-jlb8v
Dec 27 12:11:47.838: INFO: Got endpoints: latency-svc-jlb8v [2.129883734s]
Dec 27 12:11:47.902: INFO: Created: latency-svc-lf8ll
Dec 27 12:11:47.997: INFO: Got endpoints: latency-svc-lf8ll [2.131165289s]
Dec 27 12:11:48.040: INFO: Created: latency-svc-79bxj
Dec 27 12:11:48.056: INFO: Got endpoints: latency-svc-79bxj [2.01832738s]
Dec 27 12:11:48.257: INFO: Created: latency-svc-lfxb7
Dec 27 12:11:48.274: INFO: Got endpoints: latency-svc-lfxb7 [1.913747228s]
Dec 27 12:11:48.410: INFO: Created: latency-svc-cbdcn
Dec 27 12:11:48.410: INFO: Got endpoints: latency-svc-cbdcn [1.82044273s]
Dec 27 12:11:48.488: INFO: Created: latency-svc-jdwfd
Dec 27 12:11:48.618: INFO: Got endpoints: latency-svc-jdwfd [2.019825223s]
Dec 27 12:11:48.772: INFO: Created: latency-svc-sxdwn
Dec 27 12:11:48.817: INFO: Got endpoints: latency-svc-sxdwn [2.068725437s]
Dec 27 12:11:48.935: INFO: Created: latency-svc-dzj8l
Dec 27 12:11:48.980: INFO: Got endpoints: latency-svc-dzj8l [2.051219643s]
Dec 27 12:11:49.138: INFO: Created: latency-svc-4zrp4
Dec 27 12:11:49.149: INFO: Created: latency-svc-hs6sn
Dec 27 12:11:49.150: INFO: Got endpoints: latency-svc-4zrp4 [2.056671537s]
Dec 27 12:11:49.184: INFO: Got endpoints: latency-svc-hs6sn [2.047548981s]
Dec 27 12:11:49.370: INFO: Created: latency-svc-m89tt
Dec 27 12:11:49.537: INFO: Got endpoints: latency-svc-m89tt [2.258054409s]
Dec 27 12:11:49.550: INFO: Created: latency-svc-dw2mb
Dec 27 12:11:49.562: INFO: Got endpoints: latency-svc-dw2mb [2.239010353s]
Dec 27 12:11:49.754: INFO: Created: latency-svc-kb9fr
Dec 27 12:11:49.769: INFO: Created: latency-svc-rqgbb
Dec 27 12:11:49.850: INFO: Created: latency-svc-nsn8c
Dec 27 12:11:50.008: INFO: Got endpoints: latency-svc-kb9fr [2.432275201s]
Dec 27 12:11:50.119: INFO: Got endpoints: latency-svc-rqgbb [2.484148938s]
Dec 27 12:11:50.136: INFO: Created: latency-svc-jzbf7
Dec 27 12:11:50.144: INFO: Got endpoints: latency-svc-nsn8c [2.488355136s]
Dec 27 12:11:50.149: INFO: Got endpoints: latency-svc-jzbf7 [2.310559083s]
Dec 27 12:11:50.370: INFO: Created: latency-svc-pw22p
Dec 27 12:11:50.370: INFO: Got endpoints: latency-svc-pw22p [2.372904804s]
Dec 27 12:11:50.538: INFO: Created: latency-svc-rxv5b
Dec 27 12:11:50.570: INFO: Got endpoints: latency-svc-rxv5b [2.513499257s]
Dec 27 12:11:50.793: INFO: Created: latency-svc-d2vhh
Dec 27 12:11:50.816: INFO: Got endpoints: latency-svc-d2vhh [2.542640967s]
Dec 27 12:11:50.861: INFO: Created: latency-svc-b4hkc
Dec 27 12:11:50.970: INFO: Got endpoints: latency-svc-b4hkc [2.559628848s]
Dec 27 12:11:51.003: INFO: Created: latency-svc-wxkth
Dec 27 12:11:51.057: INFO: Got endpoints: latency-svc-wxkth [2.439062685s]
Dec 27 12:11:51.060: INFO: Created: latency-svc-w2kd9
Dec 27 12:11:51.150: INFO: Got endpoints: latency-svc-w2kd9 [2.333360794s]
Dec 27 12:11:51.194: INFO: Created: latency-svc-gntd4
Dec 27 12:11:51.194: INFO: Got endpoints: latency-svc-gntd4 [2.213999579s]
Dec 27 12:11:51.261: INFO: Created: latency-svc-6vcpv
Dec 27 12:11:51.392: INFO: Got endpoints: latency-svc-6vcpv [2.242199299s]
Dec 27 12:11:51.436: INFO: Created: latency-svc-wtg9b
Dec 27 12:11:51.455: INFO: Got endpoints: latency-svc-wtg9b [2.271563496s]
Dec 27 12:11:51.658: INFO: Created: latency-svc-qn8rd
Dec 27 12:11:51.678: INFO: Got endpoints: latency-svc-qn8rd [2.141189746s]
Dec 27 12:11:51.739: INFO: Created: latency-svc-fx7lx
Dec 27 12:11:51.835: INFO: Got endpoints: latency-svc-fx7lx [2.273081479s]
Dec 27 12:11:51.852: INFO: Created: latency-svc-kvnbr
Dec 27 12:11:51.873: INFO: Got endpoints: latency-svc-kvnbr [1.865434505s]
Dec 27 12:11:52.021: INFO: Created: latency-svc-x74xq
Dec 27 12:11:52.036: INFO: Got endpoints: latency-svc-x74xq [1.917263283s]
Dec 27 12:11:52.115: INFO: Created: latency-svc-w924f
Dec 27 12:11:52.232: INFO: Got endpoints: latency-svc-w924f [2.08851748s]
Dec 27 12:11:52.293: INFO: Created: latency-svc-w6cdw
Dec 27 12:11:52.317: INFO: Got endpoints: latency-svc-w6cdw [2.16805056s]
Dec 27 12:11:52.581: INFO: Created: latency-svc-vpmrc
Dec 27 12:11:52.825: INFO: Got endpoints: latency-svc-vpmrc [2.454442257s]
Dec 27 12:11:52.839: INFO: Created: latency-svc-tx9d4
Dec 27 12:11:52.934: INFO: Created: latency-svc-mht5r
Dec 27 12:11:53.225: INFO: Got endpoints: latency-svc-tx9d4 [2.655217401s]
Dec 27 12:11:53.258: INFO: Got endpoints: latency-svc-mht5r [2.441945091s]
Dec 27 12:11:53.658: INFO: Created: latency-svc-xxjkh
Dec 27 12:11:53.658: INFO: Got endpoints: latency-svc-xxjkh [2.68772131s]
Dec 27 12:11:53.915: INFO: Created: latency-svc-scp5l
Dec 27 12:11:53.962: INFO: Got endpoints: latency-svc-scp5l [2.904748719s]
Dec 27 12:11:54.186: INFO: Created: latency-svc-f97zx
Dec 27 12:11:54.230: INFO: Got endpoints: latency-svc-f97zx [3.079132791s]
Dec 27 12:11:54.433: INFO: Created: latency-svc-b8wvr
Dec 27 12:11:54.617: INFO: Got endpoints: latency-svc-b8wvr [3.422577697s]
Dec 27 12:11:54.621: INFO: Created: latency-svc-gcfkr
Dec 27 12:11:54.641: INFO: Got endpoints: latency-svc-gcfkr [3.248299358s]
Dec 27 12:11:54.688: INFO: Created: latency-svc-bf7ct
Dec 27 12:11:54.815: INFO: Got endpoints: latency-svc-bf7ct [3.359393058s]
Dec 27 12:11:54.832: INFO: Created: latency-svc-5p8lm
Dec 27 12:11:54.902: INFO: Got endpoints: latency-svc-5p8lm [3.223686654s]
Dec 27 12:11:55.029: INFO: Created: latency-svc-jcxwn
Dec 27 12:11:55.046: INFO: Got endpoints: latency-svc-jcxwn [3.210193441s]
Dec 27 12:11:55.083: INFO: Created: latency-svc-mtgbl
Dec 27 12:11:55.231: INFO: Got endpoints: latency-svc-mtgbl [3.358114144s]
Dec 27 12:11:55.251: INFO: Created: latency-svc-p45lq
Dec 27 12:11:55.252: INFO: Got endpoints: latency-svc-p45lq [3.21581771s]
Dec 27 12:11:55.386: INFO: Created: latency-svc-fbz7l
Dec 27 12:11:55.412: INFO: Got endpoints: latency-svc-fbz7l [3.17931905s]
Dec 27 12:11:55.575: INFO: Created: latency-svc-trrgv
Dec 27 12:11:55.603: INFO: Got endpoints: latency-svc-trrgv [3.28577234s]
Dec 27 12:11:55.649: INFO: Created: latency-svc-hddml
Dec 27 12:11:55.669: INFO: Got endpoints: latency-svc-hddml [2.844440816s]
Dec 27 12:11:55.785: INFO: Created: latency-svc-fsmpn
Dec 27 12:11:55.799: INFO: Got endpoints: latency-svc-fsmpn [2.57393959s]
Dec 27 12:11:55.962: INFO: Created: latency-svc-cx2dp
Dec 27 12:11:55.984: INFO: Got endpoints: latency-svc-cx2dp [2.72570113s]
Dec 27 12:11:56.048: INFO: Created: latency-svc-g2sj9
Dec 27 12:11:56.165: INFO: Got endpoints: latency-svc-g2sj9 [2.507035093s]
Dec 27 12:11:56.241: INFO: Created: latency-svc-jxgbw
Dec 27 12:11:56.391: INFO: Got endpoints: latency-svc-jxgbw [2.429430756s]
Dec 27 12:11:56.411: INFO: Created: latency-svc-l6cj4
Dec 27 12:11:56.419: INFO: Got endpoints: latency-svc-l6cj4 [2.189429028s]
Dec 27 12:11:56.622: INFO: Created: latency-svc-7lj9z
Dec 27 12:11:56.645: INFO: Got endpoints:
latency-svc-7lj9z [2.027861515s] Dec 27 12:11:57.230: INFO: Created: latency-svc-wwmj6 Dec 27 12:11:57.297: INFO: Got endpoints: latency-svc-wwmj6 [2.656233877s] Dec 27 12:11:57.425: INFO: Created: latency-svc-vj8nj Dec 27 12:11:57.441: INFO: Got endpoints: latency-svc-vj8nj [2.625641782s] Dec 27 12:11:57.624: INFO: Created: latency-svc-ckqw6 Dec 27 12:11:57.670: INFO: Got endpoints: latency-svc-ckqw6 [2.768227772s] Dec 27 12:11:57.726: INFO: Created: latency-svc-hjjwh Dec 27 12:11:57.836: INFO: Got endpoints: latency-svc-hjjwh [2.790074004s] Dec 27 12:11:57.901: INFO: Created: latency-svc-gck5n Dec 27 12:11:57.932: INFO: Got endpoints: latency-svc-gck5n [2.700462411s] Dec 27 12:11:58.062: INFO: Created: latency-svc-9fswv Dec 27 12:11:58.223: INFO: Got endpoints: latency-svc-9fswv [2.970276662s] Dec 27 12:11:58.235: INFO: Created: latency-svc-j9dsg Dec 27 12:11:58.258: INFO: Got endpoints: latency-svc-j9dsg [2.845892368s] Dec 27 12:11:58.407: INFO: Created: latency-svc-866xh Dec 27 12:11:58.430: INFO: Got endpoints: latency-svc-866xh [2.826697902s] Dec 27 12:11:58.613: INFO: Created: latency-svc-p28qp Dec 27 12:11:58.639: INFO: Got endpoints: latency-svc-p28qp [2.969656377s] Dec 27 12:11:58.706: INFO: Created: latency-svc-swkt7 Dec 27 12:11:58.773: INFO: Got endpoints: latency-svc-swkt7 [2.973319306s] Dec 27 12:11:58.782: INFO: Created: latency-svc-m6dx6 Dec 27 12:11:58.803: INFO: Got endpoints: latency-svc-m6dx6 [2.818873784s] Dec 27 12:11:58.847: INFO: Created: latency-svc-vxcsb Dec 27 12:11:58.982: INFO: Got endpoints: latency-svc-vxcsb [2.81707407s] Dec 27 12:11:59.010: INFO: Created: latency-svc-b27zd Dec 27 12:11:59.011: INFO: Got endpoints: latency-svc-b27zd [2.619383509s] Dec 27 12:11:59.085: INFO: Created: latency-svc-m785s Dec 27 12:11:59.164: INFO: Got endpoints: latency-svc-m785s [2.744250112s] Dec 27 12:11:59.186: INFO: Created: latency-svc-6z8lp Dec 27 12:11:59.200: INFO: Got endpoints: latency-svc-6z8lp [2.554833183s] Dec 27 12:11:59.260: INFO: 
Created: latency-svc-nmtnx Dec 27 12:11:59.363: INFO: Got endpoints: latency-svc-nmtnx [2.065468522s] Dec 27 12:11:59.392: INFO: Created: latency-svc-mwc8p Dec 27 12:11:59.410: INFO: Got endpoints: latency-svc-mwc8p [1.968992214s] Dec 27 12:11:59.631: INFO: Created: latency-svc-zf8l9 Dec 27 12:11:59.644: INFO: Got endpoints: latency-svc-zf8l9 [1.973094033s] Dec 27 12:11:59.771: INFO: Created: latency-svc-pm2vs Dec 27 12:11:59.798: INFO: Got endpoints: latency-svc-pm2vs [1.962329347s] Dec 27 12:11:59.863: INFO: Created: latency-svc-s4zq2 Dec 27 12:11:59.965: INFO: Got endpoints: latency-svc-s4zq2 [2.032967859s] Dec 27 12:12:00.038: INFO: Created: latency-svc-kq6qd Dec 27 12:12:00.054: INFO: Got endpoints: latency-svc-kq6qd [1.830647874s] Dec 27 12:12:00.183: INFO: Created: latency-svc-nswk9 Dec 27 12:12:00.205: INFO: Got endpoints: latency-svc-nswk9 [1.94729072s] Dec 27 12:12:00.392: INFO: Created: latency-svc-5pgc4 Dec 27 12:12:00.402: INFO: Got endpoints: latency-svc-5pgc4 [1.971903468s] Dec 27 12:12:00.469: INFO: Created: latency-svc-pcchk Dec 27 12:12:00.625: INFO: Got endpoints: latency-svc-pcchk [1.985400328s] Dec 27 12:12:00.699: INFO: Created: latency-svc-86g94 Dec 27 12:12:00.803: INFO: Got endpoints: latency-svc-86g94 [2.030147693s] Dec 27 12:12:00.879: INFO: Created: latency-svc-69h54 Dec 27 12:12:01.002: INFO: Got endpoints: latency-svc-69h54 [2.199321634s] Dec 27 12:12:01.019: INFO: Created: latency-svc-7p42x Dec 27 12:12:01.034: INFO: Got endpoints: latency-svc-7p42x [2.051553199s] Dec 27 12:12:01.187: INFO: Created: latency-svc-x67wb Dec 27 12:12:01.205: INFO: Got endpoints: latency-svc-x67wb [2.193926768s] Dec 27 12:12:01.377: INFO: Created: latency-svc-qd2gf Dec 27 12:12:01.407: INFO: Got endpoints: latency-svc-qd2gf [2.242835074s] Dec 27 12:12:01.457: INFO: Created: latency-svc-4v69b Dec 27 12:12:01.622: INFO: Got endpoints: latency-svc-4v69b [2.422186663s] Dec 27 12:12:01.632: INFO: Created: latency-svc-8zlrj Dec 27 12:12:01.650: INFO: Got 
endpoints: latency-svc-8zlrj [2.286774786s] Dec 27 12:12:01.690: INFO: Created: latency-svc-d8lnk Dec 27 12:12:01.822: INFO: Got endpoints: latency-svc-d8lnk [2.411891522s] Dec 27 12:12:01.911: INFO: Created: latency-svc-7gqbg Dec 27 12:12:02.056: INFO: Got endpoints: latency-svc-7gqbg [2.412596277s] Dec 27 12:12:02.837: INFO: Created: latency-svc-h5vf7 Dec 27 12:12:02.854: INFO: Got endpoints: latency-svc-h5vf7 [3.056222717s] Dec 27 12:12:03.032: INFO: Created: latency-svc-f9m6r Dec 27 12:12:03.049: INFO: Got endpoints: latency-svc-f9m6r [3.083424585s] Dec 27 12:12:03.086: INFO: Created: latency-svc-qd8ls Dec 27 12:12:03.104: INFO: Got endpoints: latency-svc-qd8ls [3.050415372s] Dec 27 12:12:03.327: INFO: Created: latency-svc-w5vw4 Dec 27 12:12:03.367: INFO: Got endpoints: latency-svc-w5vw4 [3.161199202s] Dec 27 12:12:03.610: INFO: Created: latency-svc-h5zzm Dec 27 12:12:03.619: INFO: Got endpoints: latency-svc-h5zzm [3.216504061s] Dec 27 12:12:03.799: INFO: Created: latency-svc-6bzhj Dec 27 12:12:03.835: INFO: Got endpoints: latency-svc-6bzhj [3.210078888s] Dec 27 12:12:03.838: INFO: Created: latency-svc-k2v4r Dec 27 12:12:03.952: INFO: Got endpoints: latency-svc-k2v4r [3.148815314s] Dec 27 12:12:03.968: INFO: Created: latency-svc-hqwz7 Dec 27 12:12:03.973: INFO: Got endpoints: latency-svc-hqwz7 [2.970469427s] Dec 27 12:12:03.973: INFO: Latencies: [263.668527ms 319.632029ms 363.211015ms 570.706447ms 783.963979ms 809.656315ms 1.021711239s 1.261359003s 1.427623714s 1.466868943s 1.717962143s 1.750755629s 1.755609422s 1.758534639s 1.7596163s 1.783457937s 1.791713417s 1.800628829s 1.805538697s 1.807870028s 1.82009548s 1.82044273s 1.822514351s 1.830647874s 1.849055944s 1.865434505s 1.868143754s 1.87123847s 1.913747228s 1.917263283s 1.937827721s 1.942525901s 1.94729072s 1.956571125s 1.962329347s 1.968992214s 1.970459839s 1.971903468s 1.973094033s 1.985400328s 1.995038926s 2.01832738s 2.019825223s 2.027861515s 2.030147693s 2.032967859s 2.047548981s 2.051219643s 
2.051553199s 2.055544205s 2.056671537s 2.065468522s 2.068725437s 2.083069909s 2.084699042s 2.08851748s 2.129883734s 2.131165289s 2.141189746s 2.141328852s 2.151263058s 2.16805056s 2.18099322s 2.189429028s 2.193926768s 2.199321634s 2.213999579s 2.237206143s 2.239010353s 2.242199299s 2.242835074s 2.258054409s 2.271563496s 2.273081479s 2.286774786s 2.310559083s 2.329068544s 2.333360794s 2.345738611s 2.372904804s 2.38456687s 2.38877313s 2.411891522s 2.412596277s 2.422186663s 2.429430756s 2.432275201s 2.439062685s 2.441945091s 2.443676117s 2.454442257s 2.484148938s 2.488355136s 2.492854902s 2.50541159s 2.507035093s 2.513499257s 2.542474703s 2.542640967s 2.554833183s 2.559628848s 2.564688626s 2.57393959s 2.586065285s 2.606990024s 2.614473946s 2.619383509s 2.625641782s 2.627850215s 2.628253167s 2.638342965s 2.649813995s 2.655217401s 2.656233877s 2.671159418s 2.68772131s 2.6999719s 2.700441719s 2.700462411s 2.703098448s 2.72570113s 2.732098165s 2.736546905s 2.744250112s 2.749257749s 2.768227772s 2.774938798s 2.790074004s 2.793290929s 2.793896491s 2.810980185s 2.814930039s 2.81707407s 2.818382338s 2.818873784s 2.825845432s 2.826697902s 2.826952184s 2.835408075s 2.844440816s 2.845892368s 2.869032879s 2.872828342s 2.873387307s 2.874343109s 2.876006224s 2.904748719s 2.921973173s 2.949687433s 2.969656377s 2.970276662s 2.970469427s 2.970749142s 2.973319306s 2.99964616s 3.009011197s 3.010967519s 3.013354371s 3.024115409s 3.026424186s 3.03097035s 3.035916604s 3.048165486s 3.050415372s 3.053065768s 3.056222717s 3.079132791s 3.083424585s 3.111249847s 3.136976174s 3.148815314s 3.155877559s 3.157316065s 3.161199202s 3.17931905s 3.210078888s 3.210193441s 3.21581771s 3.216504061s 3.223686654s 3.242339149s 3.248299358s 3.28577234s 3.318237656s 3.347765081s 3.358114144s 3.359393058s 3.367171925s 3.374290469s 3.410300083s 3.422577697s 3.45398249s 3.471269381s 3.489017238s 3.500802587s 3.503272961s 3.520663769s 3.527863859s 3.548270219s 3.643976256s] Dec 27 12:12:03.974: INFO: 50 %ile: 
2.559628848s Dec 27 12:12:03.974: INFO: 90 %ile: 3.242339149s Dec 27 12:12:03.974: INFO: 99 %ile: 3.548270219s Dec 27 12:12:03.974: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 12:12:03.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-x2ltv" for this suite. Dec 27 12:12:56.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 12:12:56.083: INFO: namespace: e2e-tests-svc-latency-x2ltv, resource: bindings, ignored listing per whitelist Dec 27 12:12:56.205: INFO: namespace e2e-tests-svc-latency-x2ltv deletion completed in 52.225051905s • [SLOW TEST:97.299 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 12:12:56.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-pt9sr [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-pt9sr STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-pt9sr Dec 27 12:12:56.660: INFO: Found 0 stateful pods, waiting for 1 Dec 27 12:13:06.696: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Dec 27 12:13:06.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pt9sr ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 27 12:13:07.368: INFO: stderr: "" Dec 27 12:13:07.368: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 27 12:13:07.368: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 27 12:13:07.384: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Dec 27 12:13:17.398: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 27 12:13:17.398: INFO: Waiting for statefulset status.replicas updated to 0 Dec 27 12:13:17.480: INFO: POD NODE PHASE GRACE CONDITIONS Dec 27 12:13:17.480: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:12:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:08 
+0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:12:56 +0000 UTC }] Dec 27 12:13:17.480: INFO: Dec 27 12:13:17.480: INFO: StatefulSet ss has not reached scale 3, at 1 Dec 27 12:13:18.813: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.987021034s Dec 27 12:13:19.927: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.653856139s Dec 27 12:13:20.941: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.540694688s Dec 27 12:13:21.978: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.525987405s Dec 27 12:13:23.025: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.489680138s Dec 27 12:13:24.051: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.442225386s Dec 27 12:13:25.641: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.416070608s Dec 27 12:13:26.818: INFO: Verifying statefulset ss doesn't scale past 3 for another 826.558421ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-pt9sr Dec 27 12:13:28.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pt9sr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 27 12:13:29.517: INFO: stderr: "" Dec 27 12:13:29.517: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 27 12:13:29.517: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 27 12:13:29.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pt9sr ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 27 12:13:29.715: INFO: rc: 1 Dec 27 12:13:29.715: INFO: Waiting 10s to retry failed RunHostCmd: error running 
&{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pt9sr ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc0013f1c80 exit status 1 true [0xc000b2a0d0 0xc000b2a0e8 0xc000b2a100] [0xc000b2a0d0 0xc000b2a0e8 0xc000b2a100] [0xc000b2a0e0 0xc000b2a0f8] [0x935700 0x935700] 0xc0014a10e0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Dec 27 12:13:39.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pt9sr ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 27 12:13:40.300: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n" Dec 27 12:13:40.300: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 27 12:13:40.300: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 27 12:13:40.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pt9sr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 27 12:13:40.899: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n" Dec 27 12:13:40.899: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 27 12:13:40.899: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 27 12:13:40.934: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 27 12:13:40.934: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 27 12:13:40.934: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true 
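The failed `kubectl exec` above (rc: 1, `unable to upgrade connection: container not found ("nginx")`) is handled by waiting 10s and re-running the command, and the retry at 12:13:39 succeeds once the container is up. A minimal sketch of that retry-until-success pattern, assuming a generic command runner (the function name and signature here are illustrative, not the e2e framework's actual Go `RunHostCmd` helper):

```python
import subprocess
import time

def run_host_cmd_with_retries(cmd, retries=5, delay=10):
    """Run a command, retrying on non-zero exit status.

    Mirrors the log's behaviour of sleeping a fixed delay between
    attempts; `cmd` is an argv list such as
    ["kubectl", "exec", "ss-1", "--", "/bin/sh", "-c", "mv ..."].
    """
    last = None
    for attempt in range(1, retries + 1):
        last = subprocess.run(cmd, capture_output=True, text=True)
        if last.returncode == 0:
            # Success: return stdout, as the framework logs it.
            return last.stdout
        # e.g. "error: unable to upgrade connection: container not found"
        time.sleep(delay)
    raise RuntimeError(
        f"command failed after {retries} attempts: {last.stderr.strip()}"
    )
```

A fixed delay is enough here because the condition being waited on (the pod's container starting) normally resolves within one or two attempts, as it does in this run.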
STEP: Scale down will not halt with unhealthy stateful pod Dec 27 12:13:40.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pt9sr ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 27 12:13:41.545: INFO: stderr: "" Dec 27 12:13:41.545: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 27 12:13:41.545: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 27 12:13:41.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pt9sr ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 27 12:13:41.983: INFO: stderr: "" Dec 27 12:13:41.983: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 27 12:13:41.983: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 27 12:13:41.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pt9sr ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 27 12:13:42.995: INFO: stderr: "" Dec 27 12:13:42.995: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 27 12:13:42.995: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 27 12:13:42.995: INFO: Waiting for statefulset status.replicas updated to 0 Dec 27 12:13:43.025: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 27 12:13:43.025: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Dec 27 12:13:43.025: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Dec 27 12:13:43.100: INFO: POD 
NODE PHASE GRACE CONDITIONS Dec 27 12:13:43.100: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:12:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:12:56 +0000 UTC }] Dec 27 12:13:43.100: INFO: ss-1 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC }] Dec 27 12:13:43.100: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC }] Dec 27 12:13:43.100: INFO: Dec 27 12:13:43.100: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 27 12:13:44.909: INFO: POD NODE PHASE GRACE CONDITIONS Dec 27 12:13:44.909: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:12:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:12:56 +0000 UTC }] Dec 27 12:13:44.909: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC }] Dec 27 12:13:44.910: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC }] Dec 27 12:13:44.910: INFO: Dec 27 12:13:44.910: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 27 12:13:46.036: INFO: POD NODE PHASE GRACE CONDITIONS Dec 27 12:13:46.036: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:12:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:12:56 +0000 UTC }] Dec 27 12:13:46.036: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized 
True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC }] Dec 27 12:13:46.036: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC }] Dec 27 12:13:46.036: INFO: Dec 27 12:13:46.036: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 27 12:13:47.913: INFO: POD NODE PHASE GRACE CONDITIONS Dec 27 12:13:47.913: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:12:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:12:56 +0000 UTC }] Dec 27 12:13:47.913: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady 
containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC }] Dec 27 12:13:47.913: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC }] Dec 27 12:13:47.913: INFO: Dec 27 12:13:47.913: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 27 12:13:48.931: INFO: POD NODE PHASE GRACE CONDITIONS Dec 27 12:13:48.931: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:12:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:12:56 +0000 UTC }] Dec 27 12:13:48.931: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC }] Dec 27 12:13:48.931: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 
+0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC }] Dec 27 12:13:48.931: INFO: Dec 27 12:13:48.931: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 27 12:13:49.953: INFO: POD NODE PHASE GRACE CONDITIONS Dec 27 12:13:49.953: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:12:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:12:56 +0000 UTC }] Dec 27 12:13:49.953: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC }] Dec 27 12:13:49.953: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 
+0000 UTC }] Dec 27 12:13:49.953: INFO: Dec 27 12:13:49.953: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 27 12:13:50.972: INFO: POD NODE PHASE GRACE CONDITIONS Dec 27 12:13:50.972: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:12:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:12:56 +0000 UTC }] Dec 27 12:13:50.972: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC }] Dec 27 12:13:50.972: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC }] Dec 27 12:13:50.972: INFO: Dec 27 12:13:50.972: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 27 12:13:51.990: INFO: POD NODE PHASE GRACE CONDITIONS Dec 27 12:13:51.990: INFO: ss-0 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:12:56 
+0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:12:56 +0000 UTC }] Dec 27 12:13:51.990: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC }] Dec 27 12:13:51.990: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC }] Dec 27 12:13:51.990: INFO: Dec 27 12:13:51.990: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 27 12:13:53.001: INFO: POD NODE PHASE GRACE CONDITIONS Dec 27 12:13:53.001: INFO: ss-0 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:12:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:12:56 +0000 UTC }] Dec 27 12:13:53.001: INFO: ss-1 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC }] Dec 27 12:13:53.002: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:13:17 +0000 UTC }] Dec 27 12:13:53.002: INFO: Dec 27 12:13:53.002: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-pt9sr Dec 27 12:13:54.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pt9sr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 27 12:13:54.167: INFO: rc: 1 Dec 27 12:13:54.167: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pt9sr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc000fe50b0 exit status 1 true [0xc001613160 0xc001613178 0xc001613190] [0xc001613160 0xc001613178
0xc001613190] [0xc001613170 0xc001613188] [0x935700 0x935700] 0xc002040720 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Dec 27 12:14:04.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pt9sr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 27 12:14:04.276: INFO: rc: 1 Dec 27 12:14:04.277: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pt9sr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00267a120 exit status 1 true [0xc000a28038 0xc000a28138 0xc000a281e8] [0xc000a28038 0xc000a28138 0xc000a281e8] [0xc000a28060 0xc000a281b0] [0x935700 0x935700] 0xc001a3c540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
[The same RunHostCmd invocation was retried every 10s from 12:14:14 through 12:18:48; each attempt returned rc: 1 with the same error: Error from server (NotFound): pods "ss-0" not found.]
Dec 27 12:18:58.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pt9sr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 27 12:18:58.668: INFO: rc: 1 Dec 27 12:18:58.668: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Dec 27 12:18:58.668: INFO: Scaling statefulset ss to 0 Dec 27 12:18:58.691: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Dec 27 12:18:58.693: INFO: Deleting all statefulset in ns e2e-tests-statefulset-pt9sr Dec 27 12:18:58.696: INFO: Scaling statefulset ss to 0 Dec 27 12:18:58.705: INFO: Waiting for statefulset status.replicas updated to 0 Dec 27 12:18:58.713: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 12:18:58.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-pt9sr" for this suite.
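The retry loop visible in the log above (run kubectl exec, get rc: 1, wait 10s, try again) is a plain fixed-interval retry. A minimal sketch in Python, assuming a generic shell command; the function name and signature are illustrative stand-ins, not the e2e framework's actual Go RunHostCmd API:

```python
import subprocess
import time

def run_host_cmd_with_retries(cmd, retries=30, interval=10):
    """Run a shell command, retrying at a fixed interval on nonzero exit.

    Mirrors the pattern in the log (rc: 1 -> wait 10s -> retry); returns
    the (returncode, stdout) of the first success or of the final attempt.
    """
    for attempt in range(retries):
        proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if proc.returncode == 0:
            return proc.returncode, proc.stdout
        if attempt < retries - 1:
            time.sleep(interval)
    # Give up after the last attempt, returning whatever we got.
    return proc.returncode, proc.stdout
```

With retries around 30 and a 10-second interval, the loop gives up after roughly five minutes, consistent with the 12:13:54 to 12:18:58 window in this test.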
Dec 27 12:19:06.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 12:19:07.104: INFO: namespace: e2e-tests-statefulset-pt9sr, resource: bindings, ignored listing per whitelist Dec 27 12:19:07.207: INFO: namespace e2e-tests-statefulset-pt9sr deletion completed in 8.380057422s • [SLOW TEST:371.001 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 12:19:07.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-5j97 STEP: Creating a pod to test atomic-volume-subpath Dec 27 12:19:07.471: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-5j97" in namespace "e2e-tests-subpath-lslk5" to be 
"success or failure" Dec 27 12:19:07.490: INFO: Pod "pod-subpath-test-downwardapi-5j97": Phase="Pending", Reason="", readiness=false. Elapsed: 19.056011ms Dec 27 12:19:09.506: INFO: Pod "pod-subpath-test-downwardapi-5j97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034966047s Dec 27 12:19:11.514: INFO: Pod "pod-subpath-test-downwardapi-5j97": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042887764s Dec 27 12:19:13.881: INFO: Pod "pod-subpath-test-downwardapi-5j97": Phase="Pending", Reason="", readiness=false. Elapsed: 6.410589122s Dec 27 12:19:16.258: INFO: Pod "pod-subpath-test-downwardapi-5j97": Phase="Pending", Reason="", readiness=false. Elapsed: 8.786746415s Dec 27 12:19:18.272: INFO: Pod "pod-subpath-test-downwardapi-5j97": Phase="Pending", Reason="", readiness=false. Elapsed: 10.800821197s Dec 27 12:19:20.286: INFO: Pod "pod-subpath-test-downwardapi-5j97": Phase="Pending", Reason="", readiness=false. Elapsed: 12.815508858s Dec 27 12:19:22.424: INFO: Pod "pod-subpath-test-downwardapi-5j97": Phase="Pending", Reason="", readiness=false. Elapsed: 14.953022776s Dec 27 12:19:24.447: INFO: Pod "pod-subpath-test-downwardapi-5j97": Phase="Running", Reason="", readiness=false. Elapsed: 16.975993644s Dec 27 12:19:26.488: INFO: Pod "pod-subpath-test-downwardapi-5j97": Phase="Running", Reason="", readiness=false. Elapsed: 19.017116777s Dec 27 12:19:28.535: INFO: Pod "pod-subpath-test-downwardapi-5j97": Phase="Running", Reason="", readiness=false. Elapsed: 21.064707938s Dec 27 12:19:30.574: INFO: Pod "pod-subpath-test-downwardapi-5j97": Phase="Running", Reason="", readiness=false. Elapsed: 23.10324685s Dec 27 12:19:32.603: INFO: Pod "pod-subpath-test-downwardapi-5j97": Phase="Running", Reason="", readiness=false. Elapsed: 25.132489289s Dec 27 12:19:34.621: INFO: Pod "pod-subpath-test-downwardapi-5j97": Phase="Running", Reason="", readiness=false. 
Elapsed: 27.150385325s Dec 27 12:19:36.650: INFO: Pod "pod-subpath-test-downwardapi-5j97": Phase="Running", Reason="", readiness=false. Elapsed: 29.179174136s Dec 27 12:19:38.669: INFO: Pod "pod-subpath-test-downwardapi-5j97": Phase="Running", Reason="", readiness=false. Elapsed: 31.197882503s Dec 27 12:19:40.698: INFO: Pod "pod-subpath-test-downwardapi-5j97": Phase="Running", Reason="", readiness=false. Elapsed: 33.227360766s Dec 27 12:19:42.713: INFO: Pod "pod-subpath-test-downwardapi-5j97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.242597881s STEP: Saw pod success Dec 27 12:19:42.713: INFO: Pod "pod-subpath-test-downwardapi-5j97" satisfied condition "success or failure" Dec 27 12:19:42.720: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-5j97 container test-container-subpath-downwardapi-5j97: STEP: delete the pod Dec 27 12:19:42.981: INFO: Waiting for pod pod-subpath-test-downwardapi-5j97 to disappear Dec 27 12:19:43.004: INFO: Pod pod-subpath-test-downwardapi-5j97 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-5j97 Dec 27 12:19:43.005: INFO: Deleting pod "pod-subpath-test-downwardapi-5j97" in namespace "e2e-tests-subpath-lslk5" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 12:19:43.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-lslk5" for this suite. 
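The "Waiting up to 5m0s ... to be \"success or failure\"" entries above show the framework polling the pod phase roughly every two seconds until it reaches a terminal state. A minimal Python sketch of that wait loop, under the assumption that any callable returning the current phase string stands in for the API read (the helper name is hypothetical, not the framework's):

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until it returns 'Succeeded' or 'Failed',
    mirroring the e2e framework's 'success or failure' wait
    (5m0s timeout; ~2s interval inferred from the Elapsed deltas)."""
    start = time.monotonic()
    while True:
        elapsed = time.monotonic() - start
        phase = get_phase()
        # Matches the log shape: Phase="Pending", Elapsed: 2.03s, ...
        print(f'Phase="{phase}", Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed > timeout:
            raise TimeoutError(f"pod did not reach a terminal phase in {timeout}s")
        time.sleep(interval)
```

In the run above the subpath pod stayed Pending for ~15s, ran for ~18s, then reported Succeeded at 35.24s elapsed, which is exactly the sequence such a loop would print.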
Dec 27 12:19:49.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 12:19:49.201: INFO: namespace: e2e-tests-subpath-lslk5, resource: bindings, ignored listing per whitelist Dec 27 12:19:49.346: INFO: namespace e2e-tests-subpath-lslk5 deletion completed in 6.30870706s • [SLOW TEST:42.138 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 12:19:49.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-2d6538df-28a3-11ea-bad5-0242ac110005 STEP: Creating a pod to test consume secrets Dec 27 12:19:49.675: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2d7edf3b-28a3-11ea-bad5-0242ac110005" in namespace "e2e-tests-projected-jf49w" to be "success or 
failure" Dec 27 12:19:49.689: INFO: Pod "pod-projected-secrets-2d7edf3b-28a3-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.94807ms Dec 27 12:19:51.935: INFO: Pod "pod-projected-secrets-2d7edf3b-28a3-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.25943311s Dec 27 12:19:53.965: INFO: Pod "pod-projected-secrets-2d7edf3b-28a3-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.289825069s Dec 27 12:19:55.984: INFO: Pod "pod-projected-secrets-2d7edf3b-28a3-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.30931039s Dec 27 12:19:58.039: INFO: Pod "pod-projected-secrets-2d7edf3b-28a3-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.364059108s Dec 27 12:20:00.250: INFO: Pod "pod-projected-secrets-2d7edf3b-28a3-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.574364936s STEP: Saw pod success Dec 27 12:20:00.250: INFO: Pod "pod-projected-secrets-2d7edf3b-28a3-11ea-bad5-0242ac110005" satisfied condition "success or failure" Dec 27 12:20:00.259: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-2d7edf3b-28a3-11ea-bad5-0242ac110005 container projected-secret-volume-test: STEP: delete the pod Dec 27 12:20:00.648: INFO: Waiting for pod pod-projected-secrets-2d7edf3b-28a3-11ea-bad5-0242ac110005 to disappear Dec 27 12:20:00.783: INFO: Pod pod-projected-secrets-2d7edf3b-28a3-11ea-bad5-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 12:20:00.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-jf49w" for this suite. 
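The projected-secret test above creates a secret, then a pod that mounts it through a `projected` volume with an item mapping and an explicit item mode. A sketch of the manifest shape involved, expressed as a Python dict; the key, path, mount path, and mode values here are illustrative assumptions, not the test's actual data:

```python
def projected_secret_pod(secret_name, mode=0o400):
    """Build a pod manifest that consumes `secret_name` via a projected
    volume, mapping key 'data-1' to path 'new-path-data-1' with an
    explicit per-item mode (hypothetical names; the API shape follows
    the Kubernetes v1 projected-volume spec)."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-projected-secrets-example"},
        "spec": {
            "restartPolicy": "Never",
            "volumes": [{
                "name": "projected-secret-volume",
                "projected": {"sources": [{
                    "secret": {
                        "name": secret_name,
                        "items": [{"key": "data-1",
                                   "path": "new-path-data-1",
                                   "mode": mode}],
                    },
                }]},
            }],
            "containers": [{
                "name": "projected-secret-volume-test",
                "image": "busybox",
                "command": ["sh", "-c",
                            "cat /etc/projected-secret-volume/new-path-data-1"],
                "volumeMounts": [{"name": "projected-secret-volume",
                                  "mountPath": "/etc/projected-secret-volume",
                                  "readOnly": True}],
            }],
        },
    }
```

The test then reads the container's logs (as seen in the "Trying to get logs ... container projected-secret-volume-test" entry) to verify the mapped file's contents and permissions.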
Dec 27 12:20:06.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 12:20:07.016: INFO: namespace: e2e-tests-projected-jf49w, resource: bindings, ignored listing per whitelist Dec 27 12:20:07.068: INFO: namespace e2e-tests-projected-jf49w deletion completed in 6.273060476s • [SLOW TEST:17.721 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 12:20:07.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Dec 27 12:20:07.263: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 12:20:22.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "e2e-tests-init-container-48ctn" for this suite. Dec 27 12:20:28.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 12:20:28.923: INFO: namespace: e2e-tests-init-container-48ctn, resource: bindings, ignored listing per whitelist Dec 27 12:20:28.962: INFO: namespace e2e-tests-init-container-48ctn deletion completed in 6.228288298s • [SLOW TEST:21.894 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 12:20:28.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-x2xmc STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-x2xmc to expose endpoints map[] Dec 27 12:20:29.342: INFO: Get endpoints failed (5.749296ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Dec 27 
12:20:30.390: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-x2xmc exposes endpoints map[] (1.053945446s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-x2xmc STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-x2xmc to expose endpoints map[pod1:[100]] Dec 27 12:20:34.682: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.2616159s elapsed, will retry) Dec 27 12:20:40.505: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-x2xmc exposes endpoints map[pod1:[100]] (10.084494174s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-x2xmc STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-x2xmc to expose endpoints map[pod2:[101] pod1:[100]] Dec 27 12:20:44.874: INFO: Unexpected endpoints: found map[45cea811-28a3-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (4.334471417s elapsed, will retry) Dec 27 12:20:50.742: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-x2xmc exposes endpoints map[pod1:[100] pod2:[101]] (10.201934496s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-x2xmc STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-x2xmc to expose endpoints map[pod2:[101]] Dec 27 12:20:52.196: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-x2xmc exposes endpoints map[pod2:[101]] (1.438288335s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-x2xmc STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-x2xmc to expose endpoints map[] Dec 27 12:20:53.776: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-x2xmc exposes endpoints map[] (1.293079282s elapsed) [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 12:20:54.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-x2xmc" for this suite. Dec 27 12:21:18.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 12:21:18.605: INFO: namespace: e2e-tests-services-x2xmc, resource: bindings, ignored listing per whitelist Dec 27 12:21:18.794: INFO: namespace e2e-tests-services-x2xmc deletion completed in 24.373141118s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:49.832 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 12:21:18.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 12:21:19.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-7jjqc" for this suite. Dec 27 12:21:25.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 12:21:25.220: INFO: namespace: e2e-tests-services-7jjqc, resource: bindings, ignored listing per whitelist Dec 27 12:21:25.276: INFO: namespace e2e-tests-services-7jjqc deletion completed in 6.18493125s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.481 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 12:21:25.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 12:21:37.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-xqhk8" for this suite. Dec 27 12:21:44.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 12:21:44.342: INFO: namespace: e2e-tests-kubelet-test-xqhk8, resource: bindings, ignored listing per whitelist Dec 27 12:21:44.374: INFO: namespace e2e-tests-kubelet-test-xqhk8 deletion completed in 6.688795699s • [SLOW TEST:19.098 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 12:21:44.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-fjgbh/configmap-test-720cb30e-28a3-11ea-bad5-0242ac110005 STEP: Creating a pod to test consume configMaps Dec 27 12:21:44.866: INFO: Waiting up to 5m0s for pod "pod-configmaps-7211c1da-28a3-11ea-bad5-0242ac110005" in namespace "e2e-tests-configmap-fjgbh" to be "success or failure" Dec 27 12:21:44.878: INFO: Pod "pod-configmaps-7211c1da-28a3-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.785827ms Dec 27 12:21:47.014: INFO: Pod "pod-configmaps-7211c1da-28a3-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148144513s Dec 27 12:21:49.032: INFO: Pod "pod-configmaps-7211c1da-28a3-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.165917145s Dec 27 12:21:51.046: INFO: Pod "pod-configmaps-7211c1da-28a3-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.18019467s Dec 27 12:21:53.057: INFO: Pod "pod-configmaps-7211c1da-28a3-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.190567434s Dec 27 12:21:55.286: INFO: Pod "pod-configmaps-7211c1da-28a3-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.419689965s STEP: Saw pod success Dec 27 12:21:55.286: INFO: Pod "pod-configmaps-7211c1da-28a3-11ea-bad5-0242ac110005" satisfied condition "success or failure" Dec 27 12:21:55.296: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-7211c1da-28a3-11ea-bad5-0242ac110005 container env-test: STEP: delete the pod Dec 27 12:21:55.863: INFO: Waiting for pod pod-configmaps-7211c1da-28a3-11ea-bad5-0242ac110005 to disappear Dec 27 12:21:55.891: INFO: Pod pod-configmaps-7211c1da-28a3-11ea-bad5-0242ac110005 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 12:21:55.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-fjgbh" for this suite. Dec 27 12:22:01.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 12:22:02.143: INFO: namespace: e2e-tests-configmap-fjgbh, resource: bindings, ignored listing per whitelist Dec 27 12:22:02.153: INFO: namespace e2e-tests-configmap-fjgbh deletion completed in 6.246023533s • [SLOW TEST:17.778 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 12:22:02.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-7c979498-28a3-11ea-bad5-0242ac110005 STEP: Creating a pod to test consume secrets Dec 27 12:22:02.357: INFO: Waiting up to 5m0s for pod "pod-secrets-7c98708b-28a3-11ea-bad5-0242ac110005" in namespace "e2e-tests-secrets-bzt95" to be "success or failure" Dec 27 12:22:02.392: INFO: Pod "pod-secrets-7c98708b-28a3-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 35.206392ms Dec 27 12:22:04.659: INFO: Pod "pod-secrets-7c98708b-28a3-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.302342967s Dec 27 12:22:06.680: INFO: Pod "pod-secrets-7c98708b-28a3-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323516726s Dec 27 12:22:08.765: INFO: Pod "pod-secrets-7c98708b-28a3-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.408557699s Dec 27 12:22:10.791: INFO: Pod "pod-secrets-7c98708b-28a3-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.433902932s Dec 27 12:22:12.836: INFO: Pod "pod-secrets-7c98708b-28a3-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.479450689s STEP: Saw pod success Dec 27 12:22:12.836: INFO: Pod "pod-secrets-7c98708b-28a3-11ea-bad5-0242ac110005" satisfied condition "success or failure" Dec 27 12:22:12.867: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-7c98708b-28a3-11ea-bad5-0242ac110005 container secret-volume-test: STEP: delete the pod Dec 27 12:22:12.979: INFO: Waiting for pod pod-secrets-7c98708b-28a3-11ea-bad5-0242ac110005 to disappear Dec 27 12:22:12.986: INFO: Pod pod-secrets-7c98708b-28a3-11ea-bad5-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 12:22:12.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-bzt95" for this suite. Dec 27 12:22:19.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 12:22:19.118: INFO: namespace: e2e-tests-secrets-bzt95, resource: bindings, ignored listing per whitelist Dec 27 12:22:19.147: INFO: namespace e2e-tests-secrets-bzt95 deletion completed in 6.153649675s • [SLOW TEST:16.994 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 
12:22:19.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 12:22:29.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-vc852" for this suite. Dec 27 12:23:13.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 12:23:13.662: INFO: namespace: e2e-tests-kubelet-test-vc852, resource: bindings, ignored listing per whitelist Dec 27 12:23:13.845: INFO: namespace e2e-tests-kubelet-test-vc852 deletion completed in 44.360813484s • [SLOW TEST:54.696 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 
12:23:13.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Dec 27 12:23:14.285: INFO: Waiting up to 5m0s for pod "pod-a778027a-28a3-11ea-bad5-0242ac110005" in namespace "e2e-tests-emptydir-9gxss" to be "success or failure" Dec 27 12:23:14.298: INFO: Pod "pod-a778027a-28a3-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.682128ms Dec 27 12:23:16.313: INFO: Pod "pod-a778027a-28a3-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02864756s Dec 27 12:23:18.340: INFO: Pod "pod-a778027a-28a3-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054996779s Dec 27 12:23:20.355: INFO: Pod "pod-a778027a-28a3-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069824409s Dec 27 12:23:22.370: INFO: Pod "pod-a778027a-28a3-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08543327s Dec 27 12:23:24.402: INFO: Pod "pod-a778027a-28a3-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.11691308s STEP: Saw pod success Dec 27 12:23:24.402: INFO: Pod "pod-a778027a-28a3-11ea-bad5-0242ac110005" satisfied condition "success or failure" Dec 27 12:23:24.428: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-a778027a-28a3-11ea-bad5-0242ac110005 container test-container: STEP: delete the pod Dec 27 12:23:24.654: INFO: Waiting for pod pod-a778027a-28a3-11ea-bad5-0242ac110005 to disappear Dec 27 12:23:24.671: INFO: Pod pod-a778027a-28a3-11ea-bad5-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 12:23:24.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-9gxss" for this suite. Dec 27 12:23:30.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 12:23:30.901: INFO: namespace: e2e-tests-emptydir-9gxss, resource: bindings, ignored listing per whitelist Dec 27 12:23:30.941: INFO: namespace e2e-tests-emptydir-9gxss deletion completed in 6.255755417s • [SLOW TEST:17.095 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 12:23:30.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building 
a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Dec 27 12:23:31.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-4wc2l' Dec 27 12:23:33.395: INFO: stderr: "" Dec 27 12:23:33.395: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 Dec 27 12:23:33.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-4wc2l' Dec 27 12:23:38.988: INFO: stderr: "" Dec 27 12:23:38.988: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 27 12:23:38.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-4wc2l" for this suite. 
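The kubectl invocation logged above can be reconstructed directly from the `Running '/usr/local/bin/kubectl ...'` line. A small helper that rebuilds that argv (returning the list rather than executing it, so no cluster is assumed; the `--generator=run-pod/v1` flag is as logged and belongs to the kubectl version under test, v1.13):

```python
def kubectl_run_pod(name, image, namespace, kubeconfig="/root/.kube/config"):
    """Reproduce the logged `kubectl run` command as an argv list:
    a one-off pod (restart=Never) created via the run-pod/v1 generator."""
    return [
        "kubectl",
        f"--kubeconfig={kubeconfig}",
        "run", name,
        "--restart=Never",
        "--generator=run-pod/v1",
        f"--image={image}",
        f"--namespace={namespace}",
    ]
```

Passing the values from the log (`e2e-test-nginx-pod`, `docker.io/library/nginx:1.14-alpine`, `e2e-tests-kubectl-4wc2l`) yields the same command the test executed before verifying the pod and deleting it.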
Dec 27 12:23:45.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 27 12:23:45.205: INFO: namespace: e2e-tests-kubectl-4wc2l, resource: bindings, ignored listing per whitelist Dec 27 12:23:45.251: INFO: namespace e2e-tests-kubectl-4wc2l deletion completed in 6.162787574s • [SLOW TEST:14.310 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 27 12:23:45.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 27 12:23:45.469: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/:
alternatives.log alternatives.l... (200; 21.673552ms)
Dec 27 12:23:45.480: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 10.933411ms)
Dec 27 12:23:45.489: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 8.503022ms)
Dec 27 12:23:45.498: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 9.487089ms)
Dec 27 12:23:45.507: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 8.48277ms)
Dec 27 12:23:45.515: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 8.046215ms)
Dec 27 12:23:45.527: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 12.022702ms)
Dec 27 12:23:45.595: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 67.922452ms)
Dec 27 12:23:45.627: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 32.129492ms)
Dec 27 12:23:45.642: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 15.146615ms)
Dec 27 12:23:45.662: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 19.535303ms)
Dec 27 12:23:45.683: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 20.466754ms)
Dec 27 12:23:45.707: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 24.541786ms)
Dec 27 12:23:45.718: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 11.013873ms)
Dec 27 12:23:45.728: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 9.398939ms)
Dec 27 12:23:45.736: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 8.430016ms)
Dec 27 12:23:45.747: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 10.733431ms)
Dec 27 12:23:45.774: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 26.575601ms)
Dec 27 12:23:45.789: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 15.478405ms)
Dec 27 12:23:45.801: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 12.099686ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:23:45.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-8tx6k" for this suite.
Dec 27 12:23:51.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:23:52.014: INFO: namespace: e2e-tests-proxy-8tx6k, resource: bindings, ignored listing per whitelist
Dec 27 12:23:52.078: INFO: namespace e2e-tests-proxy-8tx6k deletion completed in 6.268512303s

• [SLOW TEST:6.827 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
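The twenty attempts above all hit the same node proxy subresource. A minimal sketch of the equivalent manual query (node name taken from the log; running the final `kubectl` command requires access to the same cluster and RBAC permission on `nodes/proxy`):

```shell
# Node name taken from the log above; substitute your own node.
NODE=hunter-server-hu5at5svl7ps
# Path of the "logs" proxy subresource that each of the 20 attempts fetched.
URL="/api/v1/nodes/${NODE}/proxy/logs/"
echo "$URL"
# Against a live cluster, fetch the kubelet's log-directory listing with:
#   kubectl get --raw "$URL"
```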
S
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:23:52.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:24:52.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-w6529" for this suite.
Dec 27 12:25:16.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:25:16.632: INFO: namespace: e2e-tests-container-probe-w6529, resource: bindings, ignored listing per whitelist
Dec 27 12:25:16.694: INFO: namespace e2e-tests-container-probe-w6529 deletion completed in 24.303784524s

• [SLOW TEST:84.615 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
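The probe test above exercises a key semantic: a failing readiness probe keeps the pod permanently NotReady but never restarts the container (only liveness probes trigger restarts). A minimal manifest that reproduces this, assuming an always-failing exec probe (pod name, image, and probe settings are illustrative, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: never-ready            # hypothetical name
spec:
  containers:
  - name: main
    image: busybox:1.29        # any long-running image works
    command: ["sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["false"]     # always fails, so Ready stays False
      initialDelaySeconds: 5
      periodSeconds: 5
```

With this pod, `kubectl get pod never-ready` should show READY 0/1 with RESTARTS 0 indefinitely, which is exactly what the test asserts over its observation window.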
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:25:16.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 27 12:25:16.944: INFO: Creating ReplicaSet my-hostname-basic-f097410c-28a3-11ea-bad5-0242ac110005
Dec 27 12:25:17.030: INFO: Pod name my-hostname-basic-f097410c-28a3-11ea-bad5-0242ac110005: Found 0 pods out of 1
Dec 27 12:25:22.045: INFO: Pod name my-hostname-basic-f097410c-28a3-11ea-bad5-0242ac110005: Found 1 pods out of 1
Dec 27 12:25:22.045: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-f097410c-28a3-11ea-bad5-0242ac110005" is running
Dec 27 12:25:28.069: INFO: Pod "my-hostname-basic-f097410c-28a3-11ea-bad5-0242ac110005-xt7s7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-27 12:25:17 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-27 12:25:17 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f097410c-28a3-11ea-bad5-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-27 12:25:17 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f097410c-28a3-11ea-bad5-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-27 12:25:17 +0000 UTC Reason: Message:}])
Dec 27 12:25:28.069: INFO: Trying to dial the pod
Dec 27 12:25:33.112: INFO: Controller my-hostname-basic-f097410c-28a3-11ea-bad5-0242ac110005: Got expected result from replica 1 [my-hostname-basic-f097410c-28a3-11ea-bad5-0242ac110005-xt7s7]: "my-hostname-basic-f097410c-28a3-11ea-bad5-0242ac110005-xt7s7", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:25:33.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-4nfqd" for this suite.
Dec 27 12:25:41.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:25:41.203: INFO: namespace: e2e-tests-replicaset-4nfqd, resource: bindings, ignored listing per whitelist
Dec 27 12:25:41.351: INFO: namespace e2e-tests-replicaset-4nfqd deletion completed in 8.231223365s

• [SLOW TEST:24.657 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
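The ReplicaSet test above creates one replica that serves its own hostname over HTTP, then dials each pod and checks the response matches the pod name. A sketch of such a ReplicaSet (the log's names carry generated UIDs; the image shown here is illustrative, not read from the log):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic      # log names append a generated suffix
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        # Illustrative: any server that replies with the pod's hostname works.
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376
```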
SSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:25:41.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-nsqjm
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet

Dec 27 12:25:42.264: INFO: Found 0 stateful pods, waiting for 3
Dec 27 12:25:52.278: INFO: Found 1 stateful pods, waiting for 3
Dec 27 12:26:02.282: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 27 12:26:02.282: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 27 12:26:02.282: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 27 12:26:12.276: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 27 12:26:12.276: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 27 12:26:12.276: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Dec 27 12:26:22.277: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 27 12:26:22.277: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 27 12:26:22.277: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 27 12:26:22.352: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Dec 27 12:26:32.453: INFO: Updating stateful set ss2
Dec 27 12:26:32.495: INFO: Waiting for Pod e2e-tests-statefulset-nsqjm/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 27 12:26:42.689: INFO: Waiting for Pod e2e-tests-statefulset-nsqjm/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Dec 27 12:26:53.094: INFO: Found 2 stateful pods, waiting for 3
Dec 27 12:27:03.105: INFO: Found 2 stateful pods, waiting for 3
Dec 27 12:27:13.148: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 27 12:27:13.148: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 27 12:27:13.148: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 27 12:27:23.113: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 27 12:27:23.113: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 27 12:27:23.113: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Dec 27 12:27:23.161: INFO: Updating stateful set ss2
Dec 27 12:27:23.243: INFO: Waiting for Pod e2e-tests-statefulset-nsqjm/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 27 12:27:33.342: INFO: Updating stateful set ss2
Dec 27 12:27:33.405: INFO: Waiting for StatefulSet e2e-tests-statefulset-nsqjm/ss2 to complete update
Dec 27 12:27:33.405: INFO: Waiting for Pod e2e-tests-statefulset-nsqjm/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 27 12:27:43.437: INFO: Waiting for StatefulSet e2e-tests-statefulset-nsqjm/ss2 to complete update
Dec 27 12:27:43.437: INFO: Waiting for Pod e2e-tests-statefulset-nsqjm/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 27 12:27:53.445: INFO: Waiting for StatefulSet e2e-tests-statefulset-nsqjm/ss2 to complete update
Dec 27 12:28:03.474: INFO: Waiting for StatefulSet e2e-tests-statefulset-nsqjm/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 27 12:28:13.457: INFO: Deleting all statefulset in ns e2e-tests-statefulset-nsqjm
Dec 27 12:28:13.463: INFO: Scaling statefulset ss2 to 0
Dec 27 12:28:43.650: INFO: Waiting for statefulset status.replicas updated to 0
Dec 27 12:28:43.660: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:28:43.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-nsqjm" for this suite.
Dec 27 12:28:51.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:28:51.906: INFO: namespace: e2e-tests-statefulset-nsqjm, resource: bindings, ignored listing per whitelist
Dec 27 12:28:51.952: INFO: namespace e2e-tests-statefulset-nsqjm deletion completed in 8.220930181s

• [SLOW TEST:190.601 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
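The canary and phased rollout above are driven by the StatefulSet `updateStrategy.rollingUpdate.partition` field: pods with an ordinal greater than or equal to the partition receive the new template revision, while lower ordinals keep the old one. A sketch of the relevant spec fragment, assuming the 3-replica `ss2` set from the log (the fragment elides the full pod template):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  replicas: 3
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2   # only ss2-2 gets the new template revision (the canary)
  # selector and template omitted
```

Setting `partition: 3` with 3 replicas matches the "partition is greater than the number of replicas" step, where no pod updates; lowering the partition in steps (2, then 1, then 0) produces the phased rolling update the test performs.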
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:28:51.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-70ed6ffa-28a4-11ea-bad5-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 27 12:28:52.276: INFO: Waiting up to 5m0s for pod "pod-configmaps-70eec009-28a4-11ea-bad5-0242ac110005" in namespace "e2e-tests-configmap-c7kvc" to be "success or failure"
Dec 27 12:28:52.366: INFO: Pod "pod-configmaps-70eec009-28a4-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 90.20939ms
Dec 27 12:28:54.419: INFO: Pod "pod-configmaps-70eec009-28a4-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142546236s
Dec 27 12:28:56.441: INFO: Pod "pod-configmaps-70eec009-28a4-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.165169993s
Dec 27 12:28:58.831: INFO: Pod "pod-configmaps-70eec009-28a4-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.554389489s
Dec 27 12:29:00.893: INFO: Pod "pod-configmaps-70eec009-28a4-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.616516366s
Dec 27 12:29:02.907: INFO: Pod "pod-configmaps-70eec009-28a4-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.631021347s
STEP: Saw pod success
Dec 27 12:29:02.907: INFO: Pod "pod-configmaps-70eec009-28a4-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 12:29:02.912: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-70eec009-28a4-11ea-bad5-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 27 12:29:03.987: INFO: Waiting for pod pod-configmaps-70eec009-28a4-11ea-bad5-0242ac110005 to disappear
Dec 27 12:29:04.003: INFO: Pod pod-configmaps-70eec009-28a4-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:29:04.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-c7kvc" for this suite.
Dec 27 12:29:10.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:29:10.093: INFO: namespace: e2e-tests-configmap-c7kvc, resource: bindings, ignored listing per whitelist
Dec 27 12:29:10.202: INFO: namespace e2e-tests-configmap-c7kvc deletion completed in 6.190979761s

• [SLOW TEST:18.249 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
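The ConfigMap test above mounts a ConfigMap as a volume, runs the container as a non-root UID, reads a key's file back, and expects the pod to terminate with success. A minimal sketch (the log's names carry generated UIDs; the image, UID, and key name here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps         # log names append a generated suffix
spec:
  securityContext:
    runAsUser: 1000            # the "as non-root" part of the test
  restartPolicy: Never         # pod should run once and succeed
  containers:
  - name: configmap-volume-test
    image: busybox:1.29        # illustrative; the suite uses its own test image
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume   # must already exist in the namespace
```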
SSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:29:10.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-f5h76 in namespace e2e-tests-proxy-v9c9l
I1227 12:29:10.476820       8 runners.go:184] Created replication controller with name: proxy-service-f5h76, namespace: e2e-tests-proxy-v9c9l, replica count: 1
I1227 12:29:11.527475       8 runners.go:184] proxy-service-f5h76 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1227 12:29:12.527803       8 runners.go:184] proxy-service-f5h76 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1227 12:29:13.528165       8 runners.go:184] proxy-service-f5h76 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1227 12:29:14.528560       8 runners.go:184] proxy-service-f5h76 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1227 12:29:15.528992       8 runners.go:184] proxy-service-f5h76 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1227 12:29:16.529345       8 runners.go:184] proxy-service-f5h76 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1227 12:29:17.529631       8 runners.go:184] proxy-service-f5h76 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1227 12:29:18.529902       8 runners.go:184] proxy-service-f5h76 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1227 12:29:19.530225       8 runners.go:184] proxy-service-f5h76 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1227 12:29:20.530587       8 runners.go:184] proxy-service-f5h76 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1227 12:29:21.530875       8 runners.go:184] proxy-service-f5h76 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1227 12:29:22.531300       8 runners.go:184] proxy-service-f5h76 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1227 12:29:23.531654       8 runners.go:184] proxy-service-f5h76 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1227 12:29:24.532013       8 runners.go:184] proxy-service-f5h76 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1227 12:29:25.532462       8 runners.go:184] proxy-service-f5h76 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1227 12:29:26.532841       8 runners.go:184] proxy-service-f5h76 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1227 12:29:27.533142       8 runners.go:184] proxy-service-f5h76 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1227 12:29:28.533523       8 runners.go:184] proxy-service-f5h76 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 27 12:29:28.563: INFO: setup took 18.221975485s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Dec 27 12:29:28.650: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-v9c9l/services/http:proxy-service-f5h76:portname2/proxy/: bar (200; 86.325003ms)
Dec 27 12:29:28.650: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-v9c9l/pods/proxy-service-f5h76-pwbpp:162/proxy/: bar (200; 86.17657ms)
Dec 27 12:29:28.650: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-v9c9l/pods/http:proxy-service-f5h76-pwbpp:160/proxy/: foo (200; 86.852473ms)
Dec 27 12:29:28.661: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-v9c9l/pods/http:proxy-service-f5h76-pwbpp:1080/proxy/: ... [response body and the remaining proxy attempts were lost in this capture]
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 27 12:29:43.653: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8f88ca52-28a4-11ea-bad5-0242ac110005" in namespace "e2e-tests-projected-v6p4q" to be "success or failure"
Dec 27 12:29:43.828: INFO: Pod "downwardapi-volume-8f88ca52-28a4-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 175.036596ms
Dec 27 12:29:45.852: INFO: Pod "downwardapi-volume-8f88ca52-28a4-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19920247s
Dec 27 12:29:47.888: INFO: Pod "downwardapi-volume-8f88ca52-28a4-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.23486201s
Dec 27 12:29:50.031: INFO: Pod "downwardapi-volume-8f88ca52-28a4-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.378285524s
Dec 27 12:29:52.048: INFO: Pod "downwardapi-volume-8f88ca52-28a4-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.394786261s
Dec 27 12:29:54.070: INFO: Pod "downwardapi-volume-8f88ca52-28a4-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.417282737s
STEP: Saw pod success
Dec 27 12:29:54.070: INFO: Pod "downwardapi-volume-8f88ca52-28a4-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 12:29:54.081: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8f88ca52-28a4-11ea-bad5-0242ac110005 container client-container: 
STEP: delete the pod
Dec 27 12:29:54.371: INFO: Waiting for pod downwardapi-volume-8f88ca52-28a4-11ea-bad5-0242ac110005 to disappear
Dec 27 12:29:54.385: INFO: Pod downwardapi-volume-8f88ca52-28a4-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:29:54.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-v6p4q" for this suite.
Dec 27 12:30:00.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:30:00.934: INFO: namespace: e2e-tests-projected-v6p4q, resource: bindings, ignored listing per whitelist
Dec 27 12:30:01.020: INFO: namespace e2e-tests-projected-v6p4q deletion completed in 6.493937667s

• [SLOW TEST:17.604 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
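The downward API test above exposes the container's own CPU request as a file through a projected volume, then reads it back from the pod logs. A sketch of the projected `downwardAPI` source with a `resourceFieldRef` (names, image, and the request value are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume     # log names append a generated suffix
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29        # illustrative
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m              # the value the test expects to read back
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m      # 250m / 1m, so the file contains "250"
```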
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:30:01.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 27 12:30:21.893: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 27 12:30:21.905: INFO: Pod pod-with-poststart-http-hook still exists
Dec 27 12:30:23.905: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 27 12:30:23.949: INFO: Pod pod-with-poststart-http-hook still exists
Dec 27 12:30:25.905: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 27 12:30:25.938: INFO: Pod pod-with-poststart-http-hook still exists
Dec 27 12:30:27.905: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 27 12:30:27.923: INFO: Pod pod-with-poststart-http-hook still exists
Dec 27 12:30:29.905: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 27 12:30:29.924: INFO: Pod pod-with-poststart-http-hook still exists
Dec 27 12:30:31.905: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 27 12:30:31.923: INFO: Pod pod-with-poststart-http-hook still exists
Dec 27 12:30:33.905: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 27 12:30:33.946: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:30:33.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-qf9ls" for this suite.
Dec 27 12:30:58.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:30:58.261: INFO: namespace: e2e-tests-container-lifecycle-hook-qf9ls, resource: bindings, ignored listing per whitelist
Dec 27 12:30:58.264: INFO: namespace e2e-tests-container-lifecycle-hook-qf9ls deletion completed in 24.302819735s

• [SLOW TEST:57.244 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
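The lifecycle test above first starts a separate handler pod, then creates a pod whose postStart hook issues an HTTP GET to that handler; the kubelet runs the hook immediately after the container starts, and the test verifies the handler received the request. A sketch of the hook pod (the `host`, `path`, and `port` values are assumptions for illustration, not read from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook   # name taken from the log
spec:
  containers:
  - name: main
    image: nginx:1.14-alpine           # illustrative
    lifecycle:
      postStart:
        httpGet:                       # kubelet GETs this right after start
          host: 10.32.0.2              # hypothetical: the handler pod's IP
          path: /echo?msg=poststart    # hypothetical path
          port: 8080
```

If the hook's GET fails, the kubelet kills the container and restarts it per the pod's restart policy, so a clean run like the one above also implies the handler was reachable.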
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:30:58.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-vpcvq
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 27 12:30:58.490: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 27 12:31:32.866: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-vpcvq PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 27 12:31:32.866: INFO: >>> kubeConfig: /root/.kube/config
Dec 27 12:31:33.352: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:31:33.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-vpcvq" for this suite.
Dec 27 12:31:57.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:31:57.536: INFO: namespace: e2e-tests-pod-network-test-vpcvq, resource: bindings, ignored listing per whitelist
Dec 27 12:31:57.594: INFO: namespace e2e-tests-pod-network-test-vpcvq deletion completed in 24.22548239s

• [SLOW TEST:59.330 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:31:57.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 27 12:31:59.184: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 27 12:31:59.215: INFO: Waiting for terminating namespaces to be deleted...
Dec 27 12:31:59.222: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Dec 27 12:31:59.245: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 27 12:31:59.245: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 27 12:31:59.245: INFO: 	Container coredns ready: true, restart count 0
Dec 27 12:31:59.245: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 27 12:31:59.245: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 27 12:31:59.245: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 27 12:31:59.245: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 27 12:31:59.245: INFO: 	Container weave ready: true, restart count 0
Dec 27 12:31:59.245: INFO: 	Container weave-npc ready: true, restart count 0
Dec 27 12:31:59.245: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 27 12:31:59.245: INFO: 	Container coredns ready: true, restart count 0
Dec 27 12:31:59.245: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 27 12:31:59.245: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Dec 27 12:31:59.343: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 27 12:31:59.343: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 27 12:31:59.343: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Dec 27 12:31:59.343: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Dec 27 12:31:59.343: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Dec 27 12:31:59.343: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Dec 27 12:31:59.343: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 27 12:31:59.343: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-e0707ade-28a4-11ea-bad5-0242ac110005.15e43a58186e261d], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-gxqsh/filler-pod-e0707ade-28a4-11ea-bad5-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-e0707ade-28a4-11ea-bad5-0242ac110005.15e43a592b10f07b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-e0707ade-28a4-11ea-bad5-0242ac110005.15e43a59c4809ca3], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-e0707ade-28a4-11ea-bad5-0242ac110005.15e43a59ee9479ce], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e43a5a6759ab3e], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:32:10.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-gxqsh" for this suite.
Dec 27 12:32:18.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:32:18.850: INFO: namespace: e2e-tests-sched-pred-gxqsh, resource: bindings, ignored listing per whitelist
Dec 27 12:32:18.864: INFO: namespace e2e-tests-sched-pred-gxqsh deletion completed in 8.266804549s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:21.269 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:32:18.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Dec 27 12:32:19.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hb9ps'
Dec 27 12:32:20.563: INFO: stderr: ""
Dec 27 12:32:20.564: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 27 12:32:21.576: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 12:32:21.576: INFO: Found 0 / 1
Dec 27 12:32:22.580: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 12:32:22.580: INFO: Found 0 / 1
Dec 27 12:32:23.603: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 12:32:23.603: INFO: Found 0 / 1
Dec 27 12:32:24.583: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 12:32:24.583: INFO: Found 0 / 1
Dec 27 12:32:26.294: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 12:32:26.294: INFO: Found 0 / 1
Dec 27 12:32:26.788: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 12:32:26.788: INFO: Found 0 / 1
Dec 27 12:32:27.584: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 12:32:27.584: INFO: Found 0 / 1
Dec 27 12:32:28.603: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 12:32:28.603: INFO: Found 0 / 1
Dec 27 12:32:29.580: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 12:32:29.580: INFO: Found 1 / 1
Dec 27 12:32:29.580: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Dec 27 12:32:29.587: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 12:32:29.587: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 27 12:32:29.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-2vf66 --namespace=e2e-tests-kubectl-hb9ps -p {"metadata":{"annotations":{"x":"y"}}}'
Dec 27 12:32:29.776: INFO: stderr: ""
Dec 27 12:32:29.776: INFO: stdout: "pod/redis-master-2vf66 patched\n"
STEP: checking annotations
Dec 27 12:32:29.785: INFO: Selector matched 1 pods for map[app:redis]
Dec 27 12:32:29.785: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:32:29.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hb9ps" for this suite.
Dec 27 12:32:55.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:32:55.895: INFO: namespace: e2e-tests-kubectl-hb9ps, resource: bindings, ignored listing per whitelist
Dec 27 12:32:56.026: INFO: namespace e2e-tests-kubectl-hb9ps deletion completed in 26.23733047s

• [SLOW TEST:37.162 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:32:56.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Dec 27 12:32:56.242: INFO: Waiting up to 5m0s for pod "var-expansion-0257b648-28a5-11ea-bad5-0242ac110005" in namespace "e2e-tests-var-expansion-6fqg7" to be "success or failure"
Dec 27 12:32:56.253: INFO: Pod "var-expansion-0257b648-28a5-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.353967ms
Dec 27 12:32:58.370: INFO: Pod "var-expansion-0257b648-28a5-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128032323s
Dec 27 12:33:00.397: INFO: Pod "var-expansion-0257b648-28a5-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155428245s
Dec 27 12:33:02.413: INFO: Pod "var-expansion-0257b648-28a5-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.171699889s
Dec 27 12:33:04.436: INFO: Pod "var-expansion-0257b648-28a5-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.194428354s
Dec 27 12:33:06.553: INFO: Pod "var-expansion-0257b648-28a5-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.311687597s
Dec 27 12:33:08.882: INFO: Pod "var-expansion-0257b648-28a5-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.640111126s
STEP: Saw pod success
Dec 27 12:33:08.882: INFO: Pod "var-expansion-0257b648-28a5-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 12:33:08.910: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-0257b648-28a5-11ea-bad5-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 27 12:33:08.994: INFO: Waiting for pod var-expansion-0257b648-28a5-11ea-bad5-0242ac110005 to disappear
Dec 27 12:33:09.000: INFO: Pod var-expansion-0257b648-28a5-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:33:09.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-6fqg7" for this suite.
Dec 27 12:33:15.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:33:15.115: INFO: namespace: e2e-tests-var-expansion-6fqg7, resource: bindings, ignored listing per whitelist
Dec 27 12:33:15.264: INFO: namespace e2e-tests-var-expansion-6fqg7 deletion completed in 6.25511033s

• [SLOW TEST:19.237 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:33:15.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Dec 27 12:33:15.487: INFO: Waiting up to 5m0s for pod "client-containers-0dd14ac1-28a5-11ea-bad5-0242ac110005" in namespace "e2e-tests-containers-qlfq4" to be "success or failure"
Dec 27 12:33:15.514: INFO: Pod "client-containers-0dd14ac1-28a5-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.717351ms
Dec 27 12:33:17.526: INFO: Pod "client-containers-0dd14ac1-28a5-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039200378s
Dec 27 12:33:19.569: INFO: Pod "client-containers-0dd14ac1-28a5-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081554126s
Dec 27 12:33:21.586: INFO: Pod "client-containers-0dd14ac1-28a5-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098552717s
Dec 27 12:33:23.611: INFO: Pod "client-containers-0dd14ac1-28a5-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.124307924s
Dec 27 12:33:25.692: INFO: Pod "client-containers-0dd14ac1-28a5-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.204842421s
STEP: Saw pod success
Dec 27 12:33:25.692: INFO: Pod "client-containers-0dd14ac1-28a5-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 12:33:25.699: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-0dd14ac1-28a5-11ea-bad5-0242ac110005 container test-container: 
STEP: delete the pod
Dec 27 12:33:26.010: INFO: Waiting for pod client-containers-0dd14ac1-28a5-11ea-bad5-0242ac110005 to disappear
Dec 27 12:33:26.038: INFO: Pod client-containers-0dd14ac1-28a5-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:33:26.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-qlfq4" for this suite.
Dec 27 12:33:32.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:33:32.206: INFO: namespace: e2e-tests-containers-qlfq4, resource: bindings, ignored listing per whitelist
Dec 27 12:33:32.260: INFO: namespace e2e-tests-containers-qlfq4 deletion completed in 6.212071199s

• [SLOW TEST:16.996 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:33:32.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-fv5j9
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-fv5j9
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-fv5j9
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-fv5j9
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-fv5j9
Dec 27 12:33:44.736: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-fv5j9, name: ss-0, uid: 1f01a76c-28a5-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Dec 27 12:33:45.662: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-fv5j9, name: ss-0, uid: 1f01a76c-28a5-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Dec 27 12:33:45.703: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-fv5j9, name: ss-0, uid: 1f01a76c-28a5-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Dec 27 12:33:45.720: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-fv5j9
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-fv5j9
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-fv5j9 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 27 12:33:56.273: INFO: Deleting all statefulset in ns e2e-tests-statefulset-fv5j9
Dec 27 12:33:56.285: INFO: Scaling statefulset ss to 0
Dec 27 12:34:16.380: INFO: Waiting for statefulset status.replicas updated to 0
Dec 27 12:34:16.390: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:34:16.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-fv5j9" for this suite.
Dec 27 12:34:24.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:34:24.803: INFO: namespace: e2e-tests-statefulset-fv5j9, resource: bindings, ignored listing per whitelist
Dec 27 12:34:24.903: INFO: namespace e2e-tests-statefulset-fv5j9 deletion completed in 8.331377974s

• [SLOW TEST:52.642 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:34:24.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Dec 27 12:34:25.208: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-fczk5,SelfLink:/api/v1/namespaces/e2e-tests-watch-fczk5/configmaps/e2e-watch-test-configmap-a,UID:3760e1f4-28a5-11ea-a994-fa163e34d433,ResourceVersion:16238908,Generation:0,CreationTimestamp:2019-12-27 12:34:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 27 12:34:25.208: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-fczk5,SelfLink:/api/v1/namespaces/e2e-tests-watch-fczk5/configmaps/e2e-watch-test-configmap-a,UID:3760e1f4-28a5-11ea-a994-fa163e34d433,ResourceVersion:16238908,Generation:0,CreationTimestamp:2019-12-27 12:34:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Dec 27 12:34:35.226: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-fczk5,SelfLink:/api/v1/namespaces/e2e-tests-watch-fczk5/configmaps/e2e-watch-test-configmap-a,UID:3760e1f4-28a5-11ea-a994-fa163e34d433,ResourceVersion:16238920,Generation:0,CreationTimestamp:2019-12-27 12:34:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 27 12:34:35.226: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-fczk5,SelfLink:/api/v1/namespaces/e2e-tests-watch-fczk5/configmaps/e2e-watch-test-configmap-a,UID:3760e1f4-28a5-11ea-a994-fa163e34d433,ResourceVersion:16238920,Generation:0,CreationTimestamp:2019-12-27 12:34:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Dec 27 12:34:45.257: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-fczk5,SelfLink:/api/v1/namespaces/e2e-tests-watch-fczk5/configmaps/e2e-watch-test-configmap-a,UID:3760e1f4-28a5-11ea-a994-fa163e34d433,ResourceVersion:16238933,Generation:0,CreationTimestamp:2019-12-27 12:34:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 27 12:34:45.257: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-fczk5,SelfLink:/api/v1/namespaces/e2e-tests-watch-fczk5/configmaps/e2e-watch-test-configmap-a,UID:3760e1f4-28a5-11ea-a994-fa163e34d433,ResourceVersion:16238933,Generation:0,CreationTimestamp:2019-12-27 12:34:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Dec 27 12:34:55.293: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-fczk5,SelfLink:/api/v1/namespaces/e2e-tests-watch-fczk5/configmaps/e2e-watch-test-configmap-a,UID:3760e1f4-28a5-11ea-a994-fa163e34d433,ResourceVersion:16238947,Generation:0,CreationTimestamp:2019-12-27 12:34:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 27 12:34:55.294: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-fczk5,SelfLink:/api/v1/namespaces/e2e-tests-watch-fczk5/configmaps/e2e-watch-test-configmap-a,UID:3760e1f4-28a5-11ea-a994-fa163e34d433,ResourceVersion:16238947,Generation:0,CreationTimestamp:2019-12-27 12:34:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Dec 27 12:35:05.311: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-fczk5,SelfLink:/api/v1/namespaces/e2e-tests-watch-fczk5/configmaps/e2e-watch-test-configmap-b,UID:4f476dcb-28a5-11ea-a994-fa163e34d433,ResourceVersion:16238959,Generation:0,CreationTimestamp:2019-12-27 12:35:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 27 12:35:05.311: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-fczk5,SelfLink:/api/v1/namespaces/e2e-tests-watch-fczk5/configmaps/e2e-watch-test-configmap-b,UID:4f476dcb-28a5-11ea-a994-fa163e34d433,ResourceVersion:16238959,Generation:0,CreationTimestamp:2019-12-27 12:35:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Dec 27 12:35:15.337: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-fczk5,SelfLink:/api/v1/namespaces/e2e-tests-watch-fczk5/configmaps/e2e-watch-test-configmap-b,UID:4f476dcb-28a5-11ea-a994-fa163e34d433,ResourceVersion:16238972,Generation:0,CreationTimestamp:2019-12-27 12:35:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 27 12:35:15.338: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-fczk5,SelfLink:/api/v1/namespaces/e2e-tests-watch-fczk5/configmaps/e2e-watch-test-configmap-b,UID:4f476dcb-28a5-11ea-a994-fa163e34d433,ResourceVersion:16238972,Generation:0,CreationTimestamp:2019-12-27 12:35:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:35:25.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-fczk5" for this suite.
Dec 27 12:35:31.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:35:31.483: INFO: namespace: e2e-tests-watch-fczk5, resource: bindings, ignored listing per whitelist
Dec 27 12:35:31.589: INFO: namespace e2e-tests-watch-fczk5 deletion completed in 6.199737319s

• [SLOW TEST:66.686 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
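The two watchers above each filter on the `watch-this-configmap` label, which is why every ADDED and DELETED event appears twice in the log. A minimal ConfigMap of the shape being watched (name and label copied from the log, everything else illustrative) might be:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-b
  labels:
    # Each watcher selects on this label, so both observe the same events.
    watch-this-configmap: multiple-watchers-B
data: {}
```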
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:35:31.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Dec 27 12:35:45.030: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:35:46.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-8dqtt" for this suite.
Dec 27 12:36:12.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:36:12.931: INFO: namespace: e2e-tests-replicaset-8dqtt, resource: bindings, ignored listing per whitelist
Dec 27 12:36:12.939: INFO: namespace e2e-tests-replicaset-8dqtt deletion completed in 26.686878271s

• [SLOW TEST:41.350 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
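The adoption/release behavior above is driven by label selection: a ReplicaSet adopts any orphan pod matching its selector, and releases a pod whose labels stop matching. A sketch of a matching ReplicaSet (image and replica count are assumptions, not the test's exact spec):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: pod-adoption-release
        image: docker.io/library/nginx:1.14-alpine
```

Changing the pod's `name` label releases it from the ReplicaSet, and the controller creates a replacement to satisfy `replicas: 1`.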
SSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:36:12.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-v67wl
Dec 27 12:36:23.238: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-v67wl
STEP: checking the pod's current state and verifying that restartCount is present
Dec 27 12:36:23.244: INFO: Initial restart count of pod liveness-http is 0
Dec 27 12:36:40.481: INFO: Restart count of pod e2e-tests-container-probe-v67wl/liveness-http is now 1 (17.237147678s elapsed)
Dec 27 12:36:58.724: INFO: Restart count of pod e2e-tests-container-probe-v67wl/liveness-http is now 2 (35.480447968s elapsed)
Dec 27 12:37:20.960: INFO: Restart count of pod e2e-tests-container-probe-v67wl/liveness-http is now 3 (57.716368911s elapsed)
Dec 27 12:37:39.196: INFO: Restart count of pod e2e-tests-container-probe-v67wl/liveness-http is now 4 (1m15.952160269s elapsed)
Dec 27 12:38:42.235: INFO: Restart count of pod e2e-tests-container-probe-v67wl/liveness-http is now 5 (2m18.990988855s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:38:42.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-v67wl" for this suite.
Dec 27 12:38:48.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:38:48.590: INFO: namespace: e2e-tests-container-probe-v67wl, resource: bindings, ignored listing per whitelist
Dec 27 12:38:48.816: INFO: namespace e2e-tests-container-probe-v67wl deletion completed in 6.494829694s

• [SLOW TEST:155.877 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
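The steadily increasing restart counts above come from a pod whose HTTP liveness probe starts failing, so the kubelet keeps restarting the container. A sketch of such a pod (path, port, and thresholds are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      failureThreshold: 1
```

Each probe failure triggers a container restart, so `restartCount` can only increase monotonically, which is exactly what the test asserts.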
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:38:48.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-d4954811-28a5-11ea-bad5-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 27 12:38:49.079: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d496b74e-28a5-11ea-bad5-0242ac110005" in namespace "e2e-tests-projected-zdhh8" to be "success or failure"
Dec 27 12:38:49.103: INFO: Pod "pod-projected-configmaps-d496b74e-28a5-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.276759ms
Dec 27 12:38:51.434: INFO: Pod "pod-projected-configmaps-d496b74e-28a5-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.355139032s
Dec 27 12:38:53.445: INFO: Pod "pod-projected-configmaps-d496b74e-28a5-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.366211061s
Dec 27 12:38:55.611: INFO: Pod "pod-projected-configmaps-d496b74e-28a5-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.532053495s
Dec 27 12:38:57.623: INFO: Pod "pod-projected-configmaps-d496b74e-28a5-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.543673942s
Dec 27 12:39:00.379: INFO: Pod "pod-projected-configmaps-d496b74e-28a5-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.299653974s
STEP: Saw pod success
Dec 27 12:39:00.379: INFO: Pod "pod-projected-configmaps-d496b74e-28a5-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 12:39:00.400: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-d496b74e-28a5-11ea-bad5-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 27 12:39:01.041: INFO: Waiting for pod pod-projected-configmaps-d496b74e-28a5-11ea-bad5-0242ac110005 to disappear
Dec 27 12:39:01.123: INFO: Pod pod-projected-configmaps-d496b74e-28a5-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:39:01.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zdhh8" for this suite.
Dec 27 12:39:07.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:39:07.334: INFO: namespace: e2e-tests-projected-zdhh8, resource: bindings, ignored listing per whitelist
Dec 27 12:39:07.339: INFO: namespace e2e-tests-projected-zdhh8 deletion completed in 6.200802849s

• [SLOW TEST:18.523 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
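The pod above mounts a ConfigMap through a `projected` volume source and exits `Succeeded` once the mounted data is read back. A hedged sketch of that shape (names, image, and paths are illustrative, not the test's actual spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume
```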
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:39:07.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 27 12:39:07.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-5sft8'
Dec 27 12:39:09.753: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 27 12:39:09.753: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Dec 27 12:39:09.807: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Dec 27 12:39:09.968: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Dec 27 12:39:10.201: INFO: scanned /root for discovery docs: 
Dec 27 12:39:10.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-5sft8'
Dec 27 12:39:37.467: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 27 12:39:37.468: INFO: stdout: "Created e2e-test-nginx-rc-4b078e33be41bdf29cecd65f7789b16a\nScaling up e2e-test-nginx-rc-4b078e33be41bdf29cecd65f7789b16a from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-4b078e33be41bdf29cecd65f7789b16a up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-4b078e33be41bdf29cecd65f7789b16a to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Dec 27 12:39:37.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-5sft8'
Dec 27 12:39:37.635: INFO: stderr: ""
Dec 27 12:39:37.635: INFO: stdout: "e2e-test-nginx-rc-4b078e33be41bdf29cecd65f7789b16a-8qppc e2e-test-nginx-rc-zhzlt "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Dec 27 12:39:42.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-5sft8'
Dec 27 12:39:42.872: INFO: stderr: ""
Dec 27 12:39:42.872: INFO: stdout: "e2e-test-nginx-rc-4b078e33be41bdf29cecd65f7789b16a-8qppc "
Dec 27 12:39:42.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-4b078e33be41bdf29cecd65f7789b16a-8qppc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5sft8'
Dec 27 12:39:42.983: INFO: stderr: ""
Dec 27 12:39:42.983: INFO: stdout: "true"
Dec 27 12:39:42.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-4b078e33be41bdf29cecd65f7789b16a-8qppc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5sft8'
Dec 27 12:39:43.090: INFO: stderr: ""
Dec 27 12:39:43.090: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Dec 27 12:39:43.090: INFO: e2e-test-nginx-rc-4b078e33be41bdf29cecd65f7789b16a-8qppc is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Dec 27 12:39:43.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-5sft8'
Dec 27 12:39:43.248: INFO: stderr: ""
Dec 27 12:39:43.248: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:39:43.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5sft8" for this suite.
Dec 27 12:40:07.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:40:07.456: INFO: namespace: e2e-tests-kubectl-5sft8, resource: bindings, ignored listing per whitelist
Dec 27 12:40:07.487: INFO: namespace e2e-tests-kubectl-5sft8 deletion completed in 24.230024707s

• [SLOW TEST:60.148 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
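The deprecated `kubectl run --generator=run/v1` invocation above creates a ReplicationController, roughly of this shape (field values inferred from the log, not an exact dump):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine
```

`kubectl rolling-update` then creates a copy of the RC, scales the new one up and the old one down a pod at a time, deletes the old controller, and renames the copy, which is the Created/Scaling/Renaming sequence visible in the captured stdout.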
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:40:07.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 27 12:40:07.702: INFO: Waiting up to 5m0s for pod "pod-038390d5-28a6-11ea-bad5-0242ac110005" in namespace "e2e-tests-emptydir-g2rrw" to be "success or failure"
Dec 27 12:40:07.708: INFO: Pod "pod-038390d5-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.907738ms
Dec 27 12:40:09.777: INFO: Pod "pod-038390d5-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074377338s
Dec 27 12:40:11.791: INFO: Pod "pod-038390d5-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088892149s
Dec 27 12:40:14.330: INFO: Pod "pod-038390d5-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.627752507s
Dec 27 12:40:16.349: INFO: Pod "pod-038390d5-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.646433799s
Dec 27 12:40:18.386: INFO: Pod "pod-038390d5-28a6-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.683822323s
STEP: Saw pod success
Dec 27 12:40:18.386: INFO: Pod "pod-038390d5-28a6-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 12:40:18.394: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-038390d5-28a6-11ea-bad5-0242ac110005 container test-container: 
STEP: delete the pod
Dec 27 12:40:19.079: INFO: Waiting for pod pod-038390d5-28a6-11ea-bad5-0242ac110005 to disappear
Dec 27 12:40:19.101: INFO: Pod pod-038390d5-28a6-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:40:19.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-g2rrw" for this suite.
Dec 27 12:40:25.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:40:25.557: INFO: namespace: e2e-tests-emptydir-g2rrw, resource: bindings, ignored listing per whitelist
Dec 27 12:40:25.565: INFO: namespace e2e-tests-emptydir-g2rrw deletion completed in 6.453783843s

• [SLOW TEST:18.078 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
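The "(root,0666,tmpfs)" case exercises an `emptyDir` volume backed by memory (`medium: Memory` means tmpfs) with a file created at mode 0666 as root. A minimal sketch under those assumptions (image and command are illustrative; the real test uses a dedicated mount-test image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # Memory backs the volume with tmpfs
```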
SSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:40:25.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 27 12:40:25.800: INFO: Waiting up to 5m0s for pod "downward-api-0e4befd9-28a6-11ea-bad5-0242ac110005" in namespace "e2e-tests-downward-api-kh6qx" to be "success or failure"
Dec 27 12:40:25.869: INFO: Pod "downward-api-0e4befd9-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 69.195018ms
Dec 27 12:40:27.921: INFO: Pod "downward-api-0e4befd9-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121587455s
Dec 27 12:40:29.938: INFO: Pod "downward-api-0e4befd9-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137855725s
Dec 27 12:40:32.223: INFO: Pod "downward-api-0e4befd9-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.423085348s
Dec 27 12:40:34.448: INFO: Pod "downward-api-0e4befd9-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.64792988s
Dec 27 12:40:36.740: INFO: Pod "downward-api-0e4befd9-28a6-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.940193291s
STEP: Saw pod success
Dec 27 12:40:36.740: INFO: Pod "downward-api-0e4befd9-28a6-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 12:40:36.747: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-0e4befd9-28a6-11ea-bad5-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 27 12:40:36.953: INFO: Waiting for pod downward-api-0e4befd9-28a6-11ea-bad5-0242ac110005 to disappear
Dec 27 12:40:36.977: INFO: Pod downward-api-0e4befd9-28a6-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:40:36.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-kh6qx" for this suite.
Dec 27 12:40:45.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:40:45.159: INFO: namespace: e2e-tests-downward-api-kh6qx, resource: bindings, ignored listing per whitelist
Dec 27 12:40:45.332: INFO: namespace e2e-tests-downward-api-kh6qx deletion completed in 8.33796044s

• [SLOW TEST:19.766 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
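The downward-API test injects pod metadata into the container's environment via `fieldRef`. A minimal pod of that kind (container name taken from the log; image, command, and env var name are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-pod
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          # The kubelet resolves this to the pod's metadata.uid at start time.
          fieldPath: metadata.uid
```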
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:40:45.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 27 12:40:45.567: INFO: Number of nodes with available pods: 0
Dec 27 12:40:45.567: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:40:47.275: INFO: Number of nodes with available pods: 0
Dec 27 12:40:47.275: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:40:47.763: INFO: Number of nodes with available pods: 0
Dec 27 12:40:47.763: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:40:48.699: INFO: Number of nodes with available pods: 0
Dec 27 12:40:48.699: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:40:49.587: INFO: Number of nodes with available pods: 0
Dec 27 12:40:49.587: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:40:50.603: INFO: Number of nodes with available pods: 0
Dec 27 12:40:50.603: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:40:52.465: INFO: Number of nodes with available pods: 0
Dec 27 12:40:52.465: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:40:53.050: INFO: Number of nodes with available pods: 0
Dec 27 12:40:53.050: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:40:53.731: INFO: Number of nodes with available pods: 0
Dec 27 12:40:53.731: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:40:54.675: INFO: Number of nodes with available pods: 0
Dec 27 12:40:54.675: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:40:55.710: INFO: Number of nodes with available pods: 0
Dec 27 12:40:55.710: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:40:56.665: INFO: Number of nodes with available pods: 1
Dec 27 12:40:56.665: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Dec 27 12:40:56.875: INFO: Number of nodes with available pods: 0
Dec 27 12:40:56.875: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:40:57.947: INFO: Number of nodes with available pods: 0
Dec 27 12:40:57.948: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:40:58.942: INFO: Number of nodes with available pods: 0
Dec 27 12:40:58.942: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:41:00.057: INFO: Number of nodes with available pods: 0
Dec 27 12:41:00.057: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:41:01.068: INFO: Number of nodes with available pods: 0
Dec 27 12:41:01.068: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:41:01.897: INFO: Number of nodes with available pods: 0
Dec 27 12:41:01.897: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:41:02.895: INFO: Number of nodes with available pods: 0
Dec 27 12:41:02.895: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:41:03.894: INFO: Number of nodes with available pods: 0
Dec 27 12:41:03.894: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:41:04.926: INFO: Number of nodes with available pods: 0
Dec 27 12:41:04.926: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:41:05.893: INFO: Number of nodes with available pods: 0
Dec 27 12:41:05.893: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:41:06.900: INFO: Number of nodes with available pods: 0
Dec 27 12:41:06.900: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:41:07.963: INFO: Number of nodes with available pods: 0
Dec 27 12:41:07.963: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:41:08.907: INFO: Number of nodes with available pods: 0
Dec 27 12:41:08.907: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:41:09.905: INFO: Number of nodes with available pods: 0
Dec 27 12:41:09.905: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:41:10.911: INFO: Number of nodes with available pods: 0
Dec 27 12:41:10.911: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:41:11.902: INFO: Number of nodes with available pods: 0
Dec 27 12:41:11.902: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:41:13.025: INFO: Number of nodes with available pods: 0
Dec 27 12:41:13.025: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:41:14.669: INFO: Number of nodes with available pods: 0
Dec 27 12:41:14.669: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:41:14.924: INFO: Number of nodes with available pods: 0
Dec 27 12:41:14.924: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:41:16.052: INFO: Number of nodes with available pods: 0
Dec 27 12:41:16.053: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:41:16.904: INFO: Number of nodes with available pods: 0
Dec 27 12:41:16.904: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:41:17.913: INFO: Number of nodes with available pods: 0
Dec 27 12:41:17.914: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:41:19.129: INFO: Number of nodes with available pods: 0
Dec 27 12:41:19.129: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:41:19.895: INFO: Number of nodes with available pods: 0
Dec 27 12:41:19.895: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:41:20.928: INFO: Number of nodes with available pods: 0
Dec 27 12:41:20.928: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:41:21.900: INFO: Number of nodes with available pods: 0
Dec 27 12:41:21.900: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 27 12:41:22.924: INFO: Number of nodes with available pods: 1
Dec 27 12:41:22.924: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-m2m77, will wait for the garbage collector to delete the pods
Dec 27 12:41:23.010: INFO: Deleting DaemonSet.extensions daemon-set took: 23.475914ms
Dec 27 12:41:23.111: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.294032ms
Dec 27 12:41:32.663: INFO: Number of nodes with available pods: 0
Dec 27 12:41:32.663: INFO: Number of running nodes: 0, number of available pods: 0
Dec 27 12:41:32.792: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-m2m77/daemonsets","resourceVersion":"16239714"},"items":null}

Dec 27 12:41:32.805: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-m2m77/pods","resourceVersion":"16239714"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:41:32.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-m2m77" for this suite.
Dec 27 12:41:40.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:41:41.186: INFO: namespace: e2e-tests-daemonsets-m2m77, resource: bindings, ignored listing per whitelist
Dec 27 12:41:41.221: INFO: namespace e2e-tests-daemonsets-m2m77 deletion completed in 8.376675157s

• [SLOW TEST:55.889 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
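The DaemonSet run-and-stop flow logged above (create, wait for one available pod per node, then delete and let the garbage collector reap the pods) can be sketched with a minimal manifest. The name and namespace mirror the log; the image and labels are illustrative assumptions, not taken from the suite:

```yaml
# Minimal DaemonSet comparable to the "daemon-set" object the test creates.
# Image and labels are illustrative; only the name/namespace come from the log.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: e2e-tests-daemonsets-m2m77
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: nginx:1.14-alpine   # placeholder image
```

Deleting the DaemonSet with cascading (non-orphan) propagation is what produces the "will wait for the garbage collector to delete the pods" step in the log.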
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:41:41.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 27 12:41:41.426: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3b610176-28a6-11ea-bad5-0242ac110005" in namespace "e2e-tests-projected-hpjpg" to be "success or failure"
Dec 27 12:41:41.430: INFO: Pod "downwardapi-volume-3b610176-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.567851ms
Dec 27 12:41:43.445: INFO: Pod "downwardapi-volume-3b610176-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018395971s
Dec 27 12:41:45.455: INFO: Pod "downwardapi-volume-3b610176-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02887351s
Dec 27 12:41:47.561: INFO: Pod "downwardapi-volume-3b610176-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.134318706s
Dec 27 12:41:49.574: INFO: Pod "downwardapi-volume-3b610176-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.147747807s
Dec 27 12:41:51.590: INFO: Pod "downwardapi-volume-3b610176-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.163189877s
Dec 27 12:41:53.653: INFO: Pod "downwardapi-volume-3b610176-28a6-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.226194066s
STEP: Saw pod success
Dec 27 12:41:53.653: INFO: Pod "downwardapi-volume-3b610176-28a6-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 12:41:53.662: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3b610176-28a6-11ea-bad5-0242ac110005 container client-container: 
STEP: delete the pod
Dec 27 12:41:53.982: INFO: Waiting for pod downwardapi-volume-3b610176-28a6-11ea-bad5-0242ac110005 to disappear
Dec 27 12:41:53.997: INFO: Pod downwardapi-volume-3b610176-28a6-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:41:53.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hpjpg" for this suite.
Dec 27 12:42:00.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:42:00.336: INFO: namespace: e2e-tests-projected-hpjpg, resource: bindings, ignored listing per whitelist
Dec 27 12:42:00.360: INFO: namespace e2e-tests-projected-hpjpg deletion completed in 6.352557606s

• [SLOW TEST:19.140 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
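The DefaultMode behavior this test exercises can be sketched with a pod that mounts a projected downward API volume. The pod name, image, and mode value are illustrative assumptions; the test's generated names differ:

```yaml
# Sketch of a projected downwardAPI volume with DefaultMode set.
# All names and the 0400 mode are illustrative, not taken from the log.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # placeholder image
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400              # the DefaultMode under test; value assumed
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```

The pod runs to completion ("success or failure" in the log) and the suite then reads the container's log to verify the file mode.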
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:42:00.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1227 12:42:46.092871       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 27 12:42:46.092: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:42:46.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-q44d7" for this suite.
Dec 27 12:43:07.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:43:07.423: INFO: namespace: e2e-tests-gc-q44d7, resource: bindings, ignored listing per whitelist
Dec 27 12:43:08.207: INFO: namespace e2e-tests-gc-q44d7 deletion completed in 22.107887766s

• [SLOW TEST:67.846 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
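The "delete options say so" step corresponds to deleting the ReplicationController with orphan propagation, after which the log's 30-second wait confirms the pods survive. A sketch, with the RC name and image as assumptions (the suite generates its own names):

```yaml
# Illustrative ReplicationController; the e2e suite uses generated names.
apiVersion: v1
kind: ReplicationController
metadata:
  name: simpletest-rc
spec:
  replicas: 2
  selector:
    name: simpletest
  template:
    metadata:
      labels:
        name: simpletest
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine   # placeholder image
---
# Body sent with the DELETE request so the pods are orphaned
# rather than garbage-collected:
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan
```

With `propagationPolicy: Orphan`, the garbage collector removes the owner reference from the pods instead of deleting them, which is exactly what the test then verifies.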
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:43:08.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 27 12:43:09.045: INFO: Waiting up to 5m0s for pod "downward-api-6f80ae09-28a6-11ea-bad5-0242ac110005" in namespace "e2e-tests-downward-api-6vcnh" to be "success or failure"
Dec 27 12:43:09.185: INFO: Pod "downward-api-6f80ae09-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 140.54298ms
Dec 27 12:43:11.198: INFO: Pod "downward-api-6f80ae09-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153641092s
Dec 27 12:43:13.209: INFO: Pod "downward-api-6f80ae09-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1642072s
Dec 27 12:43:15.517: INFO: Pod "downward-api-6f80ae09-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.472018349s
Dec 27 12:43:17.535: INFO: Pod "downward-api-6f80ae09-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.490480425s
Dec 27 12:43:19.549: INFO: Pod "downward-api-6f80ae09-28a6-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.504248289s
STEP: Saw pod success
Dec 27 12:43:19.549: INFO: Pod "downward-api-6f80ae09-28a6-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 12:43:19.554: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-6f80ae09-28a6-11ea-bad5-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 27 12:43:19.714: INFO: Waiting for pod downward-api-6f80ae09-28a6-11ea-bad5-0242ac110005 to disappear
Dec 27 12:43:19.744: INFO: Pod downward-api-6f80ae09-28a6-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:43:19.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6vcnh" for this suite.
Dec 27 12:43:25.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:43:26.022: INFO: namespace: e2e-tests-downward-api-6vcnh, resource: bindings, ignored listing per whitelist
Dec 27 12:43:26.130: INFO: namespace e2e-tests-downward-api-6vcnh deletion completed in 6.26541491s

• [SLOW TEST:17.923 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
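The env-var plumbing this test verifies maps resource requests and limits into the container environment via `resourceFieldRef`. A minimal sketch; the pod name, image, and resource values are assumptions:

```yaml
# Sketch of limits/requests exposed as env vars via the downward API.
# Names, image, and resource quantities are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                  # placeholder image
    command: ["sh", "-c", "env"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
    - name: CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
```

The suite's assertion amounts to reading the completed container's log and checking that each variable carries the expected quantity.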
SSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:43:26.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-xcr2
STEP: Creating a pod to test atomic-volume-subpath
Dec 27 12:43:26.449: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-xcr2" in namespace "e2e-tests-subpath-7jkrr" to be "success or failure"
Dec 27 12:43:26.485: INFO: Pod "pod-subpath-test-configmap-xcr2": Phase="Pending", Reason="", readiness=false. Elapsed: 36.220414ms
Dec 27 12:43:28.695: INFO: Pod "pod-subpath-test-configmap-xcr2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.245516183s
Dec 27 12:43:30.713: INFO: Pod "pod-subpath-test-configmap-xcr2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.263400131s
Dec 27 12:43:32.985: INFO: Pod "pod-subpath-test-configmap-xcr2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.535769379s
Dec 27 12:43:35.003: INFO: Pod "pod-subpath-test-configmap-xcr2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.553626964s
Dec 27 12:43:37.013: INFO: Pod "pod-subpath-test-configmap-xcr2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.563819604s
Dec 27 12:43:39.041: INFO: Pod "pod-subpath-test-configmap-xcr2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.59179629s
Dec 27 12:43:41.067: INFO: Pod "pod-subpath-test-configmap-xcr2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.617379375s
Dec 27 12:43:43.083: INFO: Pod "pod-subpath-test-configmap-xcr2": Phase="Running", Reason="", readiness=false. Elapsed: 16.633612881s
Dec 27 12:43:45.105: INFO: Pod "pod-subpath-test-configmap-xcr2": Phase="Running", Reason="", readiness=false. Elapsed: 18.656132453s
Dec 27 12:43:47.130: INFO: Pod "pod-subpath-test-configmap-xcr2": Phase="Running", Reason="", readiness=false. Elapsed: 20.680983591s
Dec 27 12:43:49.141: INFO: Pod "pod-subpath-test-configmap-xcr2": Phase="Running", Reason="", readiness=false. Elapsed: 22.69215739s
Dec 27 12:43:51.160: INFO: Pod "pod-subpath-test-configmap-xcr2": Phase="Running", Reason="", readiness=false. Elapsed: 24.710340475s
Dec 27 12:43:53.171: INFO: Pod "pod-subpath-test-configmap-xcr2": Phase="Running", Reason="", readiness=false. Elapsed: 26.721833129s
Dec 27 12:43:55.185: INFO: Pod "pod-subpath-test-configmap-xcr2": Phase="Running", Reason="", readiness=false. Elapsed: 28.736132051s
Dec 27 12:43:57.200: INFO: Pod "pod-subpath-test-configmap-xcr2": Phase="Running", Reason="", readiness=false. Elapsed: 30.750854107s
Dec 27 12:43:59.777: INFO: Pod "pod-subpath-test-configmap-xcr2": Phase="Running", Reason="", readiness=false. Elapsed: 33.327478491s
Dec 27 12:44:01.795: INFO: Pod "pod-subpath-test-configmap-xcr2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.345444734s
STEP: Saw pod success
Dec 27 12:44:01.795: INFO: Pod "pod-subpath-test-configmap-xcr2" satisfied condition "success or failure"
Dec 27 12:44:01.813: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-xcr2 container test-container-subpath-configmap-xcr2: 
STEP: delete the pod
Dec 27 12:44:02.038: INFO: Waiting for pod pod-subpath-test-configmap-xcr2 to disappear
Dec 27 12:44:02.085: INFO: Pod pod-subpath-test-configmap-xcr2 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-xcr2
Dec 27 12:44:02.085: INFO: Deleting pod "pod-subpath-test-configmap-xcr2" in namespace "e2e-tests-subpath-7jkrr"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:44:03.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-7jkrr" for this suite.
Dec 27 12:44:11.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:44:11.460: INFO: namespace: e2e-tests-subpath-7jkrr, resource: bindings, ignored listing per whitelist
Dec 27 12:44:11.521: INFO: namespace e2e-tests-subpath-7jkrr deletion completed in 8.385110708s

• [SLOW TEST:45.391 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
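The subpath-with-configmap case mounts a single ConfigMap key at a file path via `subPath` rather than mounting the whole volume. A sketch under assumed names (the test generates suffixed names like `pod-subpath-test-configmap-xcr2`):

```yaml
# Illustrative ConfigMap + pod mounting one key via subPath.
# All names, the key, and the image are assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-configmap
data:
  configmap-key: configmap-value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox                     # placeholder image
    command: ["sh", "-c", "cat /test-volume/configmap-file"]
    volumeMounts:
    - name: config
      mountPath: /test-volume/configmap-file
      subPath: configmap-key           # mounts a single key, not the whole volume
  volumes:
  - name: config
    configMap:
      name: subpath-configmap
```

The "atomic writer" aspect is that ConfigMap volumes are written via an atomically-swapped `..data` symlink, and the test checks that a subPath mount still resolves correctly across updates.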
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:44:11.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 27 12:44:11.741: INFO: Waiting up to 5m0s for pod "pod-94f79c96-28a6-11ea-bad5-0242ac110005" in namespace "e2e-tests-emptydir-mwvff" to be "success or failure"
Dec 27 12:44:11.746: INFO: Pod "pod-94f79c96-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.151386ms
Dec 27 12:44:13.898: INFO: Pod "pod-94f79c96-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156856138s
Dec 27 12:44:15.920: INFO: Pod "pod-94f79c96-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178693913s
Dec 27 12:44:18.179: INFO: Pod "pod-94f79c96-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.437715774s
Dec 27 12:44:20.190: INFO: Pod "pod-94f79c96-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.448920781s
Dec 27 12:44:22.211: INFO: Pod "pod-94f79c96-28a6-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.469794033s
STEP: Saw pod success
Dec 27 12:44:22.211: INFO: Pod "pod-94f79c96-28a6-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 12:44:22.224: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-94f79c96-28a6-11ea-bad5-0242ac110005 container test-container: 
STEP: delete the pod
Dec 27 12:44:22.948: INFO: Waiting for pod pod-94f79c96-28a6-11ea-bad5-0242ac110005 to disappear
Dec 27 12:44:22.963: INFO: Pod pod-94f79c96-28a6-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:44:22.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-mwvff" for this suite.
Dec 27 12:44:29.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:44:29.358: INFO: namespace: e2e-tests-emptydir-mwvff, resource: bindings, ignored listing per whitelist
Dec 27 12:44:29.496: INFO: namespace e2e-tests-emptydir-mwvff deletion completed in 6.519352463s

• [SLOW TEST:17.975 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
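The "(root,0777,default)" case means: running as root, expecting mode 0777 on the volume, on the default emptyDir medium (node disk rather than tmpfs). A sketch; the image and command are stand-ins for the suite's mounttest image:

```yaml
# Sketch of the emptyDir default-medium permissions check.
# busybox and the stat command stand in for the suite's mounttest image.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}      # empty medium field selects the "default" (disk-backed) medium
```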
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:44:29.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 27 12:44:29.835: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 27 12:44:29.850: INFO: Waiting for terminating namespaces to be deleted...
Dec 27 12:44:29.853: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Dec 27 12:44:29.870: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 27 12:44:29.870: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 27 12:44:29.870: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 27 12:44:29.870: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 27 12:44:29.870: INFO: 	Container weave ready: true, restart count 0
Dec 27 12:44:29.870: INFO: 	Container weave-npc ready: true, restart count 0
Dec 27 12:44:29.870: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 27 12:44:29.870: INFO: 	Container coredns ready: true, restart count 0
Dec 27 12:44:29.870: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 27 12:44:29.870: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 27 12:44:29.870: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 27 12:44:29.870: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 27 12:44:29.870: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e43b06ce41c386], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:44:30.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-jmln4" for this suite.
Dec 27 12:44:36.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:44:36.977: INFO: namespace: e2e-tests-sched-pred-jmln4, resource: bindings, ignored listing per whitelist
Dec 27 12:44:37.082: INFO: namespace e2e-tests-sched-pred-jmln4 deletion completed in 6.164591586s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.585 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
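The negative scheduling case above can be reproduced with a pod whose `nodeSelector` matches no node label; the FailedScheduling warning in the log ("0/1 nodes are available: 1 node(s) didn't match node selector.") is the expected outcome. The selector key/value and image are illustrative:

```yaml
# Pod that must stay Pending: no node carries this (assumed) label.
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    env: no-such-label-value       # illustrative; intentionally unmatched
  containers:
  - name: app
    image: nginx:1.14-alpine       # placeholder image
```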
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:44:37.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 27 12:44:37.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-ntssf'
Dec 27 12:44:37.384: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 27 12:44:37.384: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Dec 27 12:44:41.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-ntssf'
Dec 27 12:44:41.563: INFO: stderr: ""
Dec 27 12:44:41.563: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:44:41.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ntssf" for this suite.
Dec 27 12:44:47.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:44:47.650: INFO: namespace: e2e-tests-kubectl-ntssf, resource: bindings, ignored listing per whitelist
Dec 27 12:44:47.781: INFO: namespace e2e-tests-kubectl-ntssf deletion completed in 6.208353272s

• [SLOW TEST:10.699 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
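The stderr line above notes that `kubectl run --generator=deployment/v1beta1` is deprecated; the command is roughly equivalent to applying an explicit Deployment manifest. The `run:` labels follow the generator's convention and are an assumption here:

```yaml
# Approximate manifest equivalent of the deprecated
# `kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine`.
# apps/v1 is shown; the old generator actually emitted an extensions-group object.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```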
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:44:47.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 27 12:44:47.951: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Dec 27 12:44:48.059: INFO: Pod name sample-pod: Found 0 pods out of 1
Dec 27 12:44:53.488: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 27 12:44:57.635: INFO: Creating deployment "test-rolling-update-deployment"
Dec 27 12:44:57.665: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Dec 27 12:44:57.678: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Dec 27 12:44:59.935: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Dec 27 12:45:00.501: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713047497, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713047497, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713047498, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713047497, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 27 12:45:02.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713047497, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713047497, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713047498, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713047497, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 27 12:45:04.779: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713047497, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713047497, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713047498, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713047497, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 27 12:45:06.528: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713047497, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713047497, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713047498, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713047497, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 27 12:45:08.519: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713047497, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713047497, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713047498, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713047497, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 27 12:45:10.743: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 27 12:45:10.805: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-6ptc4,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6ptc4/deployments/test-rolling-update-deployment,UID:b05813cc-28a6-11ea-a994-fa163e34d433,ResourceVersion:16240367,Generation:1,CreationTimestamp:2019-12-27 12:44:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-27 12:44:57 +0000 UTC 2019-12-27 12:44:57 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-27 12:45:09 +0000 UTC 2019-12-27 12:44:57 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 27 12:45:10.817: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-6ptc4,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6ptc4/replicasets/test-rolling-update-deployment-75db98fb4c,UID:b05fc443-28a6-11ea-a994-fa163e34d433,ResourceVersion:16240356,Generation:1,CreationTimestamp:2019-12-27 12:44:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment b05813cc-28a6-11ea-a994-fa163e34d433 0xc002711937 0xc002711938}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 27 12:45:10.817: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Dec 27 12:45:10.818: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-6ptc4,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6ptc4/replicasets/test-rolling-update-controller,UID:aa917501-28a6-11ea-a994-fa163e34d433,ResourceVersion:16240366,Generation:2,CreationTimestamp:2019-12-27 12:44:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment b05813cc-28a6-11ea-a994-fa163e34d433 0xc0027117e7 0xc0027117e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 27 12:45:10.827: INFO: Pod "test-rolling-update-deployment-75db98fb4c-nsr2h" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-nsr2h,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-6ptc4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6ptc4/pods/test-rolling-update-deployment-75db98fb4c-nsr2h,UID:b063393b-28a6-11ea-a994-fa163e34d433,ResourceVersion:16240355,Generation:0,CreationTimestamp:2019-12-27 12:44:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c b05fc443-28a6-11ea-a994-fa163e34d433 0xc0022ce687 0xc0022ce688}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gxmts {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gxmts,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-gxmts true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022ce6f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022ce710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:44:57 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:45:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:45:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 12:44:57 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-27 12:44:57 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-27 12:45:07 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://9ea550adcc59c335255d1a088d87e7405d57edc129c78961e2574333bb840222}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:45:10.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-6ptc4" for this suite.
Dec 27 12:45:18.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:45:19.043: INFO: namespace: e2e-tests-deployment-6ptc4, resource: bindings, ignored listing per whitelist
Dec 27 12:45:19.074: INFO: namespace e2e-tests-deployment-6ptc4 deletion completed in 8.231624718s

• [SLOW TEST:31.292 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
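The rolling update exercised by the test above can be reproduced with a minimal Deployment manifest. This is a sketch reconstructed from the object dump in the log (names, labels, image, and the default 25% maxUnavailable/maxSurge values all appear there); it is not the test's exact fixture.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # default, as shown in the dump above
      maxSurge: 25%         # default
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

Because the selector matches the pre-existing `test-rolling-update-controller` ReplicaSet's pods, the Deployment adopts it as its old ReplicaSet and scales it to 0 while the new ReplicaSet progresses, which is exactly what the status dumps above show.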
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:45:19.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 27 12:45:20.228: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bd419c41-28a6-11ea-bad5-0242ac110005" in namespace "e2e-tests-projected-qbvzz" to be "success or failure"
Dec 27 12:45:20.453: INFO: Pod "downwardapi-volume-bd419c41-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 225.495764ms
Dec 27 12:45:22.795: INFO: Pod "downwardapi-volume-bd419c41-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.567285222s
Dec 27 12:45:24.808: INFO: Pod "downwardapi-volume-bd419c41-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.580295686s
Dec 27 12:45:27.473: INFO: Pod "downwardapi-volume-bd419c41-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.245034008s
Dec 27 12:45:29.500: INFO: Pod "downwardapi-volume-bd419c41-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.272282693s
Dec 27 12:45:31.518: INFO: Pod "downwardapi-volume-bd419c41-28a6-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.289970368s
STEP: Saw pod success
Dec 27 12:45:31.518: INFO: Pod "downwardapi-volume-bd419c41-28a6-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 12:45:31.526: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-bd419c41-28a6-11ea-bad5-0242ac110005 container client-container: 
STEP: delete the pod
Dec 27 12:45:32.066: INFO: Waiting for pod downwardapi-volume-bd419c41-28a6-11ea-bad5-0242ac110005 to disappear
Dec 27 12:45:32.077: INFO: Pod downwardapi-volume-bd419c41-28a6-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:45:32.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qbvzz" for this suite.
Dec 27 12:45:38.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:45:38.356: INFO: namespace: e2e-tests-projected-qbvzz, resource: bindings, ignored listing per whitelist
Dec 27 12:45:38.412: INFO: namespace e2e-tests-projected-qbvzz deletion completed in 6.327726061s

• [SLOW TEST:19.338 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
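A pod equivalent to the projected downward API test above would look roughly like the following sketch. The container and volume names mirror the log (`client-container`, basename `projected`); the image, command, and mount path are illustrative assumptions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29        # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi      # the value the volume should surface
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
```

The pod runs to `Succeeded` once the container has printed the projected file, matching the "success or failure" condition polled in the log.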
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:45:38.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 27 12:45:38.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-xvslm'
Dec 27 12:45:38.899: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 27 12:45:38.899: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Dec 27 12:45:38.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-xvslm'
Dec 27 12:45:39.206: INFO: stderr: ""
Dec 27 12:45:39.206: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:45:39.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xvslm" for this suite.
Dec 27 12:45:45.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:45:45.435: INFO: namespace: e2e-tests-kubectl-xvslm, resource: bindings, ignored listing per whitelist
Dec 27 12:45:45.489: INFO: namespace e2e-tests-kubectl-xvslm deletion completed in 6.24432494s

• [SLOW TEST:7.077 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
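As the stderr above notes, `kubectl run --generator=job/v1` is deprecated. A declarative Job equivalent to what the test created is sketched below (field values taken from the command line in the log):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine
      restartPolicy: OnFailure   # matches --restart=OnFailure
```

Applying this with `kubectl apply -f` and deleting it with `kubectl delete job e2e-test-nginx-job` reproduces the create/verify/delete cycle shown above without the deprecated generator.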
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:45:45.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Dec 27 12:45:45.653: INFO: Waiting up to 5m0s for pod "client-containers-ccf39483-28a6-11ea-bad5-0242ac110005" in namespace "e2e-tests-containers-gjcwq" to be "success or failure"
Dec 27 12:45:45.663: INFO: Pod "client-containers-ccf39483-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.915532ms
Dec 27 12:45:47.705: INFO: Pod "client-containers-ccf39483-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051773956s
Dec 27 12:45:49.723: INFO: Pod "client-containers-ccf39483-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06939029s
Dec 27 12:45:51.836: INFO: Pod "client-containers-ccf39483-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.182637377s
Dec 27 12:45:53.850: INFO: Pod "client-containers-ccf39483-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.196926913s
Dec 27 12:45:56.084: INFO: Pod "client-containers-ccf39483-28a6-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.430279392s
STEP: Saw pod success
Dec 27 12:45:56.084: INFO: Pod "client-containers-ccf39483-28a6-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 12:45:56.095: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-ccf39483-28a6-11ea-bad5-0242ac110005 container test-container: 
STEP: delete the pod
Dec 27 12:45:56.508: INFO: Waiting for pod client-containers-ccf39483-28a6-11ea-bad5-0242ac110005 to disappear
Dec 27 12:45:56.525: INFO: Pod client-containers-ccf39483-28a6-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:45:56.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-gjcwq" for this suite.
Dec 27 12:46:02.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:46:02.721: INFO: namespace: e2e-tests-containers-gjcwq, resource: bindings, ignored listing per whitelist
Dec 27 12:46:02.787: INFO: namespace e2e-tests-containers-gjcwq deletion completed in 6.249665177s

• [SLOW TEST:17.298 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
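Overriding an image's default command (its Docker ENTRYPOINT), as this test verifies, comes down to setting `command` on the container spec. A minimal sketch, with an assumed image and output; the pod name mirrors the log's basename:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # assumed image
    # 'command' replaces the image ENTRYPOINT; 'args' would replace CMD
    command: ["echo", "entrypoint overridden"]
```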
S
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:46:02.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-d75e9ddd-28a6-11ea-bad5-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 27 12:46:03.276: INFO: Waiting up to 5m0s for pod "pod-secrets-d7602f9b-28a6-11ea-bad5-0242ac110005" in namespace "e2e-tests-secrets-fhgxl" to be "success or failure"
Dec 27 12:46:03.299: INFO: Pod "pod-secrets-d7602f9b-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.640057ms
Dec 27 12:46:05.387: INFO: Pod "pod-secrets-d7602f9b-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110309135s
Dec 27 12:46:07.398: INFO: Pod "pod-secrets-d7602f9b-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121680828s
Dec 27 12:46:09.412: INFO: Pod "pod-secrets-d7602f9b-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135278414s
Dec 27 12:46:11.429: INFO: Pod "pod-secrets-d7602f9b-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.152635699s
Dec 27 12:46:13.452: INFO: Pod "pod-secrets-d7602f9b-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.175082198s
Dec 27 12:46:15.472: INFO: Pod "pod-secrets-d7602f9b-28a6-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.195230738s
STEP: Saw pod success
Dec 27 12:46:15.472: INFO: Pod "pod-secrets-d7602f9b-28a6-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 12:46:15.487: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-d7602f9b-28a6-11ea-bad5-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 27 12:46:15.895: INFO: Waiting for pod pod-secrets-d7602f9b-28a6-11ea-bad5-0242ac110005 to disappear
Dec 27 12:46:15.921: INFO: Pod pod-secrets-d7602f9b-28a6-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:46:15.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-fhgxl" for this suite.
Dec 27 12:46:22.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:46:22.153: INFO: namespace: e2e-tests-secrets-fhgxl, resource: bindings, ignored listing per whitelist
Dec 27 12:46:22.243: INFO: namespace e2e-tests-secrets-fhgxl deletion completed in 6.310910075s

• [SLOW TEST:19.455 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
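Consuming one secret in multiple volumes of the same pod, as tested above, means declaring two volumes that reference the same `secretName` and mounting each at a different path. A sketch under assumed names and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["sh", "-c", "cat /etc/secret-volume-1/data /etc/secret-volume-2/data"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test   # same secret backs both volumes
  - name: secret-volume-2
    secret:
      secretName: secret-test
```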
S
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:46:22.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 27 12:46:22.701: INFO: Waiting up to 5m0s for pod "downward-api-e3065671-28a6-11ea-bad5-0242ac110005" in namespace "e2e-tests-downward-api-8wbg2" to be "success or failure"
Dec 27 12:46:22.900: INFO: Pod "downward-api-e3065671-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 199.18186ms
Dec 27 12:46:25.687: INFO: Pod "downward-api-e3065671-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.986128996s
Dec 27 12:46:27.706: INFO: Pod "downward-api-e3065671-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.004695175s
Dec 27 12:46:29.733: INFO: Pod "downward-api-e3065671-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.032442446s
Dec 27 12:46:31.750: INFO: Pod "downward-api-e3065671-28a6-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.048920885s
Dec 27 12:46:33.767: INFO: Pod "downward-api-e3065671-28a6-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.066003672s
STEP: Saw pod success
Dec 27 12:46:33.767: INFO: Pod "downward-api-e3065671-28a6-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 12:46:33.788: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-e3065671-28a6-11ea-bad5-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 27 12:46:34.155: INFO: Waiting for pod downward-api-e3065671-28a6-11ea-bad5-0242ac110005 to disappear
Dec 27 12:46:34.193: INFO: Pod downward-api-e3065671-28a6-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:46:34.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-8wbg2" for this suite.
Dec 27 12:46:40.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:46:40.422: INFO: namespace: e2e-tests-downward-api-8wbg2, resource: bindings, ignored listing per whitelist
Dec 27 12:46:40.796: INFO: namespace e2e-tests-downward-api-8wbg2 deletion completed in 6.5789988s

• [SLOW TEST:18.553 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:46:40.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Dec 27 12:46:40.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-qt6qt run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Dec 27 12:46:51.569: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Dec 27 12:46:51.569: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:46:53.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qt6qt" for this suite.
Dec 27 12:47:00.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:47:00.638: INFO: namespace: e2e-tests-kubectl-qt6qt, resource: bindings, ignored listing per whitelist
Dec 27 12:47:00.759: INFO: namespace e2e-tests-kubectl-qt6qt deletion completed in 6.654187733s

• [SLOW TEST:19.962 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:47:00.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 27 12:47:00.949: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 27 12:47:01.062: INFO: Waiting for terminating namespaces to be deleted...
Dec 27 12:47:01.080: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Dec 27 12:47:01.099: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 27 12:47:01.099: INFO: 	Container coredns ready: true, restart count 0
Dec 27 12:47:01.099: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 27 12:47:01.099: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 27 12:47:01.099: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 27 12:47:01.099: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 27 12:47:01.099: INFO: 	Container coredns ready: true, restart count 0
Dec 27 12:47:01.099: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 27 12:47:01.099: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 27 12:47:01.099: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 27 12:47:01.099: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 27 12:47:01.099: INFO: 	Container weave ready: true, restart count 0
Dec 27 12:47:01.099: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-fff9fa4d-28a6-11ea-bad5-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-fff9fa4d-28a6-11ea-bad5-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-fff9fa4d-28a6-11ea-bad5-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:47:23.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-b7fdz" for this suite.
Dec 27 12:47:45.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:47:45.681: INFO: namespace: e2e-tests-sched-pred-b7fdz, resource: bindings, ignored listing per whitelist
Dec 27 12:47:45.715: INFO: namespace e2e-tests-sched-pred-b7fdz deletion completed in 22.241112922s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:44.957 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:47:45.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-b79zq
Dec 27 12:47:54.155: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-b79zq
STEP: checking the pod's current state and verifying that restartCount is present
Dec 27 12:47:54.163: INFO: Initial restart count of pod liveness-exec is 0
Dec 27 12:48:53.804: INFO: Restart count of pod e2e-tests-container-probe-b79zq/liveness-exec is now 1 (59.641219917s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:48:53.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-b79zq" for this suite.
Dec 27 12:49:01.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:49:02.073: INFO: namespace: e2e-tests-container-probe-b79zq, resource: bindings, ignored listing per whitelist
Dec 27 12:49:02.104: INFO: namespace e2e-tests-container-probe-b79zq deletion completed in 8.196916572s

• [SLOW TEST:76.388 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:49:02.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 27 12:49:02.328: INFO: Waiting up to 5m0s for pod "pod-422dc2f0-28a7-11ea-bad5-0242ac110005" in namespace "e2e-tests-emptydir-27fk5" to be "success or failure"
Dec 27 12:49:02.355: INFO: Pod "pod-422dc2f0-28a7-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.928398ms
Dec 27 12:49:04.437: INFO: Pod "pod-422dc2f0-28a7-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108295534s
Dec 27 12:49:06.450: INFO: Pod "pod-422dc2f0-28a7-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121299824s
Dec 27 12:49:08.519: INFO: Pod "pod-422dc2f0-28a7-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.190554463s
Dec 27 12:49:10.532: INFO: Pod "pod-422dc2f0-28a7-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.203309463s
Dec 27 12:49:12.598: INFO: Pod "pod-422dc2f0-28a7-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.26923266s
STEP: Saw pod success
Dec 27 12:49:12.598: INFO: Pod "pod-422dc2f0-28a7-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 12:49:12.643: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-422dc2f0-28a7-11ea-bad5-0242ac110005 container test-container: 
STEP: delete the pod
Dec 27 12:49:13.054: INFO: Waiting for pod pod-422dc2f0-28a7-11ea-bad5-0242ac110005 to disappear
Dec 27 12:49:13.095: INFO: Pod pod-422dc2f0-28a7-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:49:13.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-27fk5" for this suite.
Dec 27 12:49:19.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:49:19.318: INFO: namespace: e2e-tests-emptydir-27fk5, resource: bindings, ignored listing per whitelist
Dec 27 12:49:19.354: INFO: namespace e2e-tests-emptydir-27fk5 deletion completed in 6.246985374s

• [SLOW TEST:17.250 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:49:19.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-jhhq8
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 27 12:49:19.558: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 27 12:49:51.757: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-jhhq8 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 27 12:49:51.757: INFO: >>> kubeConfig: /root/.kube/config
Dec 27 12:49:52.388: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:49:52.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-jhhq8" for this suite.
Dec 27 12:50:16.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:50:16.555: INFO: namespace: e2e-tests-pod-network-test-jhhq8, resource: bindings, ignored listing per whitelist
Dec 27 12:50:17.144: INFO: namespace e2e-tests-pod-network-test-jhhq8 deletion completed in 24.733258833s

• [SLOW TEST:57.790 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:50:17.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-6ee95cd5-28a7-11ea-bad5-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 27 12:50:17.505: INFO: Waiting up to 5m0s for pod "pod-secrets-6efb9dbb-28a7-11ea-bad5-0242ac110005" in namespace "e2e-tests-secrets-ftj8b" to be "success or failure"
Dec 27 12:50:17.517: INFO: Pod "pod-secrets-6efb9dbb-28a7-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.796429ms
Dec 27 12:50:19.539: INFO: Pod "pod-secrets-6efb9dbb-28a7-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03477389s
Dec 27 12:50:21.558: INFO: Pod "pod-secrets-6efb9dbb-28a7-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053346057s
Dec 27 12:50:23.816: INFO: Pod "pod-secrets-6efb9dbb-28a7-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.311598193s
Dec 27 12:50:25.832: INFO: Pod "pod-secrets-6efb9dbb-28a7-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.327064653s
Dec 27 12:50:28.432: INFO: Pod "pod-secrets-6efb9dbb-28a7-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.927182747s
STEP: Saw pod success
Dec 27 12:50:28.432: INFO: Pod "pod-secrets-6efb9dbb-28a7-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 12:50:28.488: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-6efb9dbb-28a7-11ea-bad5-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 27 12:50:28.643: INFO: Waiting for pod pod-secrets-6efb9dbb-28a7-11ea-bad5-0242ac110005 to disappear
Dec 27 12:50:28.656: INFO: Pod pod-secrets-6efb9dbb-28a7-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:50:28.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-ftj8b" for this suite.
Dec 27 12:50:34.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:50:34.872: INFO: namespace: e2e-tests-secrets-ftj8b, resource: bindings, ignored listing per whitelist
Dec 27 12:50:34.931: INFO: namespace e2e-tests-secrets-ftj8b deletion completed in 6.266511737s

• [SLOW TEST:17.787 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:50:34.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-dw7z
STEP: Creating a pod to test atomic-volume-subpath
Dec 27 12:50:35.285: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-dw7z" in namespace "e2e-tests-subpath-9x4tk" to be "success or failure"
Dec 27 12:50:35.293: INFO: Pod "pod-subpath-test-secret-dw7z": Phase="Pending", Reason="", readiness=false. Elapsed: 7.834688ms
Dec 27 12:50:37.305: INFO: Pod "pod-subpath-test-secret-dw7z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02038852s
Dec 27 12:50:39.328: INFO: Pod "pod-subpath-test-secret-dw7z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042836271s
Dec 27 12:50:41.450: INFO: Pod "pod-subpath-test-secret-dw7z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.165058517s
Dec 27 12:50:43.772: INFO: Pod "pod-subpath-test-secret-dw7z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.487555919s
Dec 27 12:50:45.792: INFO: Pod "pod-subpath-test-secret-dw7z": Phase="Pending", Reason="", readiness=false. Elapsed: 10.507290905s
Dec 27 12:50:48.195: INFO: Pod "pod-subpath-test-secret-dw7z": Phase="Pending", Reason="", readiness=false. Elapsed: 12.910460785s
Dec 27 12:50:50.218: INFO: Pod "pod-subpath-test-secret-dw7z": Phase="Pending", Reason="", readiness=false. Elapsed: 14.932837568s
Dec 27 12:50:52.230: INFO: Pod "pod-subpath-test-secret-dw7z": Phase="Running", Reason="", readiness=false. Elapsed: 16.94498754s
Dec 27 12:50:54.248: INFO: Pod "pod-subpath-test-secret-dw7z": Phase="Running", Reason="", readiness=false. Elapsed: 18.962772713s
Dec 27 12:50:56.269: INFO: Pod "pod-subpath-test-secret-dw7z": Phase="Running", Reason="", readiness=false. Elapsed: 20.984062858s
Dec 27 12:50:58.283: INFO: Pod "pod-subpath-test-secret-dw7z": Phase="Running", Reason="", readiness=false. Elapsed: 22.998434313s
Dec 27 12:51:00.301: INFO: Pod "pod-subpath-test-secret-dw7z": Phase="Running", Reason="", readiness=false. Elapsed: 25.01616249s
Dec 27 12:51:02.320: INFO: Pod "pod-subpath-test-secret-dw7z": Phase="Running", Reason="", readiness=false. Elapsed: 27.034782139s
Dec 27 12:51:04.335: INFO: Pod "pod-subpath-test-secret-dw7z": Phase="Running", Reason="", readiness=false. Elapsed: 29.050231162s
Dec 27 12:51:06.373: INFO: Pod "pod-subpath-test-secret-dw7z": Phase="Running", Reason="", readiness=false. Elapsed: 31.088127327s
Dec 27 12:51:08.383: INFO: Pod "pod-subpath-test-secret-dw7z": Phase="Running", Reason="", readiness=false. Elapsed: 33.098153531s
Dec 27 12:51:10.781: INFO: Pod "pod-subpath-test-secret-dw7z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.496163826s
STEP: Saw pod success
Dec 27 12:51:10.781: INFO: Pod "pod-subpath-test-secret-dw7z" satisfied condition "success or failure"
Dec 27 12:51:11.054: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-dw7z container test-container-subpath-secret-dw7z: 
STEP: delete the pod
Dec 27 12:51:11.527: INFO: Waiting for pod pod-subpath-test-secret-dw7z to disappear
Dec 27 12:51:11.552: INFO: Pod pod-subpath-test-secret-dw7z no longer exists
STEP: Deleting pod pod-subpath-test-secret-dw7z
Dec 27 12:51:11.552: INFO: Deleting pod "pod-subpath-test-secret-dw7z" in namespace "e2e-tests-subpath-9x4tk"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:51:11.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-9x4tk" for this suite.
Dec 27 12:51:17.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:51:17.910: INFO: namespace: e2e-tests-subpath-9x4tk, resource: bindings, ignored listing per whitelist
Dec 27 12:51:17.931: INFO: namespace e2e-tests-subpath-9x4tk deletion completed in 6.363378163s

• [SLOW TEST:43.000 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:51:17.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 27 12:51:18.138: INFO: Waiting up to 5m0s for pod "downwardapi-volume-932042ca-28a7-11ea-bad5-0242ac110005" in namespace "e2e-tests-downward-api-9vzxx" to be "success or failure"
Dec 27 12:51:18.154: INFO: Pod "downwardapi-volume-932042ca-28a7-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.183602ms
Dec 27 12:51:20.218: INFO: Pod "downwardapi-volume-932042ca-28a7-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079757495s
Dec 27 12:51:22.248: INFO: Pod "downwardapi-volume-932042ca-28a7-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110553112s
Dec 27 12:51:24.282: INFO: Pod "downwardapi-volume-932042ca-28a7-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.14436394s
Dec 27 12:51:26.294: INFO: Pod "downwardapi-volume-932042ca-28a7-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.156220524s
Dec 27 12:51:28.309: INFO: Pod "downwardapi-volume-932042ca-28a7-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.171094099s
STEP: Saw pod success
Dec 27 12:51:28.309: INFO: Pod "downwardapi-volume-932042ca-28a7-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 12:51:28.316: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-932042ca-28a7-11ea-bad5-0242ac110005 container client-container: 
STEP: delete the pod
Dec 27 12:51:28.458: INFO: Waiting for pod downwardapi-volume-932042ca-28a7-11ea-bad5-0242ac110005 to disappear
Dec 27 12:51:29.263: INFO: Pod downwardapi-volume-932042ca-28a7-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:51:29.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9vzxx" for this suite.
Dec 27 12:51:35.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:51:35.697: INFO: namespace: e2e-tests-downward-api-9vzxx, resource: bindings, ignored listing per whitelist
Dec 27 12:51:35.831: INFO: namespace e2e-tests-downward-api-9vzxx deletion completed in 6.543324001s

• [SLOW TEST:17.899 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:51:35.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Dec 27 12:51:37.339: INFO: Pod name wrapped-volume-race-9e840302-28a7-11ea-bad5-0242ac110005: Found 0 pods out of 5
Dec 27 12:51:42.398: INFO: Pod name wrapped-volume-race-9e840302-28a7-11ea-bad5-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-9e840302-28a7-11ea-bad5-0242ac110005 in namespace e2e-tests-emptydir-wrapper-6g2fh, will wait for the garbage collector to delete the pods
Dec 27 12:53:24.659: INFO: Deleting ReplicationController wrapped-volume-race-9e840302-28a7-11ea-bad5-0242ac110005 took: 73.746647ms
Dec 27 12:53:25.060: INFO: Terminating ReplicationController wrapped-volume-race-9e840302-28a7-11ea-bad5-0242ac110005 pods took: 400.353031ms
STEP: Creating RC which spawns configmap-volume pods
Dec 27 12:54:14.945: INFO: Pod name wrapped-volume-race-fc59a25b-28a7-11ea-bad5-0242ac110005: Found 0 pods out of 5
Dec 27 12:54:20.040: INFO: Pod name wrapped-volume-race-fc59a25b-28a7-11ea-bad5-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-fc59a25b-28a7-11ea-bad5-0242ac110005 in namespace e2e-tests-emptydir-wrapper-6g2fh, will wait for the garbage collector to delete the pods
Dec 27 12:56:14.323: INFO: Deleting ReplicationController wrapped-volume-race-fc59a25b-28a7-11ea-bad5-0242ac110005 took: 36.620043ms
Dec 27 12:56:14.923: INFO: Terminating ReplicationController wrapped-volume-race-fc59a25b-28a7-11ea-bad5-0242ac110005 pods took: 600.474914ms
STEP: Creating RC which spawns configmap-volume pods
Dec 27 12:57:04.315: INFO: Pod name wrapped-volume-race-614c7fde-28a8-11ea-bad5-0242ac110005: Found 0 pods out of 5
Dec 27 12:57:09.338: INFO: Pod name wrapped-volume-race-614c7fde-28a8-11ea-bad5-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-614c7fde-28a8-11ea-bad5-0242ac110005 in namespace e2e-tests-emptydir-wrapper-6g2fh, will wait for the garbage collector to delete the pods
Dec 27 12:58:53.921: INFO: Deleting ReplicationController wrapped-volume-race-614c7fde-28a8-11ea-bad5-0242ac110005 took: 91.599533ms
Dec 27 12:58:54.321: INFO: Terminating ReplicationController wrapped-volume-race-614c7fde-28a8-11ea-bad5-0242ac110005 pods took: 400.500918ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 12:59:45.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-6g2fh" for this suite.
Dec 27 12:59:55.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 12:59:55.289: INFO: namespace: e2e-tests-emptydir-wrapper-6g2fh, resource: bindings, ignored listing per whitelist
Dec 27 12:59:55.450: INFO: namespace e2e-tests-emptydir-wrapper-6g2fh deletion completed in 10.239808882s

• [SLOW TEST:499.619 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
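The create/delete cycles above each spin up a ReplicationController whose five pods mount a batch of ConfigMap-backed volumes at once, to shake out mount races. A minimal sketch of such an RC manifest — all names and the volume count are chosen for illustration, not taken from this log:

```python
def make_wrapped_volume_rc(name, configmap_names, replicas=5):
    """Build an RC manifest whose pods mount one volume per ConfigMap.

    Illustrative sketch only: names, image, and volume count are assumptions.
    """
    volumes = [
        {"name": f"racey-cm-{i}", "configMap": {"name": cm}}
        for i, cm in enumerate(configmap_names)
    ]
    mounts = [
        {"name": v["name"], "mountPath": f"/etc/cm-{i}"}
        for i, v in enumerate(volumes)
    ]
    return {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"app": name},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": "test-container",
                        "image": "k8s.gcr.io/pause:3.1",
                        "volumeMounts": mounts,
                    }],
                    "volumes": volumes,
                },
            },
        },
    }

rc = make_wrapped_volume_rc(
    "wrapped-volume-race", [f"racey-configmap-{i}" for i in range(3)])
```

Deleting the RC (as the log shows) lets the garbage collector reap all five pods, which is why each cycle waits roughly two minutes before the next one starts.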
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 12:59:55.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 27 13:03:09.088: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:03:09.145: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:03:11.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:03:11.159: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:03:13.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:03:13.271: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:03:15.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:03:15.161: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:03:17.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:03:17.164: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:03:19.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:03:19.188: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:03:21.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:03:21.255: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:03:23.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:03:23.242: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:03:25.148: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:03:25.183: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:03:27.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:03:27.180: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:03:29.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:03:29.166: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:03:31.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:03:31.159: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:03:33.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:03:33.165: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:03:35.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:03:35.165: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:03:37.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:03:37.168: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:03:39.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:03:39.830: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:03:41.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:03:41.162: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:03:43.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:03:43.178: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:03:45.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:03:45.230: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:03:47.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:03:47.163: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:03:49.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:03:49.175: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:03:51.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:03:51.189: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:03:53.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:03:53.168: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:03:55.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:03:55.183: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:03:57.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:03:57.158: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:03:59.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:03:59.160: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:04:01.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:04:01.163: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:04:03.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:04:03.181: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:04:05.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:04:05.168: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:04:07.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:04:07.166: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:04:09.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:04:09.157: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:04:11.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:04:11.168: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:04:13.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:04:13.164: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:04:15.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:04:15.163: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:04:17.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:04:17.161: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:04:19.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:04:19.163: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:04:21.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:04:21.165: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:04:23.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:04:23.162: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:04:25.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:04:25.177: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:04:27.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:04:27.163: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:04:29.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:04:29.190: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:04:31.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:04:31.164: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:04:33.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:04:33.190: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:04:35.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:04:35.157: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:04:37.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:04:37.158: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:04:39.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:04:39.164: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:04:41.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:04:41.160: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:04:43.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:04:43.170: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:04:45.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:04:45.184: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:04:47.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:04:47.165: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:04:49.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:04:49.173: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:04:51.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:04:51.287: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:04:53.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:04:53.473: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:04:55.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:04:55.157: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:04:57.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:04:57.166: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:04:59.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:04:59.165: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:05:01.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:05:01.161: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 27 13:05:03.146: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 27 13:05:03.173: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:05:03.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-vkctg" for this suite.
Dec 27 13:05:29.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:05:29.281: INFO: namespace: e2e-tests-container-lifecycle-hook-vkctg, resource: bindings, ignored listing per whitelist
Dec 27 13:05:29.430: INFO: namespace e2e-tests-container-lifecycle-hook-vkctg deletion completed in 26.245801656s

• [SLOW TEST:333.979 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
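The pod under test here carries an exec postStart hook that reports back to the handler container created in BeforeEach. A rough sketch of that shape — the handler address and the hook's command are invented for illustration, not read from the log:

```python
def pod_with_poststart_exec_hook(handler_ip, handler_port=8080):
    """Pod whose container runs a shell command right after start (postStart).

    Illustrative sketch: image, sleep command, and echo URL are assumptions.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-with-poststart-exec-hook"},
        "spec": {
            "containers": [{
                "name": "pod-with-poststart-exec-hook",
                "image": "docker.io/library/busybox:1.29",
                "command": ["sh", "-c", "sleep 600"],
                "lifecycle": {
                    "postStart": {
                        "exec": {
                            # report to the handler pod so the test can
                            # observe that the hook actually ran
                            "command": [
                                "sh", "-c",
                                f"wget -q -O- http://{handler_ip}:"
                                f"{handler_port}/echo?msg=poststart",
                            ],
                        }
                    }
                },
            }],
        },
    }

pod = pod_with_poststart_exec_hook("10.32.0.66")  # handler IP is illustrative
```

The long "still exists" polling run above is the framework waiting out the pod's termination grace period after "delete the pod with lifecycle hook".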
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:05:29.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-8eaa9bb6-28a9-11ea-bad5-0242ac110005
STEP: Creating secret with name s-test-opt-upd-8eaa9def-28a9-11ea-bad5-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-8eaa9bb6-28a9-11ea-bad5-0242ac110005
STEP: Updating secret s-test-opt-upd-8eaa9def-28a9-11ea-bad5-0242ac110005
STEP: Creating secret with name s-test-opt-create-8eaa9e81-28a9-11ea-bad5-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:05:50.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8x7j4" for this suite.
Dec 27 13:06:14.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:06:14.908: INFO: namespace: e2e-tests-projected-8x7j4, resource: bindings, ignored listing per whitelist
Dec 27 13:06:14.915: INFO: namespace e2e-tests-projected-8x7j4 deletion completed in 24.645429706s

• [SLOW TEST:45.484 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
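This test mounts three optional secrets through a single projected volume, then deletes one, updates one, and creates one, waiting for the kubelet to re-sync the volume contents. A sketch of such a pod, with illustrative names and mount path:

```python
def projected_optional_secret_pod(del_name, upd_name, create_name):
    """Pod mounting three optional secrets via one projected volume.

    Illustrative sketch: pod/container names, image, and path are assumptions.
    """
    sources = [
        {"secret": {"name": n, "optional": True}}
        for n in (del_name, upd_name, create_name)
    ]
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-projected-secrets"},
        "spec": {
            "containers": [{
                "name": "projected-secret-volume-test",
                "image": "docker.io/library/busybox:1.29",
                "command": ["sh", "-c", "sleep 600"],
                "volumeMounts": [{
                    "name": "projected-secret-volume",
                    "mountPath": "/etc/projected-secret-volume",
                    "readOnly": True,
                }],
            }],
            "volumes": [{
                "name": "projected-secret-volume",
                "projected": {"sources": sources},
            }],
        },
    }

pod = projected_optional_secret_pod(
    "s-test-opt-del", "s-test-opt-upd", "s-test-opt-create")
```

Marking each source `optional: True` is what lets the pod keep running while the `s-test-opt-del` secret is deleted out from under it.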
SS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:06:14.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-a9c67da9-28a9-11ea-bad5-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-a9c67da9-28a9-11ea-bad5-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:07:58.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-b6kch" for this suite.
Dec 27 13:08:38.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:08:38.916: INFO: namespace: e2e-tests-configmap-b6kch, resource: bindings, ignored listing per whitelist
Dec 27 13:08:38.939: INFO: namespace e2e-tests-configmap-b6kch deletion completed in 40.236776438s

• [SLOW TEST:144.024 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
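Here a pod mounts a ConfigMap as a volume and the test then updates the ConfigMap's data, waiting for the kubelet to refresh the mounted files in place. A sketch under assumed names and keys:

```python
def configmap_volume_pod(cm_name):
    """Pod mounting ConfigMap `cm_name` as a volume.

    Illustrative sketch: container name, image, and mount path are assumptions.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-configmaps"},
        "spec": {
            "containers": [{
                "name": "configmap-volume-test",
                "image": "docker.io/library/busybox:1.29",
                "command": ["sh", "-c", "sleep 600"],
                "volumeMounts": [{"name": "configmap-volume",
                                  "mountPath": "/etc/configmap-volume"}],
            }],
            "volumes": [{"name": "configmap-volume",
                         "configMap": {"name": cm_name}}],
        },
    }

def configmap(cm_name, value):
    """The object the test updates in place (key name is illustrative)."""
    return {"apiVersion": "v1", "kind": "ConfigMap",
            "metadata": {"name": cm_name},
            "data": {"data-1": value}}

pod = configmap_volume_pod("configmap-test-upd")
updated = configmap("configmap-test-upd", "value-2")
```

The "waiting to observe update in volume" step exists because the kubelet propagates ConfigMap changes on its periodic sync, not instantly — hence the roughly 100-second gap in the timestamps above.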
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:08:38.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Dec 27 13:08:40.157: INFO: Waiting up to 5m0s for pod "pod-0037de90-28aa-11ea-bad5-0242ac110005" in namespace "e2e-tests-emptydir-z6xqf" to be "success or failure"
Dec 27 13:08:40.171: INFO: Pod "pod-0037de90-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.413843ms
Dec 27 13:08:42.192: INFO: Pod "pod-0037de90-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034985289s
Dec 27 13:08:44.228: INFO: Pod "pod-0037de90-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071536144s
Dec 27 13:08:47.031: INFO: Pod "pod-0037de90-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.874551867s
Dec 27 13:08:49.051: INFO: Pod "pod-0037de90-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.893904327s
Dec 27 13:08:51.092: INFO: Pod "pod-0037de90-28aa-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.935106331s
STEP: Saw pod success
Dec 27 13:08:51.092: INFO: Pod "pod-0037de90-28aa-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 13:08:51.106: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-0037de90-28aa-11ea-bad5-0242ac110005 container test-container: 
STEP: delete the pod
Dec 27 13:08:51.551: INFO: Waiting for pod pod-0037de90-28aa-11ea-bad5-0242ac110005 to disappear
Dec 27 13:08:51.560: INFO: Pod pod-0037de90-28aa-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:08:51.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-z6xqf" for this suite.
Dec 27 13:08:57.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:08:57.689: INFO: namespace: e2e-tests-emptydir-z6xqf, resource: bindings, ignored listing per whitelist
Dec 27 13:08:57.908: INFO: namespace e2e-tests-emptydir-z6xqf deletion completed in 6.337518346s

• [SLOW TEST:18.968 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
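The tmpfs test above creates an emptyDir with `medium: Memory` and checks the mount's permission bits from inside the container, expecting the pod to run once and reach Succeeded. A sketch — the probe command is illustrative; the real test uses its own mount-test image:

```python
def tmpfs_mode_pod():
    """Pod with a Memory-medium emptyDir; the container prints the mount mode.

    Illustrative sketch: image and probe command are assumptions.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-emptydir-tmpfs"},
        "spec": {
            "restartPolicy": "Never",  # run once, then Succeeded/Failed
            "containers": [{
                "name": "test-container",
                "image": "docker.io/library/busybox:1.29",
                # print the permission bits of the mounted volume
                "command": ["sh", "-c", "stat -c '%a' /test-volume"],
                "volumeMounts": [{"name": "test-volume",
                                  "mountPath": "/test-volume"}],
            }],
            "volumes": [{"name": "test-volume",
                         "emptyDir": {"medium": "Memory"}}],
        },
    }

pod = tmpfs_mode_pod()
```

The framework's "success or failure" polling above is exactly this pattern: a `restartPolicy: Never` pod whose phase is watched until it leaves Pending and settles in Succeeded.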
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:08:57.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 27 13:09:24.219: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 27 13:09:24.236: INFO: Pod pod-with-prestop-http-hook still exists
Dec 27 13:09:26.236: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 27 13:09:26.618: INFO: Pod pod-with-prestop-http-hook still exists
Dec 27 13:09:28.237: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 27 13:09:28.249: INFO: Pod pod-with-prestop-http-hook still exists
Dec 27 13:09:30.236: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 27 13:09:30.258: INFO: Pod pod-with-prestop-http-hook still exists
Dec 27 13:09:32.236: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 27 13:09:32.258: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:09:32.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-q5f4t" for this suite.
Dec 27 13:09:56.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:09:56.548: INFO: namespace: e2e-tests-container-lifecycle-hook-q5f4t, resource: bindings, ignored listing per whitelist
Dec 27 13:09:56.635: INFO: namespace e2e-tests-container-lifecycle-hook-q5f4t deletion completed in 24.326797697s

• [SLOW TEST:58.727 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
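The preStop variant fires an HTTP GET at the handler pod while the container is being torn down, and the test then checks the handler for the hook request. A sketch of the pod shape, with the handler IP and echo path invented for illustration:

```python
def pod_with_prestop_http_hook(handler_ip, handler_port=8080):
    """Pod whose container fires an HTTP GET at the handler pod on preStop.

    Illustrative sketch: handler address, path, and image are assumptions.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-with-prestop-http-hook"},
        "spec": {
            "containers": [{
                "name": "pod-with-prestop-http-hook",
                "image": "k8s.gcr.io/pause:3.1",
                "lifecycle": {
                    "preStop": {
                        "httpGet": {
                            "path": "/echo?msg=prestop",
                            "host": handler_ip,
                            "port": handler_port,
                            "scheme": "HTTP",
                        }
                    }
                },
            }],
        },
    }

pod = pod_with_prestop_http_hook("10.32.0.66")  # handler IP is illustrative
```

Note the ordering in the log: "check prestop hook" happens only after the pod is fully gone, since the hook runs during termination.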
SS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:09:56.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Dec 27 13:09:56.780: INFO: Waiting up to 5m0s for pod "client-containers-2de4bc3c-28aa-11ea-bad5-0242ac110005" in namespace "e2e-tests-containers-7p4cm" to be "success or failure"
Dec 27 13:09:56.789: INFO: Pod "client-containers-2de4bc3c-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.065062ms
Dec 27 13:09:58.803: INFO: Pod "client-containers-2de4bc3c-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022619704s
Dec 27 13:10:00.841: INFO: Pod "client-containers-2de4bc3c-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060396408s
Dec 27 13:10:03.103: INFO: Pod "client-containers-2de4bc3c-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.32236796s
Dec 27 13:10:05.127: INFO: Pod "client-containers-2de4bc3c-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.346926737s
Dec 27 13:10:07.141: INFO: Pod "client-containers-2de4bc3c-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.360743922s
Dec 27 13:10:09.153: INFO: Pod "client-containers-2de4bc3c-28aa-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.372737338s
STEP: Saw pod success
Dec 27 13:10:09.153: INFO: Pod "client-containers-2de4bc3c-28aa-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 13:10:09.157: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-2de4bc3c-28aa-11ea-bad5-0242ac110005 container test-container: 
STEP: delete the pod
Dec 27 13:10:09.841: INFO: Waiting for pod client-containers-2de4bc3c-28aa-11ea-bad5-0242ac110005 to disappear
Dec 27 13:10:09.880: INFO: Pod client-containers-2de4bc3c-28aa-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:10:09.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-7p4cm" for this suite.
Dec 27 13:10:16.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:10:16.251: INFO: namespace: e2e-tests-containers-7p4cm, resource: bindings, ignored listing per whitelist
Dec 27 13:10:16.288: INFO: namespace e2e-tests-containers-7p4cm deletion completed in 6.376664371s

• [SLOW TEST:19.653 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
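The "use defaults" pod deliberately sets neither `command` nor `args`, so the kubelet falls back to the image's own ENTRYPOINT and CMD. A sketch — the image is illustrative; the real test uses a purpose-built test image:

```python
def image_defaults_pod(image="docker.io/library/busybox:1.29"):
    """Pod whose container omits command and args on purpose.

    With both fields absent, the container runs the image's ENTRYPOINT/CMD.
    Illustrative sketch: pod name and image are assumptions.
    """
    container = {
        "name": "test-container",
        "image": image,
        # no "command" / "args" keys: image defaults apply
    }
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "client-containers-defaults"},
        "spec": {"restartPolicy": "Never", "containers": [container]},
    }

pod = image_defaults_pod()
```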
SSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:10:16.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 27 13:10:16.634: INFO: Waiting up to 5m0s for pod "downward-api-39b7bffc-28aa-11ea-bad5-0242ac110005" in namespace "e2e-tests-downward-api-f2xzj" to be "success or failure"
Dec 27 13:10:16.693: INFO: Pod "downward-api-39b7bffc-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 59.753586ms
Dec 27 13:10:18.713: INFO: Pod "downward-api-39b7bffc-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07890514s
Dec 27 13:10:20.732: INFO: Pod "downward-api-39b7bffc-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098759044s
Dec 27 13:10:23.072: INFO: Pod "downward-api-39b7bffc-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438389144s
Dec 27 13:10:25.635: INFO: Pod "downward-api-39b7bffc-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.001025015s
Dec 27 13:10:27.651: INFO: Pod "downward-api-39b7bffc-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.017796828s
Dec 27 13:10:29.667: INFO: Pod "downward-api-39b7bffc-28aa-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.033211919s
STEP: Saw pod success
Dec 27 13:10:29.667: INFO: Pod "downward-api-39b7bffc-28aa-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 13:10:29.674: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-39b7bffc-28aa-11ea-bad5-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 27 13:10:29.975: INFO: Waiting for pod downward-api-39b7bffc-28aa-11ea-bad5-0242ac110005 to disappear
Dec 27 13:10:30.253: INFO: Pod downward-api-39b7bffc-28aa-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:10:30.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-f2xzj" for this suite.
Dec 27 13:10:38.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:10:38.470: INFO: namespace: e2e-tests-downward-api-f2xzj, resource: bindings, ignored listing per whitelist
Dec 27 13:10:38.700: INFO: namespace e2e-tests-downward-api-f2xzj deletion completed in 8.42535932s

• [SLOW TEST:22.412 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
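The downward-API pod injects its own name, namespace, and IP into the container environment via `fieldRef`, then the test reads the container's output to verify the values. A sketch with illustrative variable names:

```python
def downward_api_env_pod():
    """Pod exposing its name, namespace, and IP to the container as env vars.

    Illustrative sketch: env var names, image, and command are assumptions;
    the fieldRef paths are the standard downward-API field selectors.
    """
    env = [
        {"name": "POD_NAME",
         "valueFrom": {"fieldRef": {"fieldPath": "metadata.name"}}},
        {"name": "POD_NAMESPACE",
         "valueFrom": {"fieldRef": {"fieldPath": "metadata.namespace"}}},
        {"name": "POD_IP",
         "valueFrom": {"fieldRef": {"fieldPath": "status.podIP"}}},
    ]
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "downward-api-env"},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "dapi-container",
                "image": "docker.io/library/busybox:1.29",
                "command": ["sh", "-c", "env"],  # dump the injected variables
                "env": env,
            }],
        },
    }

pod = downward_api_env_pod()
```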
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:10:38.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 27 13:10:38.996: INFO: PodSpec: initContainers in spec.initContainers
Dec 27 13:11:54.943: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-470f5eca-28aa-11ea-bad5-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-wz26d", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-wz26d/pods/pod-init-470f5eca-28aa-11ea-bad5-0242ac110005", UID:"4710adda-28aa-11ea-a994-fa163e34d433", ResourceVersion:"16243315", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713049039, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"996257965"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-bp7wx", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001cacb80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bp7wx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bp7wx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bp7wx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001b7ccd8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002528360), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001b7d1a0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001b7d1c0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001b7d1c8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001b7d1cc)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713049039, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713049039, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713049039, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713049039, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", 
StartTime:(*v1.Time)(0xc000d5b7a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00172f0a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00172f110)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://29d282e602962babb63a9114bab5b56d724b6db50e748bdb072f5bfaacb5a99b"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000d5b820), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000d5b7e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:11:54.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-wz26d" for this suite.
Dec 27 13:12:19.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:12:19.380: INFO: namespace: e2e-tests-init-container-wz26d, resource: bindings, ignored listing per whitelist
Dec 27 13:12:19.433: INFO: namespace e2e-tests-init-container-wz26d deletion completed in 24.315075573s

• [SLOW TEST:100.733 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
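The pod dump above ends with `QOSClass:"Guaranteed"`: the `run1` container's requests equal its limits for both cpu and memory. A simplified Python sketch of that classification rule, applied to the app containers as in this dump (the full kubelet logic has more cases; helper and field names here are hypothetical, not part of the e2e framework):

```python
def qos_class(containers):
    """Simplified Kubernetes QoS classification (sketch, not the kubelet code).

    Guaranteed: every container sets cpu and memory requests equal to limits.
    BestEffort: no container sets any requests or limits.
    Burstable:  everything in between.
    """
    any_set = False
    all_guaranteed = True
    for c in containers:
        req, lim = c.get("requests", {}), c.get("limits", {})
        if req or lim:
            any_set = True
        for res in ("cpu", "memory"):
            if req.get(res) is None or req.get(res) != lim.get(res):
                all_guaranteed = False
    if not any_set:
        return "BestEffort"
    return "Guaranteed" if all_guaranteed else "Burstable"

# run1 from the dump: requests == limits for cpu ("100m") and memory.
run1 = {"requests": {"cpu": "100m", "memory": "52428800"},
        "limits":   {"cpu": "100m", "memory": "52428800"}}
print(qos_class([run1]))  # Guaranteed
```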
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:12:19.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Dec 27 13:12:19.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-r2wdd'
Dec 27 13:12:22.108: INFO: stderr: ""
Dec 27 13:12:22.108: INFO: stdout: "pod/pause created\n"
Dec 27 13:12:22.108: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Dec 27 13:12:22.108: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-r2wdd" to be "running and ready"
Dec 27 13:12:22.162: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 53.768733ms
Dec 27 13:12:24.174: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065633373s
Dec 27 13:12:26.191: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082799575s
Dec 27 13:12:28.209: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100440193s
Dec 27 13:12:30.283: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.174641451s
Dec 27 13:12:32.356: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.24767893s
Dec 27 13:12:32.356: INFO: Pod "pause" satisfied condition "running and ready"
Dec 27 13:12:32.356: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Dec 27 13:12:32.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-r2wdd'
Dec 27 13:12:32.574: INFO: stderr: ""
Dec 27 13:12:32.574: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Dec 27 13:12:32.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-r2wdd'
Dec 27 13:12:32.708: INFO: stderr: ""
Dec 27 13:12:32.708: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          10s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Dec 27 13:12:32.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-r2wdd'
Dec 27 13:12:32.878: INFO: stderr: ""
Dec 27 13:12:32.878: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Dec 27 13:12:32.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-r2wdd'
Dec 27 13:12:33.145: INFO: stderr: ""
Dec 27 13:12:33.145: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Dec 27 13:12:33.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-r2wdd'
Dec 27 13:12:33.443: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 27 13:12:33.443: INFO: stdout: "pod \"pause\" force deleted\n"
Dec 27 13:12:33.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-r2wdd'
Dec 27 13:12:33.643: INFO: stderr: "No resources found.\n"
Dec 27 13:12:33.643: INFO: stdout: ""
Dec 27 13:12:33.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-r2wdd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 27 13:12:33.766: INFO: stderr: ""
Dec 27 13:12:33.766: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:12:33.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-r2wdd" for this suite.
Dec 27 13:12:39.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:12:39.901: INFO: namespace: e2e-tests-kubectl-r2wdd, resource: bindings, ignored listing per whitelist
Dec 27 13:12:40.145: INFO: namespace e2e-tests-kubectl-r2wdd deletion completed in 6.349152101s

• [SLOW TEST:20.711 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
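The go-template used in the cleanup step above prints only pods whose metadata lacks a `deletionTimestamp`. The same filter, sketched in Python over the JSON shape `kubectl get -o json` returns (field names as in the Pod API; the pod data below is made up for illustration):

```python
def live_pod_names(pod_list):
    """Mirror the go-template: emit metadata.name for every item
    whose metadata has no deletionTimestamp (i.e. not being deleted)."""
    return [item["metadata"]["name"]
            for item in pod_list["items"]
            if not item["metadata"].get("deletionTimestamp")]

pods = {"items": [
    {"metadata": {"name": "pause", "deletionTimestamp": "2019-12-27T13:12:33Z"}},
    {"metadata": {"name": "other"}},
]}
print(live_pod_names(pods))  # ['other']
```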
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:12:40.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 27 13:12:40.869: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8f98f70e-28aa-11ea-bad5-0242ac110005" in namespace "e2e-tests-projected-xqgqt" to be "success or failure"
Dec 27 13:12:40.901: INFO: Pod "downwardapi-volume-8f98f70e-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.426014ms
Dec 27 13:12:43.141: INFO: Pod "downwardapi-volume-8f98f70e-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.271924183s
Dec 27 13:12:45.151: INFO: Pod "downwardapi-volume-8f98f70e-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.282472511s
Dec 27 13:12:47.172: INFO: Pod "downwardapi-volume-8f98f70e-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.303267562s
Dec 27 13:12:49.187: INFO: Pod "downwardapi-volume-8f98f70e-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.31783241s
Dec 27 13:12:51.336: INFO: Pod "downwardapi-volume-8f98f70e-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.466789177s
Dec 27 13:12:53.357: INFO: Pod "downwardapi-volume-8f98f70e-28aa-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.48821472s
STEP: Saw pod success
Dec 27 13:12:53.357: INFO: Pod "downwardapi-volume-8f98f70e-28aa-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 13:12:53.373: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8f98f70e-28aa-11ea-bad5-0242ac110005 container client-container: 
STEP: delete the pod
Dec 27 13:12:54.407: INFO: Waiting for pod downwardapi-volume-8f98f70e-28aa-11ea-bad5-0242ac110005 to disappear
Dec 27 13:12:54.421: INFO: Pod downwardapi-volume-8f98f70e-28aa-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:12:54.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xqgqt" for this suite.
Dec 27 13:13:00.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:13:00.735: INFO: namespace: e2e-tests-projected-xqgqt, resource: bindings, ignored listing per whitelist
Dec 27 13:13:00.741: INFO: namespace e2e-tests-projected-xqgqt deletion completed in 6.303923883s

• [SLOW TEST:20.596 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
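Tests like this one compare cpu values in Kubernetes quantity notation, where `100m` means 0.1 core and a plain integer is a count of base units (bytes, for memory). A small decoder sketch covering only the suffixes that appear in this log (a hypothetical helper, far from the full `resource.Quantity` grammar):

```python
def parse_quantity(s):
    """Decode a small subset of Kubernetes quantity strings.

    '100m'     -> 0.1        (milli suffix, used for cpu)
    '50Mi'     -> 52428800   (binary suffix, used for memory)
    '52428800' -> 52428800   (plain base units)
    """
    binary = {"Ki": 2 ** 10, "Mi": 2 ** 20, "Gi": 2 ** 30}
    if s.endswith("m"):
        return int(s[:-1]) / 1000
    for suffix, factor in binary.items():
        if s.endswith(suffix):
            return int(s[:-2]) * factor
    return int(s)

print(parse_quantity("100m"), parse_quantity("50Mi"))  # 0.1 52428800
```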
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:13:00.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-9bb11af3-28aa-11ea-bad5-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 27 13:13:01.005: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9bb27f83-28aa-11ea-bad5-0242ac110005" in namespace "e2e-tests-projected-9fgfk" to be "success or failure"
Dec 27 13:13:01.010: INFO: Pod "pod-projected-configmaps-9bb27f83-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.312541ms
Dec 27 13:13:03.550: INFO: Pod "pod-projected-configmaps-9bb27f83-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.544799318s
Dec 27 13:13:05.582: INFO: Pod "pod-projected-configmaps-9bb27f83-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.577056553s
Dec 27 13:13:07.728: INFO: Pod "pod-projected-configmaps-9bb27f83-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.723427005s
Dec 27 13:13:09.750: INFO: Pod "pod-projected-configmaps-9bb27f83-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.745461609s
Dec 27 13:13:12.360: INFO: Pod "pod-projected-configmaps-9bb27f83-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.35528893s
Dec 27 13:13:14.373: INFO: Pod "pod-projected-configmaps-9bb27f83-28aa-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.36781944s
STEP: Saw pod success
Dec 27 13:13:14.373: INFO: Pod "pod-projected-configmaps-9bb27f83-28aa-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 13:13:14.376: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-9bb27f83-28aa-11ea-bad5-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 27 13:13:16.495: INFO: Waiting for pod pod-projected-configmaps-9bb27f83-28aa-11ea-bad5-0242ac110005 to disappear
Dec 27 13:13:16.517: INFO: Pod pod-projected-configmaps-9bb27f83-28aa-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:13:16.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9fgfk" for this suite.
Dec 27 13:13:22.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:13:22.937: INFO: namespace: e2e-tests-projected-9fgfk, resource: bindings, ignored listing per whitelist
Dec 27 13:13:23.016: INFO: namespace e2e-tests-projected-9fgfk deletion completed in 6.481270568s

• [SLOW TEST:22.275 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
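Each of these pod waits follows the same pattern visible in the elapsed-time lines: poll the phase every couple of seconds until a terminal condition or a timeout. A generic sketch of that loop, with the clock and sleep injected so it can run anywhere (names hypothetical, not the framework's own API):

```python
def wait_for(condition, timeout_s, interval_s, sleep, now):
    """Poll `condition` every `interval_s` seconds until it holds or
    `timeout_s` elapses; `sleep` and `now` are injected for testability."""
    start = now()
    while True:
        if condition():
            return True
        if now() - start >= timeout_s:
            return False
        sleep(interval_s)

# Fake clock: each sleep advances simulated time; the pod stays
# Pending for six polls, then reports Succeeded (like the log above).
t = [0.0]
phases = iter(["Pending"] * 6 + ["Succeeded"])
ok = wait_for(lambda: next(phases) == "Succeeded",
              timeout_s=300, interval_s=2,
              sleep=lambda s: t.__setitem__(0, t[0] + s),
              now=lambda: t[0])
print(ok, t[0])  # True 12.0
```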
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:13:23.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Dec 27 13:13:23.225: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:13:23.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vzldv" for this suite.
Dec 27 13:13:29.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:13:29.512: INFO: namespace: e2e-tests-kubectl-vzldv, resource: bindings, ignored listing per whitelist
Dec 27 13:13:29.580: INFO: namespace e2e-tests-kubectl-vzldv deletion completed in 6.213193246s

• [SLOW TEST:6.564 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
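`kubectl proxy -p 0` asks the operating system for an ephemeral port rather than a fixed one; the proxy then reports which port it was actually given. The same port-0 trick in plain Python sockets:

```python
import socket

# Binding to port 0 delegates port selection to the kernel;
# getsockname() reveals the port actually assigned.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
port = s.getsockname()[1]
s.close()
print(port)  # an OS-chosen ephemeral port (> 0)
```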
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:13:29.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 27 13:13:29.798: INFO: Waiting up to 5m0s for pod "downwardapi-volume-acd8366b-28aa-11ea-bad5-0242ac110005" in namespace "e2e-tests-downward-api-pclkt" to be "success or failure"
Dec 27 13:13:29.812: INFO: Pod "downwardapi-volume-acd8366b-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.985699ms
Dec 27 13:13:31.847: INFO: Pod "downwardapi-volume-acd8366b-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04987571s
Dec 27 13:13:33.881: INFO: Pod "downwardapi-volume-acd8366b-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083549975s
Dec 27 13:13:36.911: INFO: Pod "downwardapi-volume-acd8366b-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.113511241s
Dec 27 13:13:38.927: INFO: Pod "downwardapi-volume-acd8366b-28aa-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.129356932s
Dec 27 13:13:40.936: INFO: Pod "downwardapi-volume-acd8366b-28aa-11ea-bad5-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 11.138241694s
Dec 27 13:13:42.968: INFO: Pod "downwardapi-volume-acd8366b-28aa-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.170104297s
STEP: Saw pod success
Dec 27 13:13:42.968: INFO: Pod "downwardapi-volume-acd8366b-28aa-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 13:13:42.984: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-acd8366b-28aa-11ea-bad5-0242ac110005 container client-container: 
STEP: delete the pod
Dec 27 13:13:45.198: INFO: Waiting for pod downwardapi-volume-acd8366b-28aa-11ea-bad5-0242ac110005 to disappear
Dec 27 13:13:46.285: INFO: Pod downwardapi-volume-acd8366b-28aa-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:13:46.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-pclkt" for this suite.
Dec 27 13:13:52.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:13:52.579: INFO: namespace: e2e-tests-downward-api-pclkt, resource: bindings, ignored listing per whitelist
Dec 27 13:13:52.732: INFO: namespace e2e-tests-downward-api-pclkt deletion completed in 6.391680773s

• [SLOW TEST:23.152 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
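The `DefaultMode` this test exercises is an ordinary Unix permission value; in the Pod API it travels as a decimal int32, so the conventional octal `0644` serializes as `420` in JSON. A one-liner sanity check on that encoding:

```python
# DefaultMode in the Pod API is an integer; manifests usually write it
# in octal (0644), which serializes as decimal 420 in JSON.
default_mode = 0o644
print(default_mode, oct(default_mode))  # 420 0o644
```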
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:13:52.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-2h9qm
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 27 13:13:53.059: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 27 13:14:29.292: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-2h9qm PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 27 13:14:29.292: INFO: >>> kubeConfig: /root/.kube/config
Dec 27 13:14:30.817: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:14:30.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-2h9qm" for this suite.
Dec 27 13:14:54.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:14:55.196: INFO: namespace: e2e-tests-pod-network-test-2h9qm, resource: bindings, ignored listing per whitelist
Dec 27 13:14:55.200: INFO: namespace e2e-tests-pod-network-test-2h9qm deletion completed in 24.33389836s

• [SLOW TEST:62.468 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
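The check above sends the literal string `hostName` over UDP to the netserver pod and expects the pod's hostname back (`Found all expected endpoints: [netserver-0]`). A local stand-in for that exchange using plain sockets, with the server's reply hard-coded to the pod name from the log (everything here is an illustrative mock, not the e2e netserver):

```python
import socket
import threading

def serve(sock, name):
    """Mock netserver: answer the request "hostName" with a fixed name
    (the real pod replies with its own hostname)."""
    data, addr = sock.recvfrom(1024)
    if data.strip() == b"hostName":
        sock.sendto(name.encode(), addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))            # port 0: let the OS pick
port = server.getsockname()[1]
threading.Thread(target=serve, args=(server, "netserver-0"),
                 daemon=True).start()

# Client side, equivalent to: echo 'hostName' | nc -w 1 -u <pod-ip> 8081
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(1.0)                   # like nc -w 1
client.sendto(b"hostName\n", ("127.0.0.1", port))
reply = client.recv(1024).decode()
print(reply)  # netserver-0
```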
SSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:14:55.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-dfd8c343-28aa-11ea-bad5-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:15:07.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-g8mbn" for this suite.
Dec 27 13:15:31.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:15:31.659: INFO: namespace: e2e-tests-configmap-g8mbn, resource: bindings, ignored listing per whitelist
Dec 27 13:15:31.771: INFO: namespace e2e-tests-configmap-g8mbn deletion completed in 24.196574975s

• [SLOW TEST:36.571 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
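The ConfigMap binary-data test waits for both a text payload and a binary payload to show up in the mounted volume. A hand-written equivalent of the object it creates might look like the following sketch (the name suffix and the payload bytes are illustrative, not the generated values from this run; the `data`/`binaryData` field split is the real API shape):

```yaml
# Hypothetical ConfigMap mirroring "binary data should be reflected in volume":
# text entries go under data, raw bytes under binaryData (base64-encoded).
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd-example   # the real run appends a generated UID suffix
data:
  data-1: value-1                    # "pod with text data" checks this key
binaryData:
  dump.bin: AQIDBAUGBwg=             # bytes 0x01..0x08; "pod with binary data" checks this
```

When mounted as a volume, each key becomes a file, so the test can simply read `data-1` and `dump.bin` from the pod's filesystem.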
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:15:31.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 27 13:15:31.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-j99l5'
Dec 27 13:15:32.153: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 27 13:15:32.154: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Dec 27 13:15:32.195: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-5ljcw]
Dec 27 13:15:32.195: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-5ljcw" in namespace "e2e-tests-kubectl-j99l5" to be "running and ready"
Dec 27 13:15:32.346: INFO: Pod "e2e-test-nginx-rc-5ljcw": Phase="Pending", Reason="", readiness=false. Elapsed: 150.656513ms
Dec 27 13:15:34.367: INFO: Pod "e2e-test-nginx-rc-5ljcw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.171365402s
Dec 27 13:15:36.409: INFO: Pod "e2e-test-nginx-rc-5ljcw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.213613422s
Dec 27 13:15:38.552: INFO: Pod "e2e-test-nginx-rc-5ljcw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.356740034s
Dec 27 13:15:40.579: INFO: Pod "e2e-test-nginx-rc-5ljcw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.383346911s
Dec 27 13:15:42.596: INFO: Pod "e2e-test-nginx-rc-5ljcw": Phase="Running", Reason="", readiness=true. Elapsed: 10.401088702s
Dec 27 13:15:42.596: INFO: Pod "e2e-test-nginx-rc-5ljcw" satisfied condition "running and ready"
Dec 27 13:15:42.596: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-5ljcw]
Dec 27 13:15:42.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-j99l5'
Dec 27 13:15:42.958: INFO: stderr: ""
Dec 27 13:15:42.958: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Dec 27 13:15:42.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-j99l5'
Dec 27 13:15:43.153: INFO: stderr: ""
Dec 27 13:15:43.153: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:15:43.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-j99l5" for this suite.
Dec 27 13:16:07.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:16:07.506: INFO: namespace: e2e-tests-kubectl-j99l5, resource: bindings, ignored listing per whitelist
Dec 27 13:16:07.546: INFO: namespace e2e-tests-kubectl-j99l5 deletion completed in 24.336526672s

• [SLOW TEST:35.774 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
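As the stderr above notes, `kubectl run --generator=run/v1` was already deprecated in this release (and has since been removed). A rough sketch of the ReplicationController that generator produced, written as an explicit manifest (field defaults may differ slightly from the generator's exact output; the `run=<name>` label convention is what the generator used):

```yaml
# Approximate equivalent of:
#   kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine \
#     --generator=run/v1
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc        # the run/v1 generator selects on run=<name>
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine
```

On clusters where the generator no longer exists, `kubectl create -f` with a manifest like this is the replacement path the deprecation warning points at.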
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:16:07.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Dec 27 13:16:07.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Dec 27 13:16:07.950: INFO: stderr: ""
Dec 27 13:16:07.950: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:16:07.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rwpzm" for this suite.
Dec 27 13:16:14.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:16:14.151: INFO: namespace: e2e-tests-kubectl-rwpzm, resource: bindings, ignored listing per whitelist
Dec 27 13:16:14.165: INFO: namespace e2e-tests-kubectl-rwpzm deletion completed in 6.207908401s

• [SLOW TEST:6.619 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
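The raw `cluster-info` stdout captured above is full of ANSI color escapes (`\x1b[0;32m` ... `\x1b[0m`), which is why the logged string looks noisy. When post-processing output like this (e.g. to grep for "Kubernetes master"), the escapes can be stripped first. A portable POSIX-shell sketch, using the exact color sequence from the log:

```shell
#!/bin/sh
# Strip ANSI SGR color codes (ESC [ ... m) before comparing cluster-info output.
esc=$(printf '\033')                                 # literal ESC byte
colored="${esc}[0;32mKubernetes master${esc}[0m is running"
printf '%s\n' "$colored" | sed "s/${esc}\[[0-9;]*m//g"
# -> Kubernetes master is running
```

Building the ESC byte with `printf '\033'` avoids bash-only `$'...'` quoting, so the same line works under dash and BSD sed as well.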
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:16:14.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 27 13:16:14.419: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Dec 27 13:16:19.450: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 27 13:16:25.487: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 27 13:16:25.593: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-g98rr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-g98rr/deployments/test-cleanup-deployment,UID:15980b39-28ab-11ea-a994-fa163e34d433,ResourceVersion:16243896,Generation:1,CreationTimestamp:2019-12-27 13:16:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Dec 27 13:16:25.605: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:16:25.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-g98rr" for this suite.
Dec 27 13:16:33.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:16:33.881: INFO: namespace: e2e-tests-deployment-g98rr, resource: bindings, ignored listing per whitelist
Dec 27 13:16:33.923: INFO: namespace e2e-tests-deployment-g98rr deletion completed in 8.264978232s

• [SLOW TEST:19.757 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
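The detail that makes "deployment should delete old replica sets" work is visible in the struct dump above: `RevisionHistoryLimit:*0`. With the limit at zero, the Deployment controller garbage-collects superseded ReplicaSets instead of retaining them for rollback, which is exactly what the test then waits to observe. A trimmed sketch of the spec (the `revisionHistoryLimit` field is the real knob; the surrounding values are condensed from the dump):

```yaml
# With revisionHistoryLimit: 0, old ReplicaSets are deleted rather than kept
# as rollback history (default is 10).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 0        # keep no superseded ReplicaSets
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

The `MaxUnavailable:25%!,(MISSING)` noise in the dump is a Go `fmt` quirk in how the percent signs are printed, not a malformed strategy.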
SS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:16:33.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-nv256 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-nv256;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-nv256 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-nv256;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-nv256.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-nv256.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-nv256.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-nv256.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-nv256.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-nv256.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-nv256.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nv256.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-nv256.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-nv256.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-nv256.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-nv256.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-nv256.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 117.2.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.2.117_udp@PTR;check="$$(dig +tcp +noall +answer +search 117.2.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.2.117_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-nv256 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-nv256;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-nv256 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-nv256;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-nv256.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-nv256.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-nv256.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-nv256.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-nv256.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nv256.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-nv256.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nv256.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-nv256.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-nv256.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-nv256.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-nv256.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-nv256.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 117.2.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.2.117_udp@PTR;check="$$(dig +tcp +noall +answer +search 117.2.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.2.117_tcp@PTR;sleep 1; done

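The long wheezy/jessie one-liners above are mostly repetition of one pattern: `dig` each expected name over UDP and TCP, and write an `OK` marker file on success. The only non-obvious parts are the two awk transforms that derive DNS names from the pod IP: a dashed pod A record, and the reversed `in-addr.arpa` name for the PTR probe. Both isolated below, using the IP from this run (runnable without a cluster; in the real script `hostname -i` supplies the IP and the `$$` doubling is Makefile-style escaping of `$`):

```shell
#!/bin/sh
# Dashed pod A record: 10.100.2.117 -> 10-100-2-117.<ns>.pod.cluster.local
echo 10.100.2.117 | awk -F. '{print $1"-"$2"-"$3"-"$4".e2e-tests-dns-nv256.pod.cluster.local"}'
# -> 10-100-2-117.e2e-tests-dns-nv256.pod.cluster.local

# Reverse-lookup PTR name (octets reversed), matching the log's
# "dig ... 117.2.100.10.in-addr.arpa. PTR" queries:
echo 10.100.2.117 | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa"}'
# -> 117.2.100.10.in-addr.arpa
```

The transient "Unable to read ... the server could not find the requested resource" lines that follow are expected while the probe pod's result files are still being written; the loop retries until the final "DNS probes ... succeeded".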
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 27 13:16:51.482: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-nv256 from pod e2e-tests-dns-nv256/dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005)
Dec 27 13:16:51.493: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-nv256 from pod e2e-tests-dns-nv256/dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005)
Dec 27 13:16:51.507: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-nv256.svc from pod e2e-tests-dns-nv256/dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005)
Dec 27 13:16:51.517: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-nv256.svc from pod e2e-tests-dns-nv256/dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005)
Dec 27 13:16:51.522: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-nv256.svc from pod e2e-tests-dns-nv256/dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005)
Dec 27 13:16:51.526: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nv256.svc from pod e2e-tests-dns-nv256/dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005)
Dec 27 13:16:51.531: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-nv256.svc from pod e2e-tests-dns-nv256/dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005)
Dec 27 13:16:51.535: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-nv256.svc from pod e2e-tests-dns-nv256/dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005)
Dec 27 13:16:51.539: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-nv256/dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005)
Dec 27 13:16:51.544: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-nv256/dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005)
Dec 27 13:16:51.549: INFO: Unable to read 10.100.2.117_udp@PTR from pod e2e-tests-dns-nv256/dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005)
Dec 27 13:16:51.557: INFO: Unable to read 10.100.2.117_tcp@PTR from pod e2e-tests-dns-nv256/dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005)
Dec 27 13:16:51.570: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-nv256/dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005)
Dec 27 13:16:51.617: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-nv256/dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005)
Dec 27 13:16:51.624: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-nv256 from pod e2e-tests-dns-nv256/dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005)
Dec 27 13:16:51.630: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-nv256 from pod e2e-tests-dns-nv256/dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005)
Dec 27 13:16:51.635: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-nv256.svc from pod e2e-tests-dns-nv256/dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005)
Dec 27 13:16:51.639: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-nv256.svc from pod e2e-tests-dns-nv256/dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005)
Dec 27 13:16:51.644: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nv256.svc from pod e2e-tests-dns-nv256/dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005)
Dec 27 13:16:51.652: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nv256.svc from pod e2e-tests-dns-nv256/dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005)
Dec 27 13:16:51.657: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-nv256.svc from pod e2e-tests-dns-nv256/dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005)
Dec 27 13:16:51.663: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-nv256.svc from pod e2e-tests-dns-nv256/dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005)
Dec 27 13:16:51.679: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-nv256/dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005)
Dec 27 13:16:51.696: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-nv256/dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005)
Dec 27 13:16:51.703: INFO: Unable to read 10.100.2.117_udp@PTR from pod e2e-tests-dns-nv256/dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005)
Dec 27 13:16:51.712: INFO: Unable to read 10.100.2.117_tcp@PTR from pod e2e-tests-dns-nv256/dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005: the server could not find the requested resource (get pods dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005)
Dec 27 13:16:51.712: INFO: Lookups using e2e-tests-dns-nv256/dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005 failed for: [wheezy_udp@dns-test-service.e2e-tests-dns-nv256 wheezy_tcp@dns-test-service.e2e-tests-dns-nv256 wheezy_udp@dns-test-service.e2e-tests-dns-nv256.svc wheezy_tcp@dns-test-service.e2e-tests-dns-nv256.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-nv256.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nv256.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-nv256.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-nv256.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.100.2.117_udp@PTR 10.100.2.117_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-nv256 jessie_tcp@dns-test-service.e2e-tests-dns-nv256 jessie_udp@dns-test-service.e2e-tests-dns-nv256.svc jessie_tcp@dns-test-service.e2e-tests-dns-nv256.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nv256.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nv256.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-nv256.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-nv256.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.100.2.117_udp@PTR 10.100.2.117_tcp@PTR]

Dec 27 13:16:57.104: INFO: DNS probes using e2e-tests-dns-nv256/dns-test-1b383d3d-28ab-11ea-bad5-0242ac110005 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:16:57.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-nv256" for this suite.
Dec 27 13:17:05.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:17:05.765: INFO: namespace: e2e-tests-dns-nv256, resource: bindings, ignored listing per whitelist
Dec 27 13:17:05.878: INFO: namespace e2e-tests-dns-nv256 deletion completed in 8.251478594s

• [SLOW TEST:31.955 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:17:05.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Dec 27 13:17:06.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-5wzxz'
Dec 27 13:17:06.772: INFO: stderr: ""
Dec 27 13:17:06.772: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 27 13:17:06.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-5wzxz'
Dec 27 13:17:06.942: INFO: stderr: ""
Dec 27 13:17:06.942: INFO: stdout: "update-demo-nautilus-jmk2w update-demo-nautilus-xhkwz "
Dec 27 13:17:06.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jmk2w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5wzxz'
Dec 27 13:17:07.112: INFO: stderr: ""
Dec 27 13:17:07.112: INFO: stdout: ""
Dec 27 13:17:07.112: INFO: update-demo-nautilus-jmk2w is created but not running
Dec 27 13:17:12.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-5wzxz'
Dec 27 13:17:13.020: INFO: stderr: ""
Dec 27 13:17:13.020: INFO: stdout: "update-demo-nautilus-jmk2w update-demo-nautilus-xhkwz "
Dec 27 13:17:13.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jmk2w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5wzxz'
Dec 27 13:17:13.844: INFO: stderr: ""
Dec 27 13:17:13.844: INFO: stdout: ""
Dec 27 13:17:13.844: INFO: update-demo-nautilus-jmk2w is created but not running
Dec 27 13:17:18.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-5wzxz'
Dec 27 13:17:19.021: INFO: stderr: ""
Dec 27 13:17:19.021: INFO: stdout: "update-demo-nautilus-jmk2w update-demo-nautilus-xhkwz "
Dec 27 13:17:19.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jmk2w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5wzxz'
Dec 27 13:17:19.156: INFO: stderr: ""
Dec 27 13:17:19.156: INFO: stdout: "true"
Dec 27 13:17:19.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jmk2w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5wzxz'
Dec 27 13:17:19.370: INFO: stderr: ""
Dec 27 13:17:19.370: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 27 13:17:19.370: INFO: validating pod update-demo-nautilus-jmk2w
Dec 27 13:17:19.405: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 27 13:17:19.405: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 27 13:17:19.405: INFO: update-demo-nautilus-jmk2w is verified up and running
Dec 27 13:17:19.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xhkwz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5wzxz'
Dec 27 13:17:19.578: INFO: stderr: ""
Dec 27 13:17:19.578: INFO: stdout: "true"
Dec 27 13:17:19.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xhkwz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5wzxz'
Dec 27 13:17:19.719: INFO: stderr: ""
Dec 27 13:17:19.719: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 27 13:17:19.719: INFO: validating pod update-demo-nautilus-xhkwz
Dec 27 13:17:19.733: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 27 13:17:19.733: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 27 13:17:19.733: INFO: update-demo-nautilus-xhkwz is verified up and running
STEP: using delete to clean up resources
Dec 27 13:17:19.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-5wzxz'
Dec 27 13:17:19.995: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 27 13:17:19.995: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 27 13:17:19.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-5wzxz'
Dec 27 13:17:20.441: INFO: stderr: "No resources found.\n"
Dec 27 13:17:20.441: INFO: stdout: ""
Dec 27 13:17:20.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-5wzxz -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 27 13:17:20.587: INFO: stderr: ""
Dec 27 13:17:20.587: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:17:20.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5wzxz" for this suite.
Dec 27 13:17:46.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:17:46.944: INFO: namespace: e2e-tests-kubectl-5wzxz, resource: bindings, ignored listing per whitelist
Dec 27 13:17:46.946: INFO: namespace e2e-tests-kubectl-5wzxz deletion completed in 26.318125194s

• [SLOW TEST:41.069 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
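The Update Demo spec above repeats the same `kubectl get pods ... --template=...` query until the `update-demo` container reports `state.running` (the harness waits roughly 5s between attempts, as the timestamps show). A minimal sketch of that poll loop, with the kubectl call stubbed out so it runs without a cluster — `check_running` is a hypothetical stand-in for the template query shown in the log, hard-coded to succeed on the third attempt:

```shell
#!/bin/sh
# Stub standing in for the kubectl go-template query in the log above:
#   kubectl get pods <pod> -o template --template='{{if (exists . "status" "containerStatuses")}}...'
# which prints "true" once the container is running. Here it is faked
# (succeeds on the 3rd call) so the loop itself is runnable anywhere.
check_running() {
  if [ "$1" -ge 3 ]; then echo true; else echo ""; fi
}

i=0
status=""
# Poll until the container reports running, bounded at 10 attempts.
# The real harness sleeps ~5s between attempts; omitted in this sketch.
while [ "$i" -lt 10 ] && [ "$status" != "true" ]; do
  i=$((i + 1))
  status=$(check_running "$i")
done
echo "running after $i checks"
```

With the stub, this prints `running after 3 checks`, mirroring the two "created but not running" iterations before the `stdout: "true"` line in the log.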
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:17:46.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:17:57.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-q2hxq" for this suite.
Dec 27 13:18:03.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:18:03.684: INFO: namespace: e2e-tests-emptydir-wrapper-q2hxq, resource: bindings, ignored listing per whitelist
Dec 27 13:18:03.715: INFO: namespace e2e-tests-emptydir-wrapper-q2hxq deletion completed in 6.250329251s

• [SLOW TEST:16.769 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:18:03.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-50545bac-28ab-11ea-bad5-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 27 13:18:04.176: INFO: Waiting up to 5m0s for pod "pod-configmaps-50570d42-28ab-11ea-bad5-0242ac110005" in namespace "e2e-tests-configmap-gpk5v" to be "success or failure"
Dec 27 13:18:04.232: INFO: Pod "pod-configmaps-50570d42-28ab-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 56.043669ms
Dec 27 13:18:06.651: INFO: Pod "pod-configmaps-50570d42-28ab-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.474680565s
Dec 27 13:18:08.667: INFO: Pod "pod-configmaps-50570d42-28ab-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.490916306s
Dec 27 13:18:10.723: INFO: Pod "pod-configmaps-50570d42-28ab-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.547247665s
Dec 27 13:18:12.736: INFO: Pod "pod-configmaps-50570d42-28ab-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.559952045s
Dec 27 13:18:14.748: INFO: Pod "pod-configmaps-50570d42-28ab-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.572071318s
STEP: Saw pod success
Dec 27 13:18:14.748: INFO: Pod "pod-configmaps-50570d42-28ab-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 13:18:14.752: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-50570d42-28ab-11ea-bad5-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 27 13:18:15.454: INFO: Waiting for pod pod-configmaps-50570d42-28ab-11ea-bad5-0242ac110005 to disappear
Dec 27 13:18:15.811: INFO: Pod pod-configmaps-50570d42-28ab-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:18:15.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-gpk5v" for this suite.
Dec 27 13:18:23.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:18:24.022: INFO: namespace: e2e-tests-configmap-gpk5v, resource: bindings, ignored listing per whitelist
Dec 27 13:18:24.226: INFO: namespace e2e-tests-configmap-gpk5v deletion completed in 8.382639723s

• [SLOW TEST:20.510 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:18:24.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 27 13:18:35.318: INFO: Successfully updated pod "annotationupdate5c91aa1e-28ab-11ea-bad5-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:18:37.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-scg8q" for this suite.
Dec 27 13:19:05.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:19:05.637: INFO: namespace: e2e-tests-downward-api-scg8q, resource: bindings, ignored listing per whitelist
Dec 27 13:19:05.676: INFO: namespace e2e-tests-downward-api-scg8q deletion completed in 28.260698171s

• [SLOW TEST:41.449 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:19:05.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 27 13:19:05.964: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 22.442144ms)
Dec 27 13:19:05.971: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.550953ms)
Dec 27 13:19:05.976: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.764276ms)
Dec 27 13:19:05.981: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.511394ms)
Dec 27 13:19:05.985: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.341506ms)
Dec 27 13:19:05.989: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.18861ms)
Dec 27 13:19:05.994: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.422288ms)
Dec 27 13:19:05.998: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.281729ms)
Dec 27 13:19:06.004: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.299274ms)
Dec 27 13:19:06.009: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.193503ms)
Dec 27 13:19:06.015: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.889416ms)
Dec 27 13:19:06.021: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.157314ms)
Dec 27 13:19:06.027: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.800046ms)
Dec 27 13:19:06.033: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.06566ms)
Dec 27 13:19:06.045: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.062923ms)
Dec 27 13:19:06.050: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.782906ms)
Dec 27 13:19:06.057: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.595779ms)
Dec 27 13:19:06.065: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.696641ms)
Dec 27 13:19:06.074: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.763095ms)
Dec 27 13:19:06.081: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.684227ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:19:06.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-sp54f" for this suite.
Dec 27 13:19:12.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:19:12.337: INFO: namespace: e2e-tests-proxy-sp54f, resource: bindings, ignored listing per whitelist
Dec 27 13:19:12.407: INFO: namespace e2e-tests-proxy-sp54f deletion completed in 6.320756778s

• [SLOW TEST:6.731 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:19:12.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Dec 27 13:19:24.858: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-79443fa1-28ab-11ea-bad5-0242ac110005,GenerateName:,Namespace:e2e-tests-events-w9wlv,SelfLink:/api/v1/namespaces/e2e-tests-events-w9wlv/pods/send-events-79443fa1-28ab-11ea-bad5-0242ac110005,UID:7946a980-28ab-11ea-a994-fa163e34d433,ResourceVersion:16244331,Generation:0,CreationTimestamp:2019-12-27 13:19:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 725608894,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vx6sj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vx6sj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-vx6sj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0029758a0} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc0029758c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 13:19:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 13:19:23 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 13:19:23 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 13:19:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-27 13:19:12 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-12-27 13:19:22 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://94009a55e99b81d3361f41902efb71e6c551a95e5a6a3cf5ee3092c3682315bb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Dec 27 13:19:26.944: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Dec 27 13:19:28.976: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:19:29.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-w9wlv" for this suite.
Dec 27 13:20:11.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:20:11.631: INFO: namespace: e2e-tests-events-w9wlv, resource: bindings, ignored listing per whitelist
Dec 27 13:20:11.670: INFO: namespace e2e-tests-events-w9wlv deletion completed in 42.3225131s

• [SLOW TEST:59.263 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:20:11.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 27 13:20:12.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Dec 27 13:20:12.127: INFO: stderr: ""
Dec 27 13:20:12.127: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Dec 27 13:20:12.137: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:20:12.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-c7tjs" for this suite.
Dec 27 13:20:18.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:20:18.257: INFO: namespace: e2e-tests-kubectl-c7tjs, resource: bindings, ignored listing per whitelist
Dec 27 13:20:18.382: INFO: namespace e2e-tests-kubectl-c7tjs deletion completed in 6.222224137s

S [SKIPPING] [6.712 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Dec 27 13:20:12.137: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:20:18.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 27 13:20:18.684: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:20:45.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-w5r4l" for this suite.
Dec 27 13:21:09.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:21:09.931: INFO: namespace: e2e-tests-init-container-w5r4l, resource: bindings, ignored listing per whitelist
Dec 27 13:21:10.049: INFO: namespace e2e-tests-init-container-w5r4l deletion completed in 24.375187084s

• [SLOW TEST:51.667 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:21:10.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-bf58f2bf-28ab-11ea-bad5-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 27 13:21:10.380: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bf59a68e-28ab-11ea-bad5-0242ac110005" in namespace "e2e-tests-projected-8fprw" to be "success or failure"
Dec 27 13:21:10.401: INFO: Pod "pod-projected-secrets-bf59a68e-28ab-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.828408ms
Dec 27 13:21:12.582: INFO: Pod "pod-projected-secrets-bf59a68e-28ab-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201513381s
Dec 27 13:21:14.606: INFO: Pod "pod-projected-secrets-bf59a68e-28ab-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.22538403s
Dec 27 13:21:16.986: INFO: Pod "pod-projected-secrets-bf59a68e-28ab-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.605396609s
Dec 27 13:21:19.006: INFO: Pod "pod-projected-secrets-bf59a68e-28ab-11ea-bad5-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.625249316s
Dec 27 13:21:21.016: INFO: Pod "pod-projected-secrets-bf59a68e-28ab-11ea-bad5-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.635883519s
STEP: Saw pod success
Dec 27 13:21:21.016: INFO: Pod "pod-projected-secrets-bf59a68e-28ab-11ea-bad5-0242ac110005" satisfied condition "success or failure"
Dec 27 13:21:21.021: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-bf59a68e-28ab-11ea-bad5-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 27 13:21:21.622: INFO: Waiting for pod pod-projected-secrets-bf59a68e-28ab-11ea-bad5-0242ac110005 to disappear
Dec 27 13:21:22.077: INFO: Pod pod-projected-secrets-bf59a68e-28ab-11ea-bad5-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:21:22.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8fprw" for this suite.
Dec 27 13:21:28.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:21:28.251: INFO: namespace: e2e-tests-projected-8fprw, resource: bindings, ignored listing per whitelist
Dec 27 13:21:28.266: INFO: namespace e2e-tests-projected-8fprw deletion completed in 6.175051691s

• [SLOW TEST:18.217 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:21:28.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 27 13:21:28.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:21:39.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-gvhjx" for this suite.
Dec 27 13:22:35.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:22:35.457: INFO: namespace: e2e-tests-pods-gvhjx, resource: bindings, ignored listing per whitelist
Dec 27 13:22:35.511: INFO: namespace e2e-tests-pods-gvhjx deletion completed in 56.221490429s

• [SLOW TEST:67.244 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:22:35.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-f258d811-28ab-11ea-bad5-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-f258d894-28ab-11ea-bad5-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-f258d811-28ab-11ea-bad5-0242ac110005
STEP: Updating configmap cm-test-opt-upd-f258d894-28ab-11ea-bad5-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-f258d907-28ab-11ea-bad5-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:22:56.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-w2v5r" for this suite.
Dec 27 13:23:22.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:23:22.990: INFO: namespace: e2e-tests-projected-w2v5r, resource: bindings, ignored listing per whitelist
Dec 27 13:23:23.341: INFO: namespace e2e-tests-projected-w2v5r deletion completed in 26.445793814s

• [SLOW TEST:47.830 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:23:23.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 27 13:23:36.304: INFO: Successfully updated pod "annotationupdate0eca29f7-28ac-11ea-bad5-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:23:38.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-sfz4m" for this suite.
Dec 27 13:24:04.537: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:24:04.648: INFO: namespace: e2e-tests-projected-sfz4m, resource: bindings, ignored listing per whitelist
Dec 27 13:24:04.780: INFO: namespace e2e-tests-projected-sfz4m deletion completed in 26.298504199s

• [SLOW TEST:41.438 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 27 13:24:04.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Dec 27 13:24:05.119: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-s6t6w" to be "success or failure"
Dec 27 13:24:05.127: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 7.907472ms
Dec 27 13:24:07.436: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316832532s
Dec 27 13:24:09.476: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.356903925s
Dec 27 13:24:12.046: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.926730245s
Dec 27 13:24:14.085: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.965389612s
Dec 27 13:24:16.123: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.00414351s
Dec 27 13:24:18.137: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 13.018060156s
Dec 27 13:24:20.188: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 15.069004747s
Dec 27 13:24:22.527: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.408187988s
STEP: Saw pod success
Dec 27 13:24:22.528: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Dec 27 13:24:22.574: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Dec 27 13:24:23.123: INFO: Waiting for pod pod-host-path-test to disappear
Dec 27 13:24:23.157: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 27 13:24:23.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-s6t6w" for this suite.
Dec 27 13:24:31.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 27 13:24:31.520: INFO: namespace: e2e-tests-hostpath-s6t6w, resource: bindings, ignored listing per whitelist
Dec 27 13:24:31.619: INFO: namespace e2e-tests-hostpath-s6t6w deletion completed in 8.241899852s

• [SLOW TEST:26.839 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
Dec 27 13:24:31.619: INFO: Running AfterSuite actions on all nodes
Dec 27 13:24:31.619: INFO: Running AfterSuite actions on node 1
Dec 27 13:24:31.619: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 9425.168 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS