I0503 10:46:43.709888 6 e2e.go:224] Starting e2e run "60a214d1-8d2b-11ea-b78d-0242ac110017" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1588502803 - Will randomize all specs
Will run 201 of 2164 specs
May 3 10:46:43.901: INFO: >>> kubeConfig: /root/.kube/config
May 3 10:46:43.905: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 3 10:46:43.921: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 3 10:46:43.955: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 3 10:46:43.955: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 3 10:46:43.955: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 3 10:46:43.965: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 3 10:46:43.965: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 3 10:46:43.965: INFO: e2e test version: v1.13.12
May 3 10:46:43.966: INFO: kube-apiserver version: v1.13.12
SSSSS
------------------------------
[k8s.io] Probing container
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 3 10:46:43.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
May 3 10:46:44.140: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-kqnrt
May 3 10:46:48.165: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-kqnrt
STEP: checking the pod's current state and verifying that restartCount is present
May 3 10:46:48.168: INFO: Initial restart count of pod liveness-http is 0
May 3 10:47:14.227: INFO: Restart count of pod e2e-tests-container-probe-kqnrt/liveness-http is now 1 (26.059090837s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 3 10:47:14.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-kqnrt" for this suite.
May 3 10:47:20.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 3 10:47:20.368: INFO: namespace: e2e-tests-container-probe-kqnrt, resource: bindings, ignored listing per whitelist
May 3 10:47:20.387: INFO: namespace e2e-tests-container-probe-kqnrt deletion completed in 6.102382387s
• [SLOW TEST:36.421 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
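The liveness-http pod exercised above is restarted because its HTTP liveness probe against /healthz starts failing, which is what drives the restartCount increment in the log. A minimal sketch of that kind of pod spec, written against the v1.13-era k8s.io/api and k8s.io/apimachinery types (the image, port, and threshold values are illustrative, not the exact ones the conformance test uses):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// livenessHTTPPod sketches a pod whose container the kubelet restarts once
// GET /healthz begins failing, the behaviour observed in the test above.
func livenessHTTPPod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http", Namespace: namespace},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "k8s.gcr.io/liveness", // illustrative image
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // renamed ProbeHandler in newer API versions
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(8080),
						},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
}

func main() {
	fmt.Println(livenessHTTPPod("e2e-tests-container-probe-kqnrt").Name)
}
```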
SSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 3 10:47:20.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 3 10:47:20.523: INFO: Waiting up to 5m0s for pod "downwardapi-volume-76d53437-8d2b-11ea-b78d-0242ac110017" in namespace "e2e-tests-downward-api-ghjk9" to be "success or failure"
May 3 10:47:20.535: INFO: Pod "downwardapi-volume-76d53437-8d2b-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 11.880291ms
May 3 10:47:22.540: INFO: Pod "downwardapi-volume-76d53437-8d2b-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016280482s
May 3 10:47:24.544: INFO: Pod "downwardapi-volume-76d53437-8d2b-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020905283s
STEP: Saw pod success
May 3 10:47:24.544: INFO: Pod "downwardapi-volume-76d53437-8d2b-11ea-b78d-0242ac110017" satisfied condition "success or failure"
May 3 10:47:24.548: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-76d53437-8d2b-11ea-b78d-0242ac110017 container client-container:
STEP: delete the pod
May 3 10:47:24.574: INFO: Waiting for pod downwardapi-volume-76d53437-8d2b-11ea-b78d-0242ac110017 to disappear
May 3 10:47:24.602: INFO: Pod downwardapi-volume-76d53437-8d2b-11ea-b78d-0242ac110017 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 3 10:47:24.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ghjk9" for this suite.
May 3 10:47:30.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 3 10:47:30.697: INFO: namespace: e2e-tests-downward-api-ghjk9, resource: bindings, ignored listing per whitelist
May 3 10:47:30.760: INFO: namespace e2e-tests-downward-api-ghjk9 deletion completed in 6.154230775s
• [SLOW TEST:10.374 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
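The downward API volume test above checks that a container can read its own cpu limit from a file projected into the pod. A minimal sketch of that wiring, under the same v1.13-era API assumptions as the earlier snippet (image, mount path, and limit value are illustrative):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPICPULimitPod sketches a pod whose container reads its own cpu
// limit from a downwardAPI volume file, the mechanism this test verifies.
func downwardAPICPULimitPod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "downwardapi-volume-", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29", // illustrative; the e2e suite uses its own test image
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Println(downwardAPICPULimitPod("e2e-tests-downward-api-ghjk9").Spec.Volumes[0].Name)
}
```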
[k8s.io] Docker Containers
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 3 10:47:30.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
May 3 10:47:30.868: INFO: Waiting up to 5m0s for pod "client-containers-7d0124bd-8d2b-11ea-b78d-0242ac110017" in namespace "e2e-tests-containers-9nhkl" to be "success or failure"
May 3 10:47:30.872: INFO: Pod "client-containers-7d0124bd-8d2b-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.219765ms
May 3 10:47:32.876: INFO: Pod "client-containers-7d0124bd-8d2b-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008039366s
May 3 10:47:34.897: INFO: Pod "client-containers-7d0124bd-8d2b-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029361101s
STEP: Saw pod success
May 3 10:47:34.897: INFO: Pod "client-containers-7d0124bd-8d2b-11ea-b78d-0242ac110017" satisfied condition "success or failure"
May 3 10:47:34.901: INFO: Trying to get logs from node hunter-worker pod client-containers-7d0124bd-8d2b-11ea-b78d-0242ac110017 container test-container:
STEP: delete the pod
May 3 10:47:35.036: INFO: Waiting for pod client-containers-7d0124bd-8d2b-11ea-b78d-0242ac110017 to disappear
May 3 10:47:35.064: INFO: Pod client-containers-7d0124bd-8d2b-11ea-b78d-0242ac110017 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 3 10:47:35.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-9nhkl" for this suite.
May 3 10:47:41.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 3 10:47:41.198: INFO: namespace: e2e-tests-containers-9nhkl, resource: bindings, ignored listing per whitelist
May 3 10:47:41.206: INFO: namespace e2e-tests-containers-9nhkl deletion completed in 6.139149625s
• [SLOW TEST:10.446 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
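The "override all" case in the test above sets both Command and Args on the container, which replace the image's ENTRYPOINT and CMD respectively. A minimal sketch, with illustrative image and command strings rather than the test's exact values:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// overrideAllPod sketches a container whose Command overrides the image's
// ENTRYPOINT and whose Args override its CMD, the "override all" case above.
func overrideAllPod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "client-containers-", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "docker.io/library/busybox:1.29",               // illustrative image
				Command: []string{"/bin/sh", "-c"},                      // overrides ENTRYPOINT
				Args:    []string{"echo overridden command and args"},   // overrides CMD
			}},
		},
	}
}

func main() {
	fmt.Println(overrideAllPod("e2e-tests-containers-9nhkl").Spec.Containers[0].Command)
}
```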
[sig-storage] Secrets
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 3 10:47:41.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-833db91a-8d2b-11ea-b78d-0242ac110017
STEP: Creating a pod to test consume secrets
May 3 10:47:41.418: INFO: Waiting up to 5m0s for pod "pod-secrets-834b9f4b-8d2b-11ea-b78d-0242ac110017" in namespace "e2e-tests-secrets-8jhh8" to be "success or failure"
May 3 10:47:41.431: INFO: Pod "pod-secrets-834b9f4b-8d2b-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 12.605933ms
May 3 10:47:43.435: INFO: Pod "pod-secrets-834b9f4b-8d2b-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01695981s
May 3 10:47:45.440: INFO: Pod "pod-secrets-834b9f4b-8d2b-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02197285s
STEP: Saw pod success
May 3 10:47:45.440: INFO: Pod "pod-secrets-834b9f4b-8d2b-11ea-b78d-0242ac110017" satisfied condition "success or failure"
May 3 10:47:45.444: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-834b9f4b-8d2b-11ea-b78d-0242ac110017 container secret-volume-test:
STEP: delete the pod
May 3 10:47:45.719: INFO: Waiting for pod pod-secrets-834b9f4b-8d2b-11ea-b78d-0242ac110017 to disappear
May 3 10:47:45.746: INFO: Pod pod-secrets-834b9f4b-8d2b-11ea-b78d-0242ac110017 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 3 10:47:45.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-8jhh8" for this suite.
May 3 10:47:51.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 3 10:47:51.781: INFO: namespace: e2e-tests-secrets-8jhh8, resource: bindings, ignored listing per whitelist
May 3 10:47:51.839: INFO: namespace e2e-tests-secrets-8jhh8 deletion completed in 6.089296146s
STEP: Destroying namespace "e2e-tests-secret-namespace-mghp6" for this suite.
May 3 10:47:57.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 3 10:47:57.872: INFO: namespace: e2e-tests-secret-namespace-mghp6, resource: bindings, ignored listing per whitelist
May 3 10:47:57.931: INFO: namespace e2e-tests-secret-namespace-mghp6 deletion completed in 6.092380121s
• [SLOW TEST:16.724 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose
  should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 3 10:47:57.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
May 3 10:47:58.016: INFO: namespace e2e-tests-kubectl-dvhqf
May 3 10:47:58.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-dvhqf'
May 3 10:48:00.722: INFO: stderr: ""
May 3 10:48:00.722: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
May 3 10:48:01.726: INFO: Selector matched 1 pods for map[app:redis]
May 3 10:48:01.726: INFO: Found 0 / 1
May 3 10:48:02.727: INFO: Selector matched 1 pods for map[app:redis]
May 3 10:48:02.727: INFO: Found 0 / 1
May 3 10:48:03.727: INFO: Selector matched 1 pods for map[app:redis]
May 3 10:48:03.727: INFO: Found 0 / 1
May 3 10:48:04.727: INFO: Selector matched 1 pods for map[app:redis]
May 3 10:48:04.727: INFO: Found 1 / 1
May 3 10:48:04.727: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
May 3 10:48:04.731: INFO: Selector matched 1 pods for map[app:redis]
May 3 10:48:04.731: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 3 10:48:04.731: INFO: wait on redis-master startup in e2e-tests-kubectl-dvhqf
May 3 10:48:04.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-vm52f redis-master --namespace=e2e-tests-kubectl-dvhqf'
May 3 10:48:04.844: INFO: stderr: ""
May 3 10:48:04.844: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 03 May 10:48:03.660 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 03 May 10:48:03.660 # Server started, Redis version 3.2.12\n1:M 03 May 10:48:03.660 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 03 May 10:48:03.660 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
May 3 10:48:04.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-dvhqf'
May 3 10:48:04.980: INFO: stderr: ""
May 3 10:48:04.980: INFO: stdout: "service/rm2 exposed\n"
May 3 10:48:04.992: INFO: Service rm2 in namespace e2e-tests-kubectl-dvhqf found.
STEP: exposing service
May 3 10:48:07.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-dvhqf'
May 3 10:48:07.196: INFO: stderr: ""
May 3 10:48:07.196: INFO: stdout: "service/rm3 exposed\n"
May 3 10:48:07.203: INFO: Service rm3 in namespace e2e-tests-kubectl-dvhqf found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 3 10:48:09.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-dvhqf" for this suite.
May 3 10:48:33.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 3 10:48:33.306: INFO: namespace: e2e-tests-kubectl-dvhqf, resource: bindings, ignored listing per whitelist
May 3 10:48:33.370: INFO: namespace e2e-tests-kubectl-dvhqf deletion completed in 24.125774735s
• [SLOW TEST:35.439 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 3 10:48:33.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
May 3 10:48:38.392: INFO: Pod pod-hostip-a2bb005f-8d2b-11ea-b78d-0242ac110017 has hostIP: 172.17.0.4
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 3 10:48:38.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-npz28" for this suite.
May 3 10:49:00.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 3 10:49:00.461: INFO: namespace: e2e-tests-pods-npz28, resource: bindings, ignored listing per whitelist
May 3 10:49:00.490: INFO: namespace e2e-tests-pods-npz28 deletion completed in 22.094913131s
• [SLOW TEST:27.120 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
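The host IP the test reports is simply the pod's .status.hostIP, populated by the kubelet once the pod is bound to a node. A minimal sketch of reading it with client-go, assuming a client-go release contemporary with this v1.13 run (whose Get call takes no context argument; newer versions add a context.Context first) and the kubeconfig path shown in the log:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as used by the e2e run above.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Fetch the pod and read the node IP the kubelet reported for it.
	// (Old client-go signature; newer releases require a context.Context.)
	pod, err := client.CoreV1().Pods("e2e-tests-pods-npz28").
		Get("pod-hostip-a2bb005f-8d2b-11ea-b78d-0242ac110017", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("hostIP:", pod.Status.HostIP)
}
```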
[sig-storage] HostPath
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 3 10:49:00.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
May 3 10:49:00.672: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-8svz9" to be "success or failure"
May 3 10:49:00.714: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 41.269487ms
May 3 10:49:02.718: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046089662s
May 3 10:49:04.722: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049597759s
May 3 10:49:06.726: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.053760097s
STEP: Saw pod success
May 3 10:49:06.726: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
May 3 10:49:06.729: INFO: Trying to get logs from node hunter-worker pod pod-host-path-test container test-container-1:
STEP: delete the pod
May 3 10:49:06.763: INFO: Waiting for pod pod-host-path-test to disappear
May 3 10:49:06.814: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 3 10:49:06.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-8svz9" for this suite.
May 3 10:49:12.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 3 10:49:12.859: INFO: namespace: e2e-tests-hostpath-8svz9, resource: bindings, ignored listing per whitelist
May 3 10:49:12.906: INFO: namespace e2e-tests-hostpath-8svz9 deletion completed in 6.088395169s
• [SLOW TEST:12.416 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 3 10:49:12.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 3 10:49:13.194: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b9fdd96a-8d2b-11ea-b78d-0242ac110017" in namespace "e2e-tests-projected-lgmt7" to be "success or failure"
May 3 10:49:13.282: INFO: Pod "downwardapi-volume-b9fdd96a-8d2b-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 87.354319ms
May 3 10:49:15.286: INFO: Pod "downwardapi-volume-b9fdd96a-8d2b-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091379921s
May 3 10:49:17.290: INFO: Pod "downwardapi-volume-b9fdd96a-8d2b-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095540164s
STEP: Saw pod success
May 3 10:49:17.290: INFO: Pod "downwardapi-volume-b9fdd96a-8d2b-11ea-b78d-0242ac110017" satisfied condition "success or failure"
May 3 10:49:17.293: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-b9fdd96a-8d2b-11ea-b78d-0242ac110017 container client-container:
STEP: delete the pod
May 3 10:49:17.473: INFO: Waiting for pod downwardapi-volume-b9fdd96a-8d2b-11ea-b78d-0242ac110017 to disappear
May 3 10:49:17.527: INFO: Pod downwardapi-volume-b9fdd96a-8d2b-11ea-b78d-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 3 10:49:17.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lgmt7" for this suite.
May 3 10:49:23.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 3 10:49:23.627: INFO: namespace: e2e-tests-projected-lgmt7, resource: bindings, ignored listing per whitelist
May 3 10:49:23.673: INFO: namespace e2e-tests-projected-lgmt7 deletion completed in 6.142510597s
• [SLOW TEST:10.767 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 3 10:49:23.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-c04e212d-8d2b-11ea-b78d-0242ac110017
STEP: Creating a pod to test consume configMaps
May 3 10:49:23.793: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c050da2b-8d2b-11ea-b78d-0242ac110017" in namespace "e2e-tests-projected-9p87w" to be "success or failure"
May 3 10:49:23.797: INFO: Pod "pod-projected-configmaps-c050da2b-8d2b-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033632ms
May 3 10:49:25.802: INFO: Pod "pod-projected-configmaps-c050da2b-8d2b-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008636037s
May 3 10:49:27.806: INFO: Pod "pod-projected-configmaps-c050da2b-8d2b-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013101775s
STEP: Saw pod success
May 3 10:49:27.806: INFO: Pod "pod-projected-configmaps-c050da2b-8d2b-11ea-b78d-0242ac110017" satisfied condition "success or failure"
May 3 10:49:27.810: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-c050da2b-8d2b-11ea-b78d-0242ac110017 container projected-configmap-volume-test:
STEP: delete the pod
May 3 10:49:27.834: INFO: Waiting for pod pod-projected-configmaps-c050da2b-8d2b-11ea-b78d-0242ac110017 to disappear
May 3 10:49:27.838: INFO: Pod pod-projected-configmaps-c050da2b-8d2b-11ea-b78d-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 3 10:49:27.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9p87w" for this suite.
May 3 10:49:33.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 3 10:49:33.948: INFO: namespace: e2e-tests-projected-9p87w, resource: bindings, ignored listing per whitelist
May 3 10:49:33.954: INFO: namespace e2e-tests-projected-9p87w deletion completed in 6.112925787s
• [SLOW TEST:10.281 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
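The projected configMap test combines two things: a ConfigMap mounted through a projected volume and a container that runs as a non-root UID. A minimal sketch of that combination, with the same API-version assumptions as the earlier snippets and illustrative names, image, UID, and mount path:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

// projectedConfigMapPod sketches a pod that mounts a ConfigMap via a
// projected volume and runs its container as a non-root user.
func projectedConfigMapPod(namespace, configMapName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-projected-configmaps-", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1000), // non-root UID, illustrative value
			},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "docker.io/library/busybox:1.29", // illustrative image
				Command: []string{"sh", "-c", "cat /etc/projected-configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
							},
						}},
					},
				},
			}},
		},
	}
}

func main() {
	pod := projectedConfigMapPod("e2e-tests-projected-9p87w", "projected-configmap-test-volume")
	fmt.Println(pod.Spec.Volumes[0].Name)
}
```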
SSSS
------------------------------
[k8s.io] [sig-node] PreStop
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 3 10:49:33.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-znlgk
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-znlgk
STEP: Deleting pre-stop pod
May 3 10:49:47.393: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 3 10:49:47.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-znlgk" for this suite.
May 3 10:50:27.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 3 10:50:27.477: INFO: namespace: e2e-tests-prestop-znlgk, resource: bindings, ignored listing per whitelist
May 3 10:50:27.482: INFO: namespace e2e-tests-prestop-znlgk deletion completed in 40.072491532s
• [SLOW TEST:53.527 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
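The {"prestop": 1} counter in the server's state above is incremented because deleting the tester pod triggers its preStop lifecycle hook before the container receives SIGTERM. A minimal sketch of the general shape of such a pod, with an illustrative exec handler and endpoint URL rather than the exact hook the e2e test registers:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// preStopPod sketches a pod whose preStop hook runs when the pod is deleted,
// before the kubelet sends SIGTERM to the container.
func preStopPod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "tester", Namespace: namespace},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "tester",
				Image: "docker.io/library/busybox:1.29", // illustrative image
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{ // LifecycleHandler in newer API versions
						Exec: &corev1.ExecAction{
							// Illustrative: notify a peer service before shutdown.
							Command: []string{"sh", "-c", "wget -qO- http://server:8080/prestop"},
						},
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Println(preStopPod("e2e-tests-prestop-znlgk").Spec.Containers[0].Lifecycle != nil)
}
```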
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 3 10:50:27.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 3 10:50:31.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-rn5sx" for this suite.
May 3 10:50:37.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 3 10:50:37.710: INFO: namespace: e2e-tests-kubelet-test-rn5sx, resource: bindings, ignored listing per whitelist
May 3 10:50:37.766: INFO: namespace e2e-tests-kubelet-test-rn5sx deletion completed in 6.097231681s
• [SLOW TEST:10.284 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected secret
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 3 10:50:37.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-ec7c38e7-8d2b-11ea-b78d-0242ac110017
STEP: Creating a pod to test consume secrets
May 3 10:50:37.921: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ec7cbcf7-8d2b-11ea-b78d-0242ac110017" in namespace "e2e-tests-projected-dtjtj" to be "success or failure"
May 3 10:50:37.933: INFO: Pod "pod-projected-secrets-ec7cbcf7-8d2b-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 11.426342ms
May 3 10:50:39.937: INFO: Pod "pod-projected-secrets-ec7cbcf7-8d2b-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015751641s
May 3 10:50:41.942: INFO: Pod "pod-projected-secrets-ec7cbcf7-8d2b-11ea-b78d-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.02020568s
May 3 10:50:43.946: INFO: Pod "pod-projected-secrets-ec7cbcf7-8d2b-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.024612209s
STEP: Saw pod success
May 3 10:50:43.946: INFO: Pod "pod-projected-secrets-ec7cbcf7-8d2b-11ea-b78d-0242ac110017" satisfied condition "success or failure"
May 3 10:50:43.950: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-ec7cbcf7-8d2b-11ea-b78d-0242ac110017 container secret-volume-test:
STEP: delete the pod
May 3 10:50:44.003: INFO: Waiting for pod pod-projected-secrets-ec7cbcf7-8d2b-11ea-b78d-0242ac110017 to disappear
May 3 10:50:44.008: INFO: Pod pod-projected-secrets-ec7cbcf7-8d2b-11ea-b78d-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 3 10:50:44.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dtjtj" for this suite.
May 3 10:50:50.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 3 10:50:50.077: INFO: namespace: e2e-tests-projected-dtjtj, resource: bindings, ignored listing per whitelist
May 3 10:50:50.132: INFO: namespace e2e-tests-projected-dtjtj deletion completed in 6.120358102s
• [SLOW TEST:12.366 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 3 10:50:50.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
May 3 10:50:50.232: INFO: PodSpec: initContainers in spec.initContainers
May 3 10:51:39.751: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-f3d795e3-8d2b-11ea-b78d-0242ac110017", GenerateName:"", Namespace:"e2e-tests-init-container-dcfpz", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-dcfpz/pods/pod-init-f3d795e3-8d2b-11ea-b78d-0242ac110017", UID:"f3df521f-8d2b-11ea-99e8-0242ac110002", ResourceVersion:"8517162", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724099850, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"232373710"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-jkwrw", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001729140), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil),
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jkwrw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jkwrw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jkwrw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001989708), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000d69c20), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001989790)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0019897b0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0019897b8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0019897bc)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724099850, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724099850, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724099850, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724099850, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"10.244.2.242", StartTime:(*v1.Time)(0xc0009a4c80), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001c2e7e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001c2e850)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://0bb606df33d625fc75bddfe728e3d4f39d1d6cad8769c660cfc256d010cd8f8b"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0009a4cc0), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0009a4ca0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 3 10:51:39.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-dcfpz" for this suite.
May 3 10:52:01.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 3 10:52:01.838: INFO: namespace: e2e-tests-init-container-dcfpz, resource: bindings, ignored listing per whitelist
May 3 10:52:01.879: INFO: namespace e2e-tests-init-container-dcfpz deletion completed in 22.086141724s
• [SLOW TEST:71.746 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
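The pod dumped above has a failing init container (init1 running /bin/false), so init2 and the app container run1 never start; with RestartPolicy Always the kubelet keeps retrying init1, which is why its RestartCount climbs while the pod stays Pending. A minimal sketch reconstructed from that dump (resource requests/limits and the service-account volume omitted), under the same API-version assumptions as the earlier snippets:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// initFailurePod sketches the pod shape this test creates: init1 always
// fails, so init2 and run1 never start, and RestartPolicy Always makes the
// kubelet retry init1 indefinitely.
func initFailurePod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-init-", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
}

func main() {
	fmt.Println(len(initFailurePod("e2e-tests-init-container-dcfpz").Spec.InitContainers))
}
```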
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default
  should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 3 10:52:01.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
May 3 10:52:01.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-h45fj'
May 3 10:52:02.093: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 3 10:52:02.093: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
May 3 10:52:04.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-h45fj'
May 3 10:52:04.366: INFO: stderr: ""
May 3 10:52:04.366: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 3 10:52:04.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-h45fj" for this suite.
May 3 10:52:26.613: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 3 10:52:26.667: INFO: namespace: e2e-tests-kubectl-h45fj, resource: bindings, ignored listing per whitelist
May 3 10:52:26.684: INFO: namespace e2e-tests-kubectl-h45fj deletion completed in 22.246258372s
• [SLOW TEST:24.804 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 3 10:52:26.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-2d628f12-8d2c-11ea-b78d-0242ac110017
STEP: Creating a pod to test consume secrets
May 3 10:52:26.791: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2d6425cb-8d2c-11ea-b78d-0242ac110017" in namespace "e2e-tests-projected-rfzbv" to be "success or failure"
May 3 10:52:26.795: INFO: Pod "pod-projected-secrets-2d6425cb-8d2c-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.363785ms
May 3 10:52:28.799: INFO: Pod "pod-projected-secrets-2d6425cb-8d2c-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007396369s
May 3 10:52:31.099: INFO: Pod "pod-projected-secrets-2d6425cb-8d2c-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.307441888s
STEP: Saw pod success
May 3 10:52:31.099: INFO: Pod "pod-projected-secrets-2d6425cb-8d2c-11ea-b78d-0242ac110017" satisfied condition "success or failure"
May 3 10:52:31.102: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-2d6425cb-8d2c-11ea-b78d-0242ac110017 container projected-secret-volume-test:
STEP: delete the pod
May 3 10:52:31.250: INFO: Waiting for pod pod-projected-secrets-2d6425cb-8d2c-11ea-b78d-0242ac110017 to disappear
May 3 10:52:31.268: INFO: Pod pod-projected-secrets-2d6425cb-8d2c-11ea-b78d-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 3 10:52:31.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rfzbv" for this suite.
May 3 10:52:37.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 3 10:52:37.382: INFO: namespace: e2e-tests-projected-rfzbv, resource: bindings, ignored listing per whitelist
May 3 10:52:37.392: INFO: namespace e2e-tests-projected-rfzbv deletion completed in 6.121120504s
• [SLOW TEST:10.708 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 3 10:52:37.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-ql4fw
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-ql4fw
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-ql4fw
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-ql4fw
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-ql4fw
May 3 10:52:41.597: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-ql4fw, name: ss-0, uid: 362a2d40-8d2c-11ea-99e8-0242ac110002, status phase: Pending. Waiting for statefulset controller to delete.
May 3 10:52:51.247: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-ql4fw, name: ss-0, uid: 362a2d40-8d2c-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. May 3 10:52:51.268: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-ql4fw, name: ss-0, uid: 362a2d40-8d2c-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. May 3 10:52:51.292: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-ql4fw STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-ql4fw STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-ql4fw and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 3 10:52:55.451: INFO: Deleting all statefulset in ns e2e-tests-statefulset-ql4fw May 3 10:52:55.454: INFO: Scaling statefulset ss to 0 May 3 10:53:05.478: INFO: Waiting for statefulset status.replicas updated to 0 May 3 10:53:05.481: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 10:53:05.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-ql4fw" for this suite. May 3 10:53:11.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 10:53:11.540: INFO: namespace: e2e-tests-statefulset-ql4fw, resource: bindings, ignored listing per whitelist May 3 10:53:11.586: INFO: namespace e2e-tests-statefulset-ql4fw deletion completed in 6.081629408s • [SLOW TEST:34.194 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 10:53:11.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 3 10:53:11.717: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 10:53:17.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-n4cst" for this suite. May 3 10:53:23.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 10:53:23.981: INFO: namespace: e2e-tests-init-container-n4cst, resource: bindings, ignored listing per whitelist May 3 10:53:24.036: INFO: namespace e2e-tests-init-container-n4cst deletion completed in 6.103349876s • [SLOW TEST:12.449 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 10:53:24.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs May 3 10:53:24.153: INFO: Waiting up to 5m0s for pod "pod-4f935eb4-8d2c-11ea-b78d-0242ac110017" in namespace "e2e-tests-emptydir-4vx4j" to be "success or failure" May 3 10:53:24.156: INFO: Pod "pod-4f935eb4-8d2c-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.259298ms May 3 10:53:26.160: INFO: Pod "pod-4f935eb4-8d2c-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007720136s May 3 10:53:28.165: INFO: Pod "pod-4f935eb4-8d2c-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012415429s STEP: Saw pod success May 3 10:53:28.165: INFO: Pod "pod-4f935eb4-8d2c-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 10:53:28.168: INFO: Trying to get logs from node hunter-worker2 pod pod-4f935eb4-8d2c-11ea-b78d-0242ac110017 container test-container: STEP: delete the pod May 3 10:53:28.342: INFO: Waiting for pod pod-4f935eb4-8d2c-11ea-b78d-0242ac110017 to disappear May 3 10:53:28.366: INFO: Pod pod-4f935eb4-8d2c-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 10:53:28.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-4vx4j" for this suite. 
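
The emptydir case that just ran ("non-root,0644,tmpfs") comes down to a pod that mounts a memory-backed emptyDir volume and checks file ownership and mode from a non-root user. A minimal sketch of such a pod in Go using the k8s.io/api types; the busybox image, the UID, the mount path and the shell command are illustrative assumptions, while the container name "test-container" and the 0644/tmpfs intent come from the log above.

package main

import (
  "encoding/json"
  "fmt"

  corev1 "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
  // Memory-backed emptyDir mounted by a non-root container; the container
  // writes a file with mode 0644 and lists it so the test can inspect the logs.
  pod := corev1.Pod{
    ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-tmpfs"},
    Spec: corev1.PodSpec{
      RestartPolicy: corev1.RestartPolicyNever,
      SecurityContext: &corev1.PodSecurityContext{
        RunAsUser: int64Ptr(1001), // assumed non-root UID
      },
      Volumes: []corev1.Volume{{
        Name: "test-volume",
        VolumeSource: corev1.VolumeSource{
          EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
        },
      }},
      Containers: []corev1.Container{{
        Name:         "test-container",
        Image:        "busybox", // assumed image
        Command:      []string{"sh", "-c", "touch /ed/f && chmod 0644 /ed/f && ls -ln /ed/f"},
        VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/ed"}},
      }},
    },
  }
  out, _ := json.MarshalIndent(pod, "", "  ")
  fmt.Println(string(out))
}
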
May 3 10:53:34.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 10:53:34.453: INFO: namespace: e2e-tests-emptydir-4vx4j, resource: bindings, ignored listing per whitelist May 3 10:53:34.500: INFO: namespace e2e-tests-emptydir-4vx4j deletion completed in 6.130359886s • [SLOW TEST:10.464 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 10:53:34.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 10:53:34.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-n9f7x" for this suite. 
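
The Services check above only has to confirm that the cluster exposes the secure "kubernetes" master service. A rough client-go sketch of that assertion, reusing the /root/.kube/config path seen in this run; note that newer client-go releases add a context.Context argument to Get, and the ports printed here are whatever the cluster reports, not values asserted by this log.

package main

import (
  "fmt"
  "log"

  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  "k8s.io/client-go/kubernetes"
  "k8s.io/client-go/tools/clientcmd"
)

func main() {
  // Build a client from the same kubeconfig the suite points at.
  cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
  if err != nil {
    log.Fatal(err)
  }
  cs, err := kubernetes.NewForConfig(cfg)
  if err != nil {
    log.Fatal(err)
  }
  // The conformance check is essentially: the "kubernetes" service exists in
  // "default" and exposes the secure API port (443, named "https").
  svc, err := cs.CoreV1().Services("default").Get("kubernetes", metav1.GetOptions{})
  if err != nil {
    log.Fatal(err)
  }
  for _, p := range svc.Spec.Ports {
    fmt.Printf("port %s -> %d/%s\n", p.Name, p.Port, p.Protocol)
  }
}
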
May 3 10:53:40.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 10:53:40.729: INFO: namespace: e2e-tests-services-n9f7x, resource: bindings, ignored listing per whitelist May 3 10:53:40.740: INFO: namespace e2e-tests-services-n9f7x deletion completed in 6.106606459s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.240 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 10:53:40.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs May 3 10:53:40.866: INFO: Waiting up to 5m0s for pod "pod-598acf0d-8d2c-11ea-b78d-0242ac110017" in namespace "e2e-tests-emptydir-nzdzm" to be "success or failure" May 3 10:53:40.882: INFO: Pod "pod-598acf0d-8d2c-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 15.516828ms May 3 10:53:42.886: INFO: Pod "pod-598acf0d-8d2c-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019767723s May 3 10:53:44.890: INFO: Pod "pod-598acf0d-8d2c-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023423291s STEP: Saw pod success May 3 10:53:44.890: INFO: Pod "pod-598acf0d-8d2c-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 10:53:44.892: INFO: Trying to get logs from node hunter-worker pod pod-598acf0d-8d2c-11ea-b78d-0242ac110017 container test-container: STEP: delete the pod May 3 10:53:44.918: INFO: Waiting for pod pod-598acf0d-8d2c-11ea-b78d-0242ac110017 to disappear May 3 10:53:44.935: INFO: Pod pod-598acf0d-8d2c-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 10:53:44.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-nzdzm" for this suite. 
May 3 10:53:50.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 10:53:51.030: INFO: namespace: e2e-tests-emptydir-nzdzm, resource: bindings, ignored listing per whitelist May 3 10:53:51.036: INFO: namespace e2e-tests-emptydir-nzdzm deletion completed in 6.097831435s • [SLOW TEST:10.296 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 10:53:51.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 3 10:53:51.147: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5faacb0c-8d2c-11ea-b78d-0242ac110017" in namespace "e2e-tests-downward-api-ns8l9" to be "success or failure" May 3 10:53:51.203: INFO: Pod "downwardapi-volume-5faacb0c-8d2c-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 56.514306ms May 3 10:53:53.285: INFO: Pod "downwardapi-volume-5faacb0c-8d2c-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138814269s May 3 10:53:55.424: INFO: Pod "downwardapi-volume-5faacb0c-8d2c-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.277145609s STEP: Saw pod success May 3 10:53:55.424: INFO: Pod "downwardapi-volume-5faacb0c-8d2c-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 10:53:55.428: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-5faacb0c-8d2c-11ea-b78d-0242ac110017 container client-container: STEP: delete the pod May 3 10:53:55.452: INFO: Waiting for pod downwardapi-volume-5faacb0c-8d2c-11ea-b78d-0242ac110017 to disappear May 3 10:53:55.457: INFO: Pod downwardapi-volume-5faacb0c-8d2c-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 10:53:55.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-ns8l9" for this suite. 
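
The Downward API "DefaultMode" case above creates a downwardAPI volume whose projected files carry a non-default file mode. A sketch of that shape follows; the 0400 mode, the "podname" item, the image and the stat command are assumptions for illustration, while the container name "client-container" matches the log.

package main

import (
  "encoding/json"
  "fmt"

  corev1 "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
  // Downward API volume whose files get a non-default mode via DefaultMode.
  pod := corev1.Pod{
    ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-defaultmode"},
    Spec: corev1.PodSpec{
      RestartPolicy: corev1.RestartPolicyNever,
      Volumes: []corev1.Volume{{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
          DownwardAPI: &corev1.DownwardAPIVolumeSource{
            DefaultMode: int32Ptr(0400), // assumed mode
            Items: []corev1.DownwardAPIVolumeFile{{
              Path:     "podname",
              FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
            }},
          },
        },
      }},
      Containers: []corev1.Container{{
        Name:         "client-container",
        Image:        "busybox", // assumed image
        Command:      []string{"sh", "-c", "stat -c '%a' /etc/podinfo/podname"},
        VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
      }},
    },
  }
  out, _ := json.MarshalIndent(pod, "", "  ")
  fmt.Println(string(out))
}
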
May 3 10:54:01.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 10:54:01.548: INFO: namespace: e2e-tests-downward-api-ns8l9, resource: bindings, ignored listing per whitelist May 3 10:54:01.605: INFO: namespace e2e-tests-downward-api-ns8l9 deletion completed in 6.145003249s • [SLOW TEST:10.569 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 10:54:01.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 10:54:38.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-zwcxb" for this suite. 
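
The Container Runtime blackbox cases above start containers that exit and then compare the reported RestartCount, Phase, Ready condition and State against the pod's restart policy. A sketch of the kind of pod behind one of those cases (the OnFailure variant, "terminate-cmd-rpof"); image, command and exit behaviour are assumed.

package main

import (
  "encoding/json"
  "fmt"

  corev1 "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
  // A container that exits non-zero under restartPolicy OnFailure; the suite
  // then asserts on status.phase and on containerStatuses[0].restartCount,
  // .ready and .state after the kubelet restarts it.
  pod := corev1.Pod{
    ObjectMeta: metav1.ObjectMeta{Name: "terminate-cmd-rpof"},
    Spec: corev1.PodSpec{
      RestartPolicy: corev1.RestartPolicyOnFailure,
      Containers: []corev1.Container{{
        Name:    "terminate-cmd-rpof",
        Image:   "busybox", // assumed image
        Command: []string{"sh", "-c", "sleep 1; exit 1"},
      }},
    },
  }
  out, _ := json.MarshalIndent(pod, "", "  ")
  fmt.Println(string(out))
}
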
May 3 10:54:44.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 10:54:44.086: INFO: namespace: e2e-tests-container-runtime-zwcxb, resource: bindings, ignored listing per whitelist May 3 10:54:44.149: INFO: namespace e2e-tests-container-runtime-zwcxb deletion completed in 6.104034332s • [SLOW TEST:42.543 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 10:54:44.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 3 10:54:51.154: INFO: Successfully updated pod "pod-update-activedeadlineseconds-7f8332b8-8d2c-11ea-b78d-0242ac110017" May 3 10:54:51.154: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-7f8332b8-8d2c-11ea-b78d-0242ac110017" in namespace "e2e-tests-pods-r2lwx" to be "terminated due to deadline exceeded" May 3 10:54:51.175: INFO: Pod "pod-update-activedeadlineseconds-7f8332b8-8d2c-11ea-b78d-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 21.575155ms May 3 10:54:53.265: INFO: Pod "pod-update-activedeadlineseconds-7f8332b8-8d2c-11ea-b78d-0242ac110017": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.111165584s May 3 10:54:53.265: INFO: Pod "pod-update-activedeadlineseconds-7f8332b8-8d2c-11ea-b78d-0242ac110017" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 10:54:53.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-r2lwx" for this suite. 
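
The activeDeadlineSeconds case above updates a running pod's deadline and then waits for it to be killed with phase Failed and reason DeadlineExceeded. The sketch below sets the field at creation time instead of via an update, purely to keep the example short; the deadline value, container name, image and command are assumptions.

package main

import (
  "encoding/json"
  "fmt"

  corev1 "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
  // activeDeadlineSeconds bounds how long a pod may keep running; once the
  // deadline passes the kubelet kills the pod and its phase becomes Failed
  // with reason DeadlineExceeded, the condition the test waits for.
  pod := corev1.Pod{
    ObjectMeta: metav1.ObjectMeta{Name: "pod-update-activedeadlineseconds"},
    Spec: corev1.PodSpec{
      ActiveDeadlineSeconds: int64Ptr(5), // assumed deadline
      Containers: []corev1.Container{{
        Name:    "main",    // assumed container name
        Image:   "busybox", // assumed image
        Command: []string{"sh", "-c", "sleep 3600"},
      }},
    },
  }
  out, _ := json.MarshalIndent(pod, "", "  ")
  fmt.Println(string(out))
}
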
May 3 10:54:59.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 10:54:59.799: INFO: namespace: e2e-tests-pods-r2lwx, resource: bindings, ignored listing per whitelist May 3 10:54:59.870: INFO: namespace e2e-tests-pods-r2lwx deletion completed in 6.450892583s • [SLOW TEST:15.722 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 10:54:59.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-ksqs9/secret-test-88b817cc-8d2c-11ea-b78d-0242ac110017 STEP: Creating a pod to test consume secrets May 3 10:55:00.024: INFO: Waiting up to 5m0s for pod "pod-configmaps-88b99553-8d2c-11ea-b78d-0242ac110017" in namespace "e2e-tests-secrets-ksqs9" to be "success or failure" May 3 10:55:00.028: INFO: Pod "pod-configmaps-88b99553-8d2c-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.781642ms May 3 10:55:02.204: INFO: Pod "pod-configmaps-88b99553-8d2c-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.17995876s May 3 10:55:04.207: INFO: Pod "pod-configmaps-88b99553-8d2c-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.183071774s STEP: Saw pod success May 3 10:55:04.207: INFO: Pod "pod-configmaps-88b99553-8d2c-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 10:55:04.255: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-88b99553-8d2c-11ea-b78d-0242ac110017 container env-test: STEP: delete the pod May 3 10:55:04.305: INFO: Waiting for pod pod-configmaps-88b99553-8d2c-11ea-b78d-0242ac110017 to disappear May 3 10:55:04.339: INFO: Pod pod-configmaps-88b99553-8d2c-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 10:55:04.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-ksqs9" for this suite. 
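
The Secrets-via-environment case above wires a secret key into a container's environment and reads it back from the container logs. A sketch of the two objects involved; the key name, variable name, value and image are assumptions, while the container name "env-test" is taken from the log.

package main

import (
  "encoding/json"
  "fmt"

  corev1 "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
  // A secret plus a pod that surfaces one of its keys as an environment
  // variable; the container just prints its environment so the value can be
  // checked from the logs.
  secret := corev1.Secret{
    ObjectMeta: metav1.ObjectMeta{Name: "secret-test"},
    StringData: map[string]string{"data-1": "value-1"},
  }
  pod := corev1.Pod{
    ObjectMeta: metav1.ObjectMeta{Name: "pod-secret-env"},
    Spec: corev1.PodSpec{
      RestartPolicy: corev1.RestartPolicyNever,
      Containers: []corev1.Container{{
        Name:    "env-test",
        Image:   "busybox", // assumed image
        Command: []string{"sh", "-c", "env"},
        Env: []corev1.EnvVar{{
          Name: "SECRET_DATA",
          ValueFrom: &corev1.EnvVarSource{
            SecretKeyRef: &corev1.SecretKeySelector{
              LocalObjectReference: corev1.LocalObjectReference{Name: secret.Name},
              Key:                  "data-1",
            },
          },
        }},
      }},
    },
  }
  for _, obj := range []interface{}{secret, pod} {
    out, _ := json.MarshalIndent(obj, "", "  ")
    fmt.Println(string(out))
  }
}
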
May 3 10:55:10.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 10:55:10.439: INFO: namespace: e2e-tests-secrets-ksqs9, resource: bindings, ignored listing per whitelist May 3 10:55:10.454: INFO: namespace e2e-tests-secrets-ksqs9 deletion completed in 6.110456286s • [SLOW TEST:10.584 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 10:55:10.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-8f1e3ec8-8d2c-11ea-b78d-0242ac110017 STEP: Creating a pod to test consume secrets May 3 10:55:10.899: INFO: Waiting up to 5m0s for pod "pod-secrets-8f352b29-8d2c-11ea-b78d-0242ac110017" in namespace "e2e-tests-secrets-b22cm" to be "success or failure" May 3 10:55:11.145: INFO: Pod "pod-secrets-8f352b29-8d2c-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 246.243883ms May 3 10:55:13.263: INFO: Pod "pod-secrets-8f352b29-8d2c-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.36412694s May 3 10:55:15.267: INFO: Pod "pod-secrets-8f352b29-8d2c-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.367517292s STEP: Saw pod success May 3 10:55:15.267: INFO: Pod "pod-secrets-8f352b29-8d2c-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 10:55:15.269: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-8f352b29-8d2c-11ea-b78d-0242ac110017 container secret-volume-test: STEP: delete the pod May 3 10:55:15.311: INFO: Waiting for pod pod-secrets-8f352b29-8d2c-11ea-b78d-0242ac110017 to disappear May 3 10:55:15.326: INFO: Pod pod-secrets-8f352b29-8d2c-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 10:55:15.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-b22cm" for this suite. 
May 3 10:55:21.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 10:55:21.388: INFO: namespace: e2e-tests-secrets-b22cm, resource: bindings, ignored listing per whitelist May 3 10:55:21.420: INFO: namespace e2e-tests-secrets-b22cm deletion completed in 6.090206653s • [SLOW TEST:10.965 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 10:55:21.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 3 10:55:21.573: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 10:55:31.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-9hsdp" for this suite. 
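
The init container case above relies on the ordering guarantee that all init containers must exit successfully before the app container starts, and that with restartPolicy Always the pod then keeps running. A sketch of such a pod; the names, images and commands are illustrative rather than taken from the log.

package main

import (
  "encoding/json"
  "fmt"

  corev1 "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
  // Init containers run sequentially and must all exit 0 before the app
  // container starts; with restartPolicy Always the pod then stays Running.
  pod := corev1.Pod{
    ObjectMeta: metav1.ObjectMeta{Name: "pod-init-restartalways"},
    Spec: corev1.PodSpec{
      RestartPolicy: corev1.RestartPolicyAlways,
      InitContainers: []corev1.Container{
        {Name: "init1", Image: "busybox", Command: []string{"true"}},
        {Name: "init2", Image: "busybox", Command: []string{"true"}},
      },
      Containers: []corev1.Container{{
        Name:    "run1",
        Image:   "busybox",
        Command: []string{"sh", "-c", "sleep 3600"},
      }},
    },
  }
  out, _ := json.MarshalIndent(pod, "", "  ")
  fmt.Println(string(out))
}
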
May 3 10:55:55.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 10:55:55.889: INFO: namespace: e2e-tests-init-container-9hsdp, resource: bindings, ignored listing per whitelist May 3 10:55:55.910: INFO: namespace e2e-tests-init-container-9hsdp deletion completed in 24.114667242s • [SLOW TEST:34.490 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 10:55:55.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-aa192dff-8d2c-11ea-b78d-0242ac110017 STEP: Creating a pod to test consume configMaps May 3 10:55:56.026: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-aa19bc2f-8d2c-11ea-b78d-0242ac110017" in namespace "e2e-tests-projected-2f7mx" to be "success or failure" May 3 10:55:56.090: INFO: Pod "pod-projected-configmaps-aa19bc2f-8d2c-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 63.66955ms May 3 10:55:58.094: INFO: Pod "pod-projected-configmaps-aa19bc2f-8d2c-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068015536s May 3 10:56:00.108: INFO: Pod "pod-projected-configmaps-aa19bc2f-8d2c-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081558777s STEP: Saw pod success May 3 10:56:00.108: INFO: Pod "pod-projected-configmaps-aa19bc2f-8d2c-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 10:56:00.110: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-aa19bc2f-8d2c-11ea-b78d-0242ac110017 container projected-configmap-volume-test: STEP: delete the pod May 3 10:56:00.165: INFO: Waiting for pod pod-projected-configmaps-aa19bc2f-8d2c-11ea-b78d-0242ac110017 to disappear May 3 10:56:00.179: INFO: Pod pod-projected-configmaps-aa19bc2f-8d2c-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 10:56:00.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2f7mx" for this suite. 
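
The projected configMap case above mounts the configMap through a projected volume and applies a defaultMode to the resulting files. A sketch of that layout; the 0400 mode, mount path, image and command are assumptions, while the volume and container names echo the ones in the log.

package main

import (
  "encoding/json"
  "fmt"

  corev1 "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
  // A projected volume wrapping a configMap source, with DefaultMode applied
  // to the projected files.
  pod := corev1.Pod{
    ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmap"},
    Spec: corev1.PodSpec{
      RestartPolicy: corev1.RestartPolicyNever,
      Volumes: []corev1.Volume{{
        Name: "projected-configmap-volume",
        VolumeSource: corev1.VolumeSource{
          Projected: &corev1.ProjectedVolumeSource{
            DefaultMode: int32Ptr(0400), // assumed mode
            Sources: []corev1.VolumeProjection{{
              ConfigMap: &corev1.ConfigMapProjection{
                LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
              },
            }},
          },
        },
      }},
      Containers: []corev1.Container{{
        Name:         "projected-configmap-volume-test",
        Image:        "busybox", // assumed image
        Command:      []string{"sh", "-c", "ls -l /etc/projected && cat /etc/projected/*"},
        VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected"}},
      }},
    },
  }
  out, _ := json.MarshalIndent(pod, "", "  ")
  fmt.Println(string(out))
}
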
May 3 10:56:06.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 10:56:06.206: INFO: namespace: e2e-tests-projected-2f7mx, resource: bindings, ignored listing per whitelist May 3 10:56:06.270: INFO: namespace e2e-tests-projected-2f7mx deletion completed in 6.087242314s • [SLOW TEST:10.360 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 10:56:06.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-b04dff13-8d2c-11ea-b78d-0242ac110017 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-b04dff13-8d2c-11ea-b78d-0242ac110017 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 10:56:12.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-c52qk" for this suite. 
May 3 10:56:34.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 10:56:34.586: INFO: namespace: e2e-tests-projected-c52qk, resource: bindings, ignored listing per whitelist May 3 10:56:34.590: INFO: namespace e2e-tests-projected-c52qk deletion completed in 22.089770895s • [SLOW TEST:28.319 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 10:56:34.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 3 10:56:34.949: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c1455ca9-8d2c-11ea-b78d-0242ac110017" in namespace "e2e-tests-projected-r4spp" to be "success or failure" May 3 10:56:34.959: INFO: Pod "downwardapi-volume-c1455ca9-8d2c-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 9.184901ms May 3 10:56:36.995: INFO: Pod "downwardapi-volume-c1455ca9-8d2c-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045418367s May 3 10:56:38.998: INFO: Pod "downwardapi-volume-c1455ca9-8d2c-11ea-b78d-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.048866245s May 3 10:56:41.003: INFO: Pod "downwardapi-volume-c1455ca9-8d2c-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.053430771s STEP: Saw pod success May 3 10:56:41.003: INFO: Pod "downwardapi-volume-c1455ca9-8d2c-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 10:56:41.006: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-c1455ca9-8d2c-11ea-b78d-0242ac110017 container client-container: STEP: delete the pod May 3 10:56:41.056: INFO: Waiting for pod downwardapi-volume-c1455ca9-8d2c-11ea-b78d-0242ac110017 to disappear May 3 10:56:41.073: INFO: Pod downwardapi-volume-c1455ca9-8d2c-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 10:56:41.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-r4spp" for this suite. 
May 3 10:56:47.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 10:56:47.202: INFO: namespace: e2e-tests-projected-r4spp, resource: bindings, ignored listing per whitelist May 3 10:56:47.207: INFO: namespace e2e-tests-projected-r4spp deletion completed in 6.103439048s • [SLOW TEST:12.616 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 10:56:47.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs May 3 10:56:47.327: INFO: Waiting up to 5m0s for pod "pod-c8aee097-8d2c-11ea-b78d-0242ac110017" in namespace "e2e-tests-emptydir-lhwcz" to be "success or failure" May 3 10:56:47.343: INFO: Pod "pod-c8aee097-8d2c-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 16.288968ms May 3 10:56:49.396: INFO: Pod "pod-c8aee097-8d2c-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069409313s May 3 10:56:51.399: INFO: Pod "pod-c8aee097-8d2c-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072702761s STEP: Saw pod success May 3 10:56:51.399: INFO: Pod "pod-c8aee097-8d2c-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 10:56:51.401: INFO: Trying to get logs from node hunter-worker pod pod-c8aee097-8d2c-11ea-b78d-0242ac110017 container test-container: STEP: delete the pod May 3 10:56:51.419: INFO: Waiting for pod pod-c8aee097-8d2c-11ea-b78d-0242ac110017 to disappear May 3 10:56:51.423: INFO: Pod pod-c8aee097-8d2c-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 10:56:51.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-lhwcz" for this suite. 
May 3 10:56:57.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 10:56:57.502: INFO: namespace: e2e-tests-emptydir-lhwcz, resource: bindings, ignored listing per whitelist May 3 10:56:57.520: INFO: namespace e2e-tests-emptydir-lhwcz deletion completed in 6.09362401s • [SLOW TEST:10.313 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 10:56:57.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-j2nl7 May 3 10:57:01.688: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-j2nl7 STEP: checking the pod's current state and verifying that restartCount is present May 3 10:57:01.692: INFO: Initial restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:01:02.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-j2nl7" for this suite. 
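
The exec liveness probe case above expects the probe to keep succeeding, which is why the restart count observed over the four-minute window stays at 0. A sketch of a pod in that spirit; the image, probe timings and sleep duration are assumptions, and the Handler field shown here was renamed ProbeHandler in newer core/v1 releases.

package main

import (
  "encoding/json"
  "fmt"

  corev1 "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
  // The container keeps /tmp/health in place for the whole observation
  // window, so "cat /tmp/health" keeps succeeding and restartCount stays 0.
  pod := corev1.Pod{
    ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
    Spec: corev1.PodSpec{
      Containers: []corev1.Container{{
        Name:    "liveness",  // assumed container name
        Image:   "busybox",   // assumed image
        Command: []string{"sh", "-c", "touch /tmp/health; sleep 600"},
        LivenessProbe: &corev1.Probe{
          Handler: corev1.Handler{ // ProbeHandler in newer API versions
            Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
          },
          InitialDelaySeconds: 15,
          PeriodSeconds:       5,
          FailureThreshold:    1,
        },
      }},
    },
  }
  out, _ := json.MarshalIndent(pod, "", "  ")
  fmt.Println(string(out))
}
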
May 3 11:01:08.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:01:08.804: INFO: namespace: e2e-tests-container-probe-j2nl7, resource: bindings, ignored listing per whitelist May 3 11:01:08.838: INFO: namespace e2e-tests-container-probe-j2nl7 deletion completed in 6.091269295s • [SLOW TEST:251.318 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:01:08.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 3 11:01:08.976: INFO: Waiting up to 5m0s for pod "downwardapi-volume-64a1f2cb-8d2d-11ea-b78d-0242ac110017" in namespace "e2e-tests-projected-trtzn" to be "success or failure" May 3 11:01:08.980: INFO: Pod "downwardapi-volume-64a1f2cb-8d2d-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.737357ms May 3 11:01:10.984: INFO: Pod "downwardapi-volume-64a1f2cb-8d2d-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007606466s May 3 11:01:12.988: INFO: Pod "downwardapi-volume-64a1f2cb-8d2d-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011217003s STEP: Saw pod success May 3 11:01:12.988: INFO: Pod "downwardapi-volume-64a1f2cb-8d2d-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:01:12.991: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-64a1f2cb-8d2d-11ea-b78d-0242ac110017 container client-container: STEP: delete the pod May 3 11:01:13.144: INFO: Waiting for pod downwardapi-volume-64a1f2cb-8d2d-11ea-b78d-0242ac110017 to disappear May 3 11:01:13.178: INFO: Pod downwardapi-volume-64a1f2cb-8d2d-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:01:13.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-trtzn" for this suite. 
May 3 11:01:19.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:01:19.263: INFO: namespace: e2e-tests-projected-trtzn, resource: bindings, ignored listing per whitelist May 3 11:01:19.297: INFO: namespace e2e-tests-projected-trtzn deletion completed in 6.114697677s • [SLOW TEST:10.459 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:01:19.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-6ad884a2-8d2d-11ea-b78d-0242ac110017 STEP: Creating a pod to test consume secrets May 3 11:01:19.434: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6adf8670-8d2d-11ea-b78d-0242ac110017" in namespace "e2e-tests-projected-7x2zk" to be "success or failure" May 3 11:01:19.460: INFO: Pod "pod-projected-secrets-6adf8670-8d2d-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 26.076768ms May 3 11:01:21.464: INFO: Pod "pod-projected-secrets-6adf8670-8d2d-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029799514s May 3 11:01:23.467: INFO: Pod "pod-projected-secrets-6adf8670-8d2d-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033474479s STEP: Saw pod success May 3 11:01:23.467: INFO: Pod "pod-projected-secrets-6adf8670-8d2d-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:01:23.495: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-6adf8670-8d2d-11ea-b78d-0242ac110017 container projected-secret-volume-test: STEP: delete the pod May 3 11:01:23.558: INFO: Waiting for pod pod-projected-secrets-6adf8670-8d2d-11ea-b78d-0242ac110017 to disappear May 3 11:01:23.572: INFO: Pod pod-projected-secrets-6adf8670-8d2d-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:01:23.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-7x2zk" for this suite. 
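
The "with mappings" variant above differs from the plain projected secret cases in that the secret keys are remapped to explicit file paths via items. A sketch of that mapping; the key and path values, image and command are assumptions, while the container name comes from the log.

package main

import (
  "encoding/json"
  "fmt"

  corev1 "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
  // A projected secret source whose keys are remapped to new file paths
  // instead of being projected under their own names.
  pod := corev1.Pod{
    ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-map"},
    Spec: corev1.PodSpec{
      RestartPolicy: corev1.RestartPolicyNever,
      Volumes: []corev1.Volume{{
        Name: "projected-secret-volume",
        VolumeSource: corev1.VolumeSource{
          Projected: &corev1.ProjectedVolumeSource{
            Sources: []corev1.VolumeProjection{{
              Secret: &corev1.SecretProjection{
                LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
                Items: []corev1.KeyToPath{{
                  Key:  "data-1",          // assumed key
                  Path: "new-path-data-1", // assumed target path
                }},
              },
            }},
          },
        },
      }},
      Containers: []corev1.Container{{
        Name:         "projected-secret-volume-test",
        Image:        "busybox", // assumed image
        Command:      []string{"cat", "/etc/projected-secret-volume/new-path-data-1"},
        VolumeMounts: []corev1.VolumeMount{{Name: "projected-secret-volume", MountPath: "/etc/projected-secret-volume"}},
      }},
    },
  }
  out, _ := json.MarshalIndent(pod, "", "  ")
  fmt.Println(string(out))
}
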
May 3 11:01:29.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:01:29.603: INFO: namespace: e2e-tests-projected-7x2zk, resource: bindings, ignored listing per whitelist May 3 11:01:29.661: INFO: namespace e2e-tests-projected-7x2zk deletion completed in 6.085293171s • [SLOW TEST:10.364 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:01:29.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-71112759-8d2d-11ea-b78d-0242ac110017 STEP: Creating a pod to test consume secrets May 3 11:01:29.874: INFO: Waiting up to 5m0s for pod "pod-secrets-71126472-8d2d-11ea-b78d-0242ac110017" in namespace "e2e-tests-secrets-kvn69" to be "success or failure" May 3 11:01:29.889: INFO: Pod "pod-secrets-71126472-8d2d-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 14.787521ms May 3 11:01:31.893: INFO: Pod "pod-secrets-71126472-8d2d-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018923391s May 3 11:01:33.897: INFO: Pod "pod-secrets-71126472-8d2d-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023004147s STEP: Saw pod success May 3 11:01:33.897: INFO: Pod "pod-secrets-71126472-8d2d-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:01:33.900: INFO: Trying to get logs from node hunter-worker pod pod-secrets-71126472-8d2d-11ea-b78d-0242ac110017 container secret-volume-test: STEP: delete the pod May 3 11:01:33.922: INFO: Waiting for pod pod-secrets-71126472-8d2d-11ea-b78d-0242ac110017 to disappear May 3 11:01:33.955: INFO: Pod pod-secrets-71126472-8d2d-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:01:33.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-kvn69" for this suite. 
May 3 11:01:39.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:01:39.991: INFO: namespace: e2e-tests-secrets-kvn69, resource: bindings, ignored listing per whitelist May 3 11:01:40.059: INFO: namespace e2e-tests-secrets-kvn69 deletion completed in 6.099876946s • [SLOW TEST:10.397 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:01:40.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:01:40.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-7d26l" for this suite. 
May 3 11:01:46.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:01:46.416: INFO: namespace: e2e-tests-kubelet-test-7d26l, resource: bindings, ignored listing per whitelist May 3 11:01:46.433: INFO: namespace e2e-tests-kubelet-test-7d26l deletion completed in 6.132711023s • [SLOW TEST:6.374 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:01:46.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 3 11:01:46.620: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:01:46.622: INFO: Number of nodes with available pods: 0 May 3 11:01:46.623: INFO: Node hunter-worker is running more than one daemon pod May 3 11:01:47.628: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:01:47.632: INFO: Number of nodes with available pods: 0 May 3 11:01:47.632: INFO: Node hunter-worker is running more than one daemon pod May 3 11:01:48.628: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:01:48.632: INFO: Number of nodes with available pods: 0 May 3 11:01:48.632: INFO: Node hunter-worker is running more than one daemon pod May 3 11:01:49.755: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:01:49.759: INFO: Number of nodes with available pods: 0 May 3 11:01:49.759: INFO: Node hunter-worker is running more than one daemon pod May 3 11:01:50.643: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:01:50.647: INFO: Number of nodes with available pods: 1 May 3 11:01:50.647: INFO: Node hunter-worker2 is running more than one daemon pod May 3 11:01:51.627: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:01:51.631: INFO: Number of nodes with available pods: 2 May 3 11:01:51.631: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
May 3 11:01:51.646: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:01:51.649: INFO: Number of nodes with available pods: 1 May 3 11:01:51.649: INFO: Node hunter-worker is running more than one daemon pod May 3 11:01:52.654: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:01:52.656: INFO: Number of nodes with available pods: 1 May 3 11:01:52.656: INFO: Node hunter-worker is running more than one daemon pod May 3 11:01:53.655: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:01:53.659: INFO: Number of nodes with available pods: 1 May 3 11:01:53.659: INFO: Node hunter-worker is running more than one daemon pod May 3 11:01:54.655: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:01:54.659: INFO: Number of nodes with available pods: 1 May 3 11:01:54.659: INFO: Node hunter-worker is running more than one daemon pod May 3 11:01:55.654: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:01:55.658: INFO: Number of nodes with available pods: 1 May 3 11:01:55.658: INFO: Node hunter-worker is running more than one daemon pod May 3 11:01:56.654: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:01:56.657: INFO: Number of nodes with available pods: 1 May 3 11:01:56.657: INFO: Node hunter-worker is running more than one daemon pod May 3 11:01:57.654: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:01:57.676: INFO: Number of nodes with available pods: 1 May 3 11:01:57.676: INFO: Node hunter-worker is running more than one daemon pod May 3 11:01:58.652: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:01:58.655: INFO: Number of nodes with available pods: 1 May 3 11:01:58.655: INFO: Node hunter-worker is running more than one daemon pod May 3 11:01:59.683: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:01:59.687: INFO: Number of nodes with available pods: 1 May 3 11:01:59.687: INFO: Node hunter-worker is running more than one daemon pod May 3 11:02:00.737: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:02:00.741: INFO: Number of nodes with available pods: 1 May 3 11:02:00.741: INFO: Node hunter-worker is running more than one daemon pod May 3 11:02:01.749: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:02:01.752: INFO: Number of nodes with available pods: 1 May 3 11:02:01.752: INFO: Node hunter-worker is running more than one daemon pod May 3 11:02:02.689: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:02:02.691: INFO: Number of nodes with available pods: 1 May 3 11:02:02.691: INFO: Node hunter-worker is running more than one daemon pod May 3 11:02:03.659: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:02:03.716: INFO: Number of nodes with available pods: 1 May 3 11:02:03.716: INFO: Node hunter-worker is running more than one daemon pod May 3 11:02:04.654: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:02:04.658: INFO: Number of nodes with available pods: 2 May 3 11:02:04.658: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-v8wwx, will wait for the garbage collector to delete the pods May 3 11:02:04.720: INFO: Deleting DaemonSet.extensions daemon-set took: 6.6ms May 3 11:02:05.121: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.381103ms May 3 11:02:09.524: INFO: Number of nodes with available pods: 0 May 3 11:02:09.524: INFO: Number of running nodes: 0, number of available pods: 0 May 3 11:02:09.530: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-v8wwx/daemonsets","resourceVersion":"8519126"},"items":null} May 3 11:02:09.532: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-v8wwx/pods","resourceVersion":"8519126"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:02:09.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-v8wwx" for this suite. 
May 3 11:02:15.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:02:15.659: INFO: namespace: e2e-tests-daemonsets-v8wwx, resource: bindings, ignored listing per whitelist May 3 11:02:15.741: INFO: namespace e2e-tests-daemonsets-v8wwx deletion completed in 6.196917904s • [SLOW TEST:29.308 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:02:15.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token May 3 11:02:16.333: INFO: Waiting up to 5m0s for pod "pod-service-account-8cc965c2-8d2d-11ea-b78d-0242ac110017-l84xr" in namespace "e2e-tests-svcaccounts-vx2qm" to be "success or failure" May 3 11:02:16.335: INFO: Pod "pod-service-account-8cc965c2-8d2d-11ea-b78d-0242ac110017-l84xr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231302ms May 3 11:02:18.339: INFO: Pod "pod-service-account-8cc965c2-8d2d-11ea-b78d-0242ac110017-l84xr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006374853s May 3 11:02:20.343: INFO: Pod "pod-service-account-8cc965c2-8d2d-11ea-b78d-0242ac110017-l84xr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009982558s May 3 11:02:22.347: INFO: Pod "pod-service-account-8cc965c2-8d2d-11ea-b78d-0242ac110017-l84xr": Phase="Running", Reason="", readiness=false. Elapsed: 6.014131722s May 3 11:02:24.350: INFO: Pod "pod-service-account-8cc965c2-8d2d-11ea-b78d-0242ac110017-l84xr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.01710321s STEP: Saw pod success May 3 11:02:24.350: INFO: Pod "pod-service-account-8cc965c2-8d2d-11ea-b78d-0242ac110017-l84xr" satisfied condition "success or failure" May 3 11:02:24.352: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-8cc965c2-8d2d-11ea-b78d-0242ac110017-l84xr container token-test: STEP: delete the pod May 3 11:02:24.384: INFO: Waiting for pod pod-service-account-8cc965c2-8d2d-11ea-b78d-0242ac110017-l84xr to disappear May 3 11:02:24.401: INFO: Pod pod-service-account-8cc965c2-8d2d-11ea-b78d-0242ac110017-l84xr no longer exists STEP: Creating a pod to test consume service account root CA May 3 11:02:24.404: INFO: Waiting up to 5m0s for pod "pod-service-account-8cc965c2-8d2d-11ea-b78d-0242ac110017-msbhm" in namespace "e2e-tests-svcaccounts-vx2qm" to be "success or failure" May 3 11:02:24.473: INFO: Pod "pod-service-account-8cc965c2-8d2d-11ea-b78d-0242ac110017-msbhm": Phase="Pending", Reason="", readiness=false. 
Elapsed: 68.979913ms May 3 11:02:26.498: INFO: Pod "pod-service-account-8cc965c2-8d2d-11ea-b78d-0242ac110017-msbhm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093620116s May 3 11:02:28.502: INFO: Pod "pod-service-account-8cc965c2-8d2d-11ea-b78d-0242ac110017-msbhm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097622143s May 3 11:02:30.506: INFO: Pod "pod-service-account-8cc965c2-8d2d-11ea-b78d-0242ac110017-msbhm": Phase="Running", Reason="", readiness=false. Elapsed: 6.101545763s May 3 11:02:32.510: INFO: Pod "pod-service-account-8cc965c2-8d2d-11ea-b78d-0242ac110017-msbhm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.105743611s STEP: Saw pod success May 3 11:02:32.510: INFO: Pod "pod-service-account-8cc965c2-8d2d-11ea-b78d-0242ac110017-msbhm" satisfied condition "success or failure" May 3 11:02:32.513: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-8cc965c2-8d2d-11ea-b78d-0242ac110017-msbhm container root-ca-test: STEP: delete the pod May 3 11:02:32.554: INFO: Waiting for pod pod-service-account-8cc965c2-8d2d-11ea-b78d-0242ac110017-msbhm to disappear May 3 11:02:32.569: INFO: Pod pod-service-account-8cc965c2-8d2d-11ea-b78d-0242ac110017-msbhm no longer exists STEP: Creating a pod to test consume service account namespace May 3 11:02:32.572: INFO: Waiting up to 5m0s for pod "pod-service-account-8cc965c2-8d2d-11ea-b78d-0242ac110017-wsm9s" in namespace "e2e-tests-svcaccounts-vx2qm" to be "success or failure" May 3 11:02:32.575: INFO: Pod "pod-service-account-8cc965c2-8d2d-11ea-b78d-0242ac110017-wsm9s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.875593ms May 3 11:02:34.579: INFO: Pod "pod-service-account-8cc965c2-8d2d-11ea-b78d-0242ac110017-wsm9s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006983987s May 3 11:02:36.910: INFO: Pod "pod-service-account-8cc965c2-8d2d-11ea-b78d-0242ac110017-wsm9s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.338097997s May 3 11:02:38.915: INFO: Pod "pod-service-account-8cc965c2-8d2d-11ea-b78d-0242ac110017-wsm9s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.342871207s May 3 11:02:40.920: INFO: Pod "pod-service-account-8cc965c2-8d2d-11ea-b78d-0242ac110017-wsm9s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.347712762s STEP: Saw pod success May 3 11:02:40.920: INFO: Pod "pod-service-account-8cc965c2-8d2d-11ea-b78d-0242ac110017-wsm9s" satisfied condition "success or failure" May 3 11:02:40.923: INFO: Trying to get logs from node hunter-worker pod pod-service-account-8cc965c2-8d2d-11ea-b78d-0242ac110017-wsm9s container namespace-test: STEP: delete the pod May 3 11:02:41.259: INFO: Waiting for pod pod-service-account-8cc965c2-8d2d-11ea-b78d-0242ac110017-wsm9s to disappear May 3 11:02:41.274: INFO: Pod pod-service-account-8cc965c2-8d2d-11ea-b78d-0242ac110017-wsm9s no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:02:41.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-vx2qm" for this suite. 
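Each of the three pods created above did little more than read one of the files that the service-account admission controller mounts at /var/run/secrets/kubernetes.io/serviceaccount inside the container. A stand-in for such a pod, sketched with the core/v1 Go types; busybox and the shell command are illustrative, the suite uses its own test image and compares the output against the API.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// tokenTestPod returns a pod that prints the token, root CA and namespace
// files mounted from the pod's service account and then exits. No volumes are
// declared: the default token secret is mounted automatically.
func tokenTestPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "svcaccount-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "token-test",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"cat /var/run/secrets/kubernetes.io/serviceaccount/token" +
						" /var/run/secrets/kubernetes.io/serviceaccount/ca.crt" +
						" /var/run/secrets/kubernetes.io/serviceaccount/namespace"},
			}},
		},
	}
}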
May 3 11:02:47.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:02:47.344: INFO: namespace: e2e-tests-svcaccounts-vx2qm, resource: bindings, ignored listing per whitelist May 3 11:02:47.402: INFO: namespace e2e-tests-svcaccounts-vx2qm deletion completed in 6.124369092s • [SLOW TEST:31.660 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:02:47.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 3 11:02:47.516: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 3 11:02:47.558: INFO: Waiting for terminating namespaces to be deleted... May 3 11:02:47.560: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 3 11:02:47.565: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 3 11:02:47.565: INFO: Container kube-proxy ready: true, restart count 0 May 3 11:02:47.565: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 3 11:02:47.565: INFO: Container kindnet-cni ready: true, restart count 0 May 3 11:02:47.565: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 3 11:02:47.565: INFO: Container coredns ready: true, restart count 0 May 3 11:02:47.565: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 3 11:02:47.570: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 3 11:02:47.570: INFO: Container kindnet-cni ready: true, restart count 0 May 3 11:02:47.570: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 3 11:02:47.570: INFO: Container coredns ready: true, restart count 0 May 3 11:02:47.570: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 3 11:02:47.570: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
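The step that follows relaunches the pod with a nodeSelector matching the label that was just applied, so the scheduler may only place it on the labelled node. Roughly, that pod spec looks like the sketch below; the label key and value stand in for the random kubernetes.io/e2e-... label in the log, and the image is arbitrary.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nodeSelectorPod pins a pod to whichever node carries the given label, e.g.
// a node labelled beforehand with: kubectl label node hunter-worker <key>=<value>
func nodeSelectorPod(labelKey, labelValue string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{labelKey: labelValue},
			Containers: []corev1.Container{{
				Name:  "with-labels",
				Image: "k8s.gcr.io/pause:3.1", // the pod only needs to schedule, not do work
			}},
		},
	}
}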
STEP: verifying the node has the label kubernetes.io/e2e-a1d80d6e-8d2d-11ea-b78d-0242ac110017 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-a1d80d6e-8d2d-11ea-b78d-0242ac110017 off the node hunter-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-a1d80d6e-8d2d-11ea-b78d-0242ac110017 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:02:55.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-fzzvl" for this suite. May 3 11:03:15.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:03:15.803: INFO: namespace: e2e-tests-sched-pred-fzzvl, resource: bindings, ignored listing per whitelist May 3 11:03:15.820: INFO: namespace e2e-tests-sched-pred-fzzvl deletion completed in 20.080491129s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:28.418 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:03:15.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 3 11:03:15.960: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b053bf1c-8d2d-11ea-b78d-0242ac110017" in namespace "e2e-tests-projected-42s87" to be "success or failure" May 3 11:03:15.982: INFO: Pod "downwardapi-volume-b053bf1c-8d2d-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 21.230881ms May 3 11:03:17.986: INFO: Pod "downwardapi-volume-b053bf1c-8d2d-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025280572s May 3 11:03:20.066: INFO: Pod "downwardapi-volume-b053bf1c-8d2d-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.105722461s STEP: Saw pod success May 3 11:03:20.066: INFO: Pod "downwardapi-volume-b053bf1c-8d2d-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:03:20.069: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-b053bf1c-8d2d-11ea-b78d-0242ac110017 container client-container: STEP: delete the pod May 3 11:03:20.086: INFO: Waiting for pod downwardapi-volume-b053bf1c-8d2d-11ea-b78d-0242ac110017 to disappear May 3 11:03:20.091: INFO: Pod downwardapi-volume-b053bf1c-8d2d-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:03:20.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-42s87" for this suite. May 3 11:03:26.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:03:26.179: INFO: namespace: e2e-tests-projected-42s87, resource: bindings, ignored listing per whitelist May 3 11:03:26.186: INFO: namespace e2e-tests-projected-42s87 deletion completed in 6.09153451s • [SLOW TEST:10.365 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:03:26.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-b6817962-8d2d-11ea-b78d-0242ac110017 STEP: Creating a pod to test consume configMaps May 3 11:03:26.347: INFO: Waiting up to 5m0s for pod "pod-configmaps-b6823b72-8d2d-11ea-b78d-0242ac110017" in namespace "e2e-tests-configmap-6x69x" to be "success or failure" May 3 11:03:26.355: INFO: Pod "pod-configmaps-b6823b72-8d2d-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 7.211108ms May 3 11:03:28.359: INFO: Pod "pod-configmaps-b6823b72-8d2d-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011079406s May 3 11:03:30.432: INFO: Pod "pod-configmaps-b6823b72-8d2d-11ea-b78d-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.084174286s May 3 11:03:32.436: INFO: Pod "pod-configmaps-b6823b72-8d2d-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.088549748s STEP: Saw pod success May 3 11:03:32.436: INFO: Pod "pod-configmaps-b6823b72-8d2d-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:03:32.442: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-b6823b72-8d2d-11ea-b78d-0242ac110017 container configmap-volume-test: STEP: delete the pod May 3 11:03:32.542: INFO: Waiting for pod pod-configmaps-b6823b72-8d2d-11ea-b78d-0242ac110017 to disappear May 3 11:03:32.563: INFO: Pod pod-configmaps-b6823b72-8d2d-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:03:32.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-6x69x" for this suite. May 3 11:03:38.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:03:39.218: INFO: namespace: e2e-tests-configmap-6x69x, resource: bindings, ignored listing per whitelist May 3 11:03:39.227: INFO: namespace e2e-tests-configmap-6x69x deletion completed in 6.658639299s • [SLOW TEST:13.041 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:03:39.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 3 11:03:39.666: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-q4m5x,SelfLink:/api/v1/namespaces/e2e-tests-watch-q4m5x/configmaps/e2e-watch-test-configmap-a,UID:be757fea-8d2d-11ea-99e8-0242ac110002,ResourceVersion:8519509,Generation:0,CreationTimestamp:2020-05-03 11:03:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 3 11:03:39.666: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-q4m5x,SelfLink:/api/v1/namespaces/e2e-tests-watch-q4m5x/configmaps/e2e-watch-test-configmap-a,UID:be757fea-8d2d-11ea-99e8-0242ac110002,ResourceVersion:8519509,Generation:0,CreationTimestamp:2020-05-03 11:03:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 3 11:03:49.674: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-q4m5x,SelfLink:/api/v1/namespaces/e2e-tests-watch-q4m5x/configmaps/e2e-watch-test-configmap-a,UID:be757fea-8d2d-11ea-99e8-0242ac110002,ResourceVersion:8519529,Generation:0,CreationTimestamp:2020-05-03 11:03:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 3 11:03:49.674: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-q4m5x,SelfLink:/api/v1/namespaces/e2e-tests-watch-q4m5x/configmaps/e2e-watch-test-configmap-a,UID:be757fea-8d2d-11ea-99e8-0242ac110002,ResourceVersion:8519529,Generation:0,CreationTimestamp:2020-05-03 11:03:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 3 11:03:59.683: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-q4m5x,SelfLink:/api/v1/namespaces/e2e-tests-watch-q4m5x/configmaps/e2e-watch-test-configmap-a,UID:be757fea-8d2d-11ea-99e8-0242ac110002,ResourceVersion:8519549,Generation:0,CreationTimestamp:2020-05-03 11:03:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 3 11:03:59.683: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-q4m5x,SelfLink:/api/v1/namespaces/e2e-tests-watch-q4m5x/configmaps/e2e-watch-test-configmap-a,UID:be757fea-8d2d-11ea-99e8-0242ac110002,ResourceVersion:8519549,Generation:0,CreationTimestamp:2020-05-03 11:03:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 3 11:04:09.690: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-q4m5x,SelfLink:/api/v1/namespaces/e2e-tests-watch-q4m5x/configmaps/e2e-watch-test-configmap-a,UID:be757fea-8d2d-11ea-99e8-0242ac110002,ResourceVersion:8519569,Generation:0,CreationTimestamp:2020-05-03 11:03:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 3 11:04:09.690: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-q4m5x,SelfLink:/api/v1/namespaces/e2e-tests-watch-q4m5x/configmaps/e2e-watch-test-configmap-a,UID:be757fea-8d2d-11ea-99e8-0242ac110002,ResourceVersion:8519569,Generation:0,CreationTimestamp:2020-05-03 11:03:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 3 11:04:19.698: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-q4m5x,SelfLink:/api/v1/namespaces/e2e-tests-watch-q4m5x/configmaps/e2e-watch-test-configmap-b,UID:d6516844-8d2d-11ea-99e8-0242ac110002,ResourceVersion:8519589,Generation:0,CreationTimestamp:2020-05-03 11:04:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 3 11:04:19.698: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-q4m5x,SelfLink:/api/v1/namespaces/e2e-tests-watch-q4m5x/configmaps/e2e-watch-test-configmap-b,UID:d6516844-8d2d-11ea-99e8-0242ac110002,ResourceVersion:8519589,Generation:0,CreationTimestamp:2020-05-03 11:04:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 3 11:04:29.705: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-q4m5x,SelfLink:/api/v1/namespaces/e2e-tests-watch-q4m5x/configmaps/e2e-watch-test-configmap-b,UID:d6516844-8d2d-11ea-99e8-0242ac110002,ResourceVersion:8519609,Generation:0,CreationTimestamp:2020-05-03 11:04:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 3 11:04:29.705: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-q4m5x,SelfLink:/api/v1/namespaces/e2e-tests-watch-q4m5x/configmaps/e2e-watch-test-configmap-b,UID:d6516844-8d2d-11ea-99e8-0242ac110002,ResourceVersion:8519609,Generation:0,CreationTimestamp:2020-05-03 11:04:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:04:39.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-q4m5x" for this suite. May 3 11:04:45.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:04:45.805: INFO: namespace: e2e-tests-watch-q4m5x, resource: bindings, ignored listing per whitelist May 3 11:04:45.818: INFO: namespace e2e-tests-watch-q4m5x deletion completed in 6.106810047s • [SLOW TEST:66.591 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:04:45.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
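The "pod with lifecycle hook" created in the next step is essentially a long-running container that declares a preStop exec handler calling back to the HTTP handler pod set up just above, so the suite can verify the hook ran before the container stopped. A minimal sketch with the v1.13 core/v1 types; the handler address and the wget command are stand-ins for the suite's actual hook command.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// preStopPod returns a pod whose container executes a preStop hook when the
// pod is deleted; handlerURL stands in for the handler pod's address.
func preStopPod(handlerURL string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-prestop-exec-hook",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					// corev1.Handler was renamed LifecycleHandler in later API versions.
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "wget -qO- " + handlerURL},
						},
					},
				},
			}},
		},
	}
}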
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 3 11:04:54.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 3 11:04:54.074: INFO: Pod pod-with-prestop-exec-hook still exists May 3 11:04:56.074: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 3 11:04:56.078: INFO: Pod pod-with-prestop-exec-hook still exists May 3 11:04:58.074: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 3 11:04:58.078: INFO: Pod pod-with-prestop-exec-hook still exists May 3 11:05:00.074: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 3 11:05:00.077: INFO: Pod pod-with-prestop-exec-hook still exists May 3 11:05:02.074: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 3 11:05:02.078: INFO: Pod pod-with-prestop-exec-hook still exists May 3 11:05:04.074: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 3 11:05:04.230: INFO: Pod pod-with-prestop-exec-hook still exists May 3 11:05:06.074: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 3 11:05:06.152: INFO: Pod pod-with-prestop-exec-hook still exists May 3 11:05:08.074: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 3 11:05:08.092: INFO: Pod pod-with-prestop-exec-hook still exists May 3 11:05:10.074: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 3 11:05:10.079: INFO: Pod pod-with-prestop-exec-hook still exists May 3 11:05:12.074: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 3 11:05:12.079: INFO: Pod pod-with-prestop-exec-hook still exists May 3 11:05:14.074: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 3 11:05:14.086: INFO: Pod pod-with-prestop-exec-hook still exists May 3 11:05:16.074: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 3 11:05:16.082: INFO: Pod pod-with-prestop-exec-hook still exists May 3 11:05:18.074: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 3 11:05:18.078: INFO: Pod pod-with-prestop-exec-hook still exists May 3 11:05:20.074: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 3 11:05:20.077: INFO: Pod pod-with-prestop-exec-hook still exists May 3 11:05:22.074: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 3 11:05:22.076: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:05:22.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-vxfmr" for this suite. 
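The long run of "Waiting for pod pod-with-prestop-exec-hook to disappear" lines above is a plain poll loop: after the delete call the pod lingers while the preStop hook and the termination grace period play out, and the test only proceeds once a Get returns NotFound. A rough equivalent using the apimachinery wait helpers; the 2s interval matches the cadence in the log, the timeout is illustrative.

package sketch

import (
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodGone polls until the pod can no longer be fetched, mirroring the
// "Waiting for pod ... to disappear" loop in the log above.
func waitForPodGone(client kubernetes.Interface, namespace, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		_, err := client.CoreV1().Pods(namespace).Get(name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // pod is gone
		}
		if err != nil {
			return false, err // unexpected API error, stop polling
		}
		return false, nil // pod still exists, keep polling
	})
}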
May 3 11:05:46.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:05:46.109: INFO: namespace: e2e-tests-container-lifecycle-hook-vxfmr, resource: bindings, ignored listing per whitelist May 3 11:05:46.169: INFO: namespace e2e-tests-container-lifecycle-hook-vxfmr deletion completed in 24.082138586s • [SLOW TEST:60.350 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:05:46.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 3 11:05:46.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-6nxnb' May 3 11:05:49.003: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 3 11:05:49.003: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created May 3 11:05:49.040: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 May 3 11:05:49.320: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 3 11:05:49.511: INFO: scanned /root for discovery docs: May 3 11:05:49.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-6nxnb' May 3 11:06:06.907: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 3 11:06:06.907: INFO: stdout: "Created e2e-test-nginx-rc-22966c2065722e7b507c05a24c0ba679\nScaling up e2e-test-nginx-rc-22966c2065722e7b507c05a24c0ba679 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-22966c2065722e7b507c05a24c0ba679 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-22966c2065722e7b507c05a24c0ba679 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" May 3 11:06:06.907: INFO: stdout: "Created e2e-test-nginx-rc-22966c2065722e7b507c05a24c0ba679\nScaling up e2e-test-nginx-rc-22966c2065722e7b507c05a24c0ba679 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-22966c2065722e7b507c05a24c0ba679 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-22966c2065722e7b507c05a24c0ba679 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. May 3 11:06:06.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-6nxnb' May 3 11:06:07.016: INFO: stderr: "" May 3 11:06:07.016: INFO: stdout: "e2e-test-nginx-rc-22966c2065722e7b507c05a24c0ba679-rc75j " May 3 11:06:07.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-22966c2065722e7b507c05a24c0ba679-rc75j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6nxnb' May 3 11:06:07.252: INFO: stderr: "" May 3 11:06:07.252: INFO: stdout: "true" May 3 11:06:07.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-22966c2065722e7b507c05a24c0ba679-rc75j -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6nxnb' May 3 11:06:07.344: INFO: stderr: "" May 3 11:06:07.344: INFO: stdout: "docker.io/library/nginx:1.14-alpine" May 3 11:06:07.344: INFO: e2e-test-nginx-rc-22966c2065722e7b507c05a24c0ba679-rc75j is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 May 3 11:06:07.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-6nxnb' May 3 11:06:07.463: INFO: stderr: "" May 3 11:06:07.463: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:06:07.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-6nxnb" for this suite. May 3 11:06:13.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:06:13.611: INFO: namespace: e2e-tests-kubectl-6nxnb, resource: bindings, ignored listing per whitelist May 3 11:06:13.658: INFO: namespace e2e-tests-kubectl-6nxnb deletion completed in 6.191503813s • [SLOW TEST:27.489 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:06:13.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 3 11:06:13.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' May 3 11:06:13.858: INFO: stderr: "" May 3 11:06:13.858: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" May 3 11:06:13.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-jr648' May 3 11:06:14.140: INFO: stderr: "" May 3 11:06:14.140: INFO: stdout: "replicationcontroller/redis-master created\n" May 3 
11:06:14.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-jr648' May 3 11:06:14.424: INFO: stderr: "" May 3 11:06:14.424: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. May 3 11:06:15.428: INFO: Selector matched 1 pods for map[app:redis] May 3 11:06:15.428: INFO: Found 0 / 1 May 3 11:06:16.635: INFO: Selector matched 1 pods for map[app:redis] May 3 11:06:16.635: INFO: Found 0 / 1 May 3 11:06:17.427: INFO: Selector matched 1 pods for map[app:redis] May 3 11:06:17.427: INFO: Found 0 / 1 May 3 11:06:18.428: INFO: Selector matched 1 pods for map[app:redis] May 3 11:06:18.428: INFO: Found 0 / 1 May 3 11:06:19.428: INFO: Selector matched 1 pods for map[app:redis] May 3 11:06:19.428: INFO: Found 0 / 1 May 3 11:06:20.427: INFO: Selector matched 1 pods for map[app:redis] May 3 11:06:20.427: INFO: Found 1 / 1 May 3 11:06:20.427: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 3 11:06:20.431: INFO: Selector matched 1 pods for map[app:redis] May 3 11:06:20.431: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 3 11:06:20.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-5vwt6 --namespace=e2e-tests-kubectl-jr648' May 3 11:06:20.546: INFO: stderr: "" May 3 11:06:20.546: INFO: stdout: "Name: redis-master-5vwt6\nNamespace: e2e-tests-kubectl-jr648\nPriority: 0\nPriorityClassName: \nNode: hunter-worker2/172.17.0.4\nStart Time: Sun, 03 May 2020 11:06:14 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.5\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://423812ccf264b4d878c5d49be7ffae950f7bef6941a1a2770cac9bbdec8bfaa2\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sun, 03 May 2020 11:06:18 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-7k8r9 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-7k8r9:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-7k8r9\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 6s default-scheduler Successfully assigned e2e-tests-kubectl-jr648/redis-master-5vwt6 to hunter-worker2\n Normal Pulled 4s kubelet, hunter-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, hunter-worker2 Created container\n Normal Started 2s kubelet, hunter-worker2 Started container\n" May 3 11:06:20.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-jr648' May 3 11:06:20.679: INFO: stderr: "" May 3 11:06:20.679: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-jr648\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod 
Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 6s replication-controller Created pod: redis-master-5vwt6\n" May 3 11:06:20.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-jr648' May 3 11:06:20.792: INFO: stderr: "" May 3 11:06:20.792: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-jr648\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.105.111.91\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.5:6379\nSession Affinity: None\nEvents: \n" May 3 11:06:20.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' May 3 11:06:20.927: INFO: stderr: "" May 3 11:06:20.927: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:22:50 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sun, 03 May 2020 11:06:11 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sun, 03 May 2020 11:06:11 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sun, 03 May 2020 11:06:11 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sun, 03 May 2020 11:06:11 +0000 Sun, 15 Mar 2020 18:23:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3c4716968dac483293a23c2100ad64a5\n System UUID: 683417f7-64ca-431d-b8ac-22e73b26255e\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 48d\n kube-system kindnet-l2xm6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 48d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 48d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 48d\n kube-system kube-proxy-mmppc 0 (0%) 0 
(0%) 0 (0%) 0 (0%) 48d\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 48d\n local-path-storage local-path-provisioner-77cfdd744c-q47vg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 48d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 3 11:06:20.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-jr648' May 3 11:06:21.039: INFO: stderr: "" May 3 11:06:21.039: INFO: stdout: "Name: e2e-tests-kubectl-jr648\nLabels: e2e-framework=kubectl\n e2e-run=60a214d1-8d2b-11ea-b78d-0242ac110017\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:06:21.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-jr648" for this suite. May 3 11:06:45.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:06:45.247: INFO: namespace: e2e-tests-kubectl-jr648, resource: bindings, ignored listing per whitelist May 3 11:06:45.304: INFO: namespace e2e-tests-kubectl-jr648 deletion completed in 24.260819565s • [SLOW TEST:31.645 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:06:45.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 3 11:06:45.479: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2d2df76e-8d2e-11ea-b78d-0242ac110017" in namespace "e2e-tests-projected-pxlpp" to be "success or failure" May 3 11:06:45.492: INFO: Pod "downwardapi-volume-2d2df76e-8d2e-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 13.136144ms May 3 11:06:47.531: INFO: Pod "downwardapi-volume-2d2df76e-8d2e-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052082306s May 3 11:06:49.535: INFO: Pod "downwardapi-volume-2d2df76e-8d2e-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.056586646s May 3 11:06:51.765: INFO: Pod "downwardapi-volume-2d2df76e-8d2e-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 6.285966534s May 3 11:06:54.215: INFO: Pod "downwardapi-volume-2d2df76e-8d2e-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 8.736105401s May 3 11:06:56.507: INFO: Pod "downwardapi-volume-2d2df76e-8d2e-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.028687402s STEP: Saw pod success May 3 11:06:56.507: INFO: Pod "downwardapi-volume-2d2df76e-8d2e-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:06:56.510: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-2d2df76e-8d2e-11ea-b78d-0242ac110017 container client-container: STEP: delete the pod May 3 11:06:56.847: INFO: Waiting for pod downwardapi-volume-2d2df76e-8d2e-11ea-b78d-0242ac110017 to disappear May 3 11:06:57.237: INFO: Pod downwardapi-volume-2d2df76e-8d2e-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:06:57.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-pxlpp" for this suite. May 3 11:07:04.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:07:04.136: INFO: namespace: e2e-tests-projected-pxlpp, resource: bindings, ignored listing per whitelist May 3 11:07:04.141: INFO: namespace e2e-tests-projected-pxlpp deletion completed in 6.900433749s • [SLOW TEST:18.837 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:07:04.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0503 11:07:16.375007 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
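For context on the garbage-collector steps above: the check hinges on the dependent pods carrying two ownerReferences, so deleting rc simpletest-rc-to-be-deleted (even when that owner blocks waiting for its dependents) must not remove pods that are still owned by simpletest-rc-to-stay. A minimal sketch of what such a dependent pod's metadata might look like; the RC names come from the log, while the pod name suffix, UIDs, and blockOwnerDeletion flag are placeholders/assumptions, not values read from this run:

apiVersion: v1
kind: Pod
metadata:
  name: simpletest-rc-to-be-deleted-xxxxx        # hypothetical pod name
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted            # owner that gets deleted and waits on dependents
    uid: 00000000-0000-0000-0000-000000000001    # placeholder UID
    controller: true
    blockOwnerDeletion: true                     # assumed; set by controllers so foreground deletion waits
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay                  # second, still-valid owner added by the test
    uid: 00000000-0000-0000-0000-000000000002    # placeholder UID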
May 3 11:07:16.375: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:07:16.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-zvchx" for this suite. May 3 11:07:24.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:07:24.615: INFO: namespace: e2e-tests-gc-zvchx, resource: bindings, ignored listing per whitelist May 3 11:07:24.666: INFO: namespace e2e-tests-gc-zvchx deletion completed in 8.287529544s • [SLOW TEST:20.525 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:07:24.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 3 11:07:25.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-v4s4m' May 3 11:07:25.396: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 3 11:07:25.396: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc May 3 11:07:25.670: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-5mv2j] May 3 11:07:25.670: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-5mv2j" in namespace "e2e-tests-kubectl-v4s4m" to be "running and ready" May 3 11:07:25.672: INFO: Pod "e2e-test-nginx-rc-5mv2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018726ms May 3 11:07:27.676: INFO: Pod "e2e-test-nginx-rc-5mv2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006263412s May 3 11:07:29.680: INFO: Pod "e2e-test-nginx-rc-5mv2j": Phase="Running", Reason="", readiness=true. Elapsed: 4.010459517s May 3 11:07:29.680: INFO: Pod "e2e-test-nginx-rc-5mv2j" satisfied condition "running and ready" May 3 11:07:29.680: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-5mv2j] May 3 11:07:29.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-v4s4m' May 3 11:07:29.841: INFO: stderr: "" May 3 11:07:29.841: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 May 3 11:07:29.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-v4s4m' May 3 11:07:29.970: INFO: stderr: "" May 3 11:07:29.970: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:07:29.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-v4s4m" for this suite. 
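The "Kubectl run rc" case above relies on the deprecated run/v1 generator, which creates a ReplicationController directly from the image name. A rough sketch of the object that invocation produces, assuming the generator's usual run=<name> label convention (the label key is not shown in the log):

apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
  labels:
    run: e2e-test-nginx-rc          # assumed label key from the run/v1 generator
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine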
May 3 11:07:35.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:07:36.015: INFO: namespace: e2e-tests-kubectl-v4s4m, resource: bindings, ignored listing per whitelist May 3 11:07:36.064: INFO: namespace e2e-tests-kubectl-v4s4m deletion completed in 6.089483456s • [SLOW TEST:11.398 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:07:36.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 3 11:07:36.504: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 3 11:07:41.508: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 3 11:07:41.508: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 3 11:07:43.512: INFO: Creating deployment "test-rollover-deployment" May 3 11:07:43.529: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 3 11:07:45.534: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 3 11:07:45.540: INFO: Ensure that both replica sets have 1 created replica May 3 11:07:45.545: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 3 11:07:45.551: INFO: Updating deployment test-rollover-deployment May 3 11:07:45.551: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 3 11:07:47.643: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 3 11:07:47.649: INFO: Make sure deployment "test-rollover-deployment" is complete May 3 11:07:47.655: INFO: all replica sets need to contain the pod-template-hash label May 3 11:07:47.655: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724100863, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724100863, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724100865, loc:(*time.Location)(0x7950ac0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724100863, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 3 11:07:49.661: INFO: all replica sets need to contain the pod-template-hash label May 3 11:07:49.661: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724100863, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724100863, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724100865, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724100863, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 3 11:07:51.662: INFO: all replica sets need to contain the pod-template-hash label May 3 11:07:51.662: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724100863, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724100863, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724100869, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724100863, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 3 11:07:53.662: INFO: all replica sets need to contain the pod-template-hash label May 3 11:07:53.662: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724100863, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724100863, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724100869, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724100863, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 3 11:07:55.662: INFO: all replica sets need to contain the pod-template-hash label May 3 11:07:55.663: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, 
Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724100863, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724100863, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724100869, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724100863, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 3 11:07:57.664: INFO: all replica sets need to contain the pod-template-hash label May 3 11:07:57.664: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724100863, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724100863, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724100869, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724100863, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 3 11:07:59.664: INFO: all replica sets need to contain the pod-template-hash label May 3 11:07:59.664: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724100863, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724100863, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724100869, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724100863, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 3 11:08:01.663: INFO: May 3 11:08:01.663: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 3 11:08:01.670: INFO: Deployment "test-rollover-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-cnh24,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cnh24/deployments/test-rollover-deployment,UID:4fce220a-8d2e-11ea-99e8-0242ac110002,ResourceVersion:8520492,Generation:2,CreationTimestamp:2020-05-03 11:07:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-03 11:07:43 +0000 UTC 2020-05-03 11:07:43 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-03 11:08:00 +0000 UTC 2020-05-03 11:07:43 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 3 11:08:01.674: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-cnh24,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cnh24/replicasets/test-rollover-deployment-5b8479fdb6,UID:51052eec-8d2e-11ea-99e8-0242ac110002,ResourceVersion:8520482,Generation:2,CreationTimestamp:2020-05-03 11:07:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 4fce220a-8d2e-11ea-99e8-0242ac110002 0xc0020e2ba7 0xc0020e2ba8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 3 11:08:01.674: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 3 11:08:01.674: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-cnh24,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cnh24/replicasets/test-rollover-controller,UID:4b9b9e85-8d2e-11ea-99e8-0242ac110002,ResourceVersion:8520491,Generation:2,CreationTimestamp:2020-05-03 11:07:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 4fce220a-8d2e-11ea-99e8-0242ac110002 0xc0020e215f 0xc0020e2170}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 3 11:08:01.674: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-cnh24,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cnh24/replicasets/test-rollover-deployment-58494b7559,UID:4fd1bd98-8d2e-11ea-99e8-0242ac110002,ResourceVersion:8520446,Generation:2,CreationTimestamp:2020-05-03 11:07:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 4fce220a-8d2e-11ea-99e8-0242ac110002 0xc0020e2887 0xc0020e2888}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 3 11:08:01.678: INFO: Pod "test-rollover-deployment-5b8479fdb6-n2b72" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-n2b72,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-cnh24,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cnh24/pods/test-rollover-deployment-5b8479fdb6-n2b72,UID:5128704d-8d2e-11ea-99e8-0242ac110002,ResourceVersion:8520460,Generation:0,CreationTimestamp:2020-05-03 11:07:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 51052eec-8d2e-11ea-99e8-0242ac110002 0xc001737af7 0xc001737af8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8qglz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8qglz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-8qglz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001737b70} {node.kubernetes.io/unreachable Exists NoExecute 0xc001737b90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:07:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:07:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:07:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-05-03 11:07:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.19,StartTime:2020-05-03 11:07:45 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-03 11:07:48 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://233d4f3f1528ba0e923ae17b8dcaebee53afa8591f0dde74d3d8749d77e687f5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:08:01.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-cnh24" for this suite. May 3 11:08:09.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:08:09.719: INFO: namespace: e2e-tests-deployment-cnh24, resource: bindings, ignored listing per whitelist May 3 11:08:09.774: INFO: namespace e2e-tests-deployment-cnh24 deletion completed in 8.092733117s • [SLOW TEST:33.710 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:08:09.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium May 3 11:08:09.916: INFO: Waiting up to 5m0s for pod "pod-5f89bb43-8d2e-11ea-b78d-0242ac110017" in namespace "e2e-tests-emptydir-26wxd" to be "success or failure" May 3 11:08:09.936: INFO: Pod "pod-5f89bb43-8d2e-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 19.789694ms May 3 11:08:11.963: INFO: Pod "pod-5f89bb43-8d2e-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047319904s May 3 11:08:13.967: INFO: Pod "pod-5f89bb43-8d2e-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.051547968s STEP: Saw pod success May 3 11:08:13.967: INFO: Pod "pod-5f89bb43-8d2e-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:08:13.970: INFO: Trying to get logs from node hunter-worker2 pod pod-5f89bb43-8d2e-11ea-b78d-0242ac110017 container test-container: STEP: delete the pod May 3 11:08:14.010: INFO: Waiting for pod pod-5f89bb43-8d2e-11ea-b78d-0242ac110017 to disappear May 3 11:08:14.027: INFO: Pod pod-5f89bb43-8d2e-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:08:14.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-26wxd" for this suite. May 3 11:08:20.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:08:20.060: INFO: namespace: e2e-tests-emptydir-26wxd, resource: bindings, ignored listing per whitelist May 3 11:08:20.130: INFO: namespace e2e-tests-emptydir-26wxd deletion completed in 6.0980481s • [SLOW TEST:10.355 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:08:20.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 3 11:08:20.198: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:08:21.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-qkhdj" for this suite. 
May 3 11:08:27.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:08:27.408: INFO: namespace: e2e-tests-custom-resource-definition-qkhdj, resource: bindings, ignored listing per whitelist May 3 11:08:27.456: INFO: namespace e2e-tests-custom-resource-definition-qkhdj deletion completed in 6.164739662s • [SLOW TEST:7.326 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:08:27.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-thmgw May 3 11:08:31.579: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-thmgw STEP: checking the pod's current state and verifying that restartCount is present May 3 11:08:31.582: INFO: Initial restart count of pod liveness-http is 0 May 3 11:08:49.748: INFO: Restart count of pod e2e-tests-container-probe-thmgw/liveness-http is now 1 (18.166696989s elapsed) May 3 11:09:09.801: INFO: Restart count of pod e2e-tests-container-probe-thmgw/liveness-http is now 2 (38.219228178s elapsed) May 3 11:09:29.999: INFO: Restart count of pod e2e-tests-container-probe-thmgw/liveness-http is now 3 (58.417453474s elapsed) May 3 11:09:50.234: INFO: Restart count of pod e2e-tests-container-probe-thmgw/liveness-http is now 4 (1m18.652217548s elapsed) May 3 11:10:52.478: INFO: Restart count of pod e2e-tests-container-probe-thmgw/liveness-http is now 5 (2m20.896552498s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:10:52.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-thmgw" for this suite. 
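The liveness-http pod whose restart count climbs from 0 to 5 above is driven by an HTTP liveness probe against /healthz; with restartPolicy Always (the default), every probe failure restarts the container and increments restartCount. A minimal sketch of such a pod; the image, port, and probe timings are assumptions for illustration, not values read from the log:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
  labels:
    test: liveness
spec:
  containers:
  - name: liveness
    image: gcr.io/kubernetes-e2e-test-images/liveness:1.0   # assumed; any server that starts failing /healthz works
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080            # assumed port
      initialDelaySeconds: 15
      failureThreshold: 1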
May 3 11:10:58.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:10:58.675: INFO: namespace: e2e-tests-container-probe-thmgw, resource: bindings, ignored listing per whitelist May 3 11:10:58.696: INFO: namespace e2e-tests-container-probe-thmgw deletion completed in 6.15161529s • [SLOW TEST:151.239 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:10:58.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments May 3 11:10:58.819: INFO: Waiting up to 5m0s for pod "client-containers-c43517c5-8d2e-11ea-b78d-0242ac110017" in namespace "e2e-tests-containers-pmc9d" to be "success or failure" May 3 11:10:58.823: INFO: Pod "client-containers-c43517c5-8d2e-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.490672ms May 3 11:11:00.828: INFO: Pod "client-containers-c43517c5-8d2e-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009220628s May 3 11:11:02.832: INFO: Pod "client-containers-c43517c5-8d2e-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013506283s STEP: Saw pod success May 3 11:11:02.832: INFO: Pod "client-containers-c43517c5-8d2e-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:11:02.836: INFO: Trying to get logs from node hunter-worker2 pod client-containers-c43517c5-8d2e-11ea-b78d-0242ac110017 container test-container: STEP: delete the pod May 3 11:11:02.997: INFO: Waiting for pod client-containers-c43517c5-8d2e-11ea-b78d-0242ac110017 to disappear May 3 11:11:03.002: INFO: Pod client-containers-c43517c5-8d2e-11ea-b78d-0242ac110017 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:11:03.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-pmc9d" for this suite. 
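The "override the image's default arguments (docker cmd)" case above turns on the pod spec's args field, which replaces the image's CMD (command would replace ENTRYPOINT). A small stand-in example, assuming busybox rather than the e2e entrypoint-tester image actually used by the suite:

apiVersion: v1
kind: Pod
metadata:
  name: args-override-demo            # illustrative name, not the pod from this run
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                    # stand-in image for illustration
    args: ["echo", "overridden arguments"]   # args replaces the image's default CMD ("sh" for busybox)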
May 3 11:11:09.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:11:09.162: INFO: namespace: e2e-tests-containers-pmc9d, resource: bindings, ignored listing per whitelist May 3 11:11:09.202: INFO: namespace e2e-tests-containers-pmc9d deletion completed in 6.132597311s • [SLOW TEST:10.505 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:11:09.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-ca8348d0-8d2e-11ea-b78d-0242ac110017 STEP: Creating a pod to test consume configMaps May 3 11:11:09.423: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ca844869-8d2e-11ea-b78d-0242ac110017" in namespace "e2e-tests-projected-4kjgc" to be "success or failure" May 3 11:11:09.450: INFO: Pod "pod-projected-configmaps-ca844869-8d2e-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 26.682243ms May 3 11:11:11.454: INFO: Pod "pod-projected-configmaps-ca844869-8d2e-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031023861s May 3 11:11:13.459: INFO: Pod "pod-projected-configmaps-ca844869-8d2e-11ea-b78d-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.035692569s May 3 11:11:15.463: INFO: Pod "pod-projected-configmaps-ca844869-8d2e-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.039940048s STEP: Saw pod success May 3 11:11:15.463: INFO: Pod "pod-projected-configmaps-ca844869-8d2e-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:11:15.466: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-ca844869-8d2e-11ea-b78d-0242ac110017 container projected-configmap-volume-test: STEP: delete the pod May 3 11:11:15.554: INFO: Waiting for pod pod-projected-configmaps-ca844869-8d2e-11ea-b78d-0242ac110017 to disappear May 3 11:11:15.576: INFO: Pod pod-projected-configmaps-ca844869-8d2e-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:11:15.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-4kjgc" for this suite. 
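The projected configMap case above mounts the same ConfigMap through two separate projected volumes in one pod and reads both copies. A sketch of that shape, with assumed names and a busybox reader standing in for the e2e mounttest image:

apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-demo       # illustrative; the test generates a unique name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                     # stand-in for the e2e mounttest image
    command: ["sh", "-c", "cat /etc/projected-configmap-volume-1/data-1 /etc/projected-configmap-volume-2/data-1"]
    volumeMounts:
    - name: projected-configmap-volume-1
      mountPath: /etc/projected-configmap-volume-1
    - name: projected-configmap-volume-2
      mountPath: /etc/projected-configmap-volume-2
  volumes:
  - name: projected-configmap-volume-1
    projected:
      sources:
      - configMap:
          name: projected-configmap-demo
  - name: projected-configmap-volume-2
    projected:
      sources:
      - configMap:
          name: projected-configmap-demo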
May 3 11:11:21.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:11:21.693: INFO: namespace: e2e-tests-projected-4kjgc, resource: bindings, ignored listing per whitelist May 3 11:11:21.748: INFO: namespace e2e-tests-projected-4kjgc deletion completed in 6.169396714s • [SLOW TEST:12.546 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:11:21.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller May 3 11:11:21.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-skgsv' May 3 11:11:22.249: INFO: stderr: "" May 3 11:11:22.249: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 3 11:11:22.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-skgsv' May 3 11:11:22.439: INFO: stderr: "" May 3 11:11:22.439: INFO: stdout: "update-demo-nautilus-9mc7t update-demo-nautilus-ndhmp " May 3 11:11:22.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9mc7t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-skgsv' May 3 11:11:22.572: INFO: stderr: "" May 3 11:11:22.572: INFO: stdout: "" May 3 11:11:22.572: INFO: update-demo-nautilus-9mc7t is created but not running May 3 11:11:27.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-skgsv' May 3 11:11:27.675: INFO: stderr: "" May 3 11:11:27.675: INFO: stdout: "update-demo-nautilus-9mc7t update-demo-nautilus-ndhmp " May 3 11:11:27.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9mc7t -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-skgsv' May 3 11:11:27.824: INFO: stderr: "" May 3 11:11:27.824: INFO: stdout: "true" May 3 11:11:27.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9mc7t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-skgsv' May 3 11:11:27.926: INFO: stderr: "" May 3 11:11:27.926: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 3 11:11:27.926: INFO: validating pod update-demo-nautilus-9mc7t May 3 11:11:27.930: INFO: got data: { "image": "nautilus.jpg" } May 3 11:11:27.930: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 3 11:11:27.930: INFO: update-demo-nautilus-9mc7t is verified up and running May 3 11:11:27.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ndhmp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-skgsv' May 3 11:11:28.026: INFO: stderr: "" May 3 11:11:28.026: INFO: stdout: "true" May 3 11:11:28.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ndhmp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-skgsv' May 3 11:11:28.125: INFO: stderr: "" May 3 11:11:28.125: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 3 11:11:28.125: INFO: validating pod update-demo-nautilus-ndhmp May 3 11:11:28.130: INFO: got data: { "image": "nautilus.jpg" } May 3 11:11:28.130: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 3 11:11:28.130: INFO: update-demo-nautilus-ndhmp is verified up and running STEP: rolling-update to new replication controller May 3 11:11:28.132: INFO: scanned /root for discovery docs: May 3 11:11:28.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-skgsv' May 3 11:11:50.865: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 3 11:11:50.865: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 3 11:11:50.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-skgsv' May 3 11:11:50.986: INFO: stderr: "" May 3 11:11:50.986: INFO: stdout: "update-demo-kitten-cnn67 update-demo-kitten-plf6x update-demo-nautilus-9mc7t " STEP: Replicas for name=update-demo: expected=2 actual=3 May 3 11:11:55.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-skgsv' May 3 11:11:56.107: INFO: stderr: "" May 3 11:11:56.107: INFO: stdout: "update-demo-kitten-cnn67 update-demo-kitten-plf6x " May 3 11:11:56.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-cnn67 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-skgsv' May 3 11:11:56.259: INFO: stderr: "" May 3 11:11:56.259: INFO: stdout: "true" May 3 11:11:56.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-cnn67 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-skgsv' May 3 11:11:56.360: INFO: stderr: "" May 3 11:11:56.360: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 3 11:11:56.360: INFO: validating pod update-demo-kitten-cnn67 May 3 11:11:56.363: INFO: got data: { "image": "kitten.jpg" } May 3 11:11:56.363: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 3 11:11:56.363: INFO: update-demo-kitten-cnn67 is verified up and running May 3 11:11:56.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-plf6x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-skgsv' May 3 11:11:56.451: INFO: stderr: "" May 3 11:11:56.451: INFO: stdout: "true" May 3 11:11:56.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-plf6x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-skgsv' May 3 11:11:56.546: INFO: stderr: "" May 3 11:11:56.546: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 3 11:11:56.546: INFO: validating pod update-demo-kitten-plf6x May 3 11:11:56.551: INFO: got data: { "image": "kitten.jpg" } May 3 11:11:56.551: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 3 11:11:56.551: INFO: update-demo-kitten-plf6x is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:11:56.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-skgsv" for this suite. 
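In the Update Demo case above, kubectl rolling-update reads a replacement ReplicationController from stdin (the kitten one, per the stdout) and scales it up one pod at a time while scaling the nautilus controller down. A sketch of the initial controller created at the start of the test, built from values visible in the log (name, replicas, name=update-demo label, container name, nautilus image); the extra distinguishing label is an assumption, since rolling-update needs the old and new controllers to select different pod sets:

apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
    version: nautilus               # assumed extra selector key to keep selectors disjoint
  template:
    metadata:
      labels:
        name: update-demo
        version: nautilus
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80          # assumed port

The kitten replacement fed to rolling-update would differ only in its name, its distinguishing label, and the image (gcr.io/kubernetes-e2e-test-images/kitten:1.0, per the verification above).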
May 3 11:12:20.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:12:20.599: INFO: namespace: e2e-tests-kubectl-skgsv, resource: bindings, ignored listing per whitelist May 3 11:12:20.673: INFO: namespace e2e-tests-kubectl-skgsv deletion completed in 24.096084928s • [SLOW TEST:58.925 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:12:20.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-f511df87-8d2e-11ea-b78d-0242ac110017 STEP: Creating a pod to test consume secrets May 3 11:12:20.865: INFO: Waiting up to 5m0s for pod "pod-secrets-f51d980e-8d2e-11ea-b78d-0242ac110017" in namespace "e2e-tests-secrets-rdr8w" to be "success or failure" May 3 11:12:20.878: INFO: Pod "pod-secrets-f51d980e-8d2e-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 13.152374ms May 3 11:12:22.906: INFO: Pod "pod-secrets-f51d980e-8d2e-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041140785s May 3 11:12:24.910: INFO: Pod "pod-secrets-f51d980e-8d2e-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044726472s STEP: Saw pod success May 3 11:12:24.910: INFO: Pod "pod-secrets-f51d980e-8d2e-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:12:24.912: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-f51d980e-8d2e-11ea-b78d-0242ac110017 container secret-volume-test: STEP: delete the pod May 3 11:12:24.962: INFO: Waiting for pod pod-secrets-f51d980e-8d2e-11ea-b78d-0242ac110017 to disappear May 3 11:12:24.986: INFO: Pod pod-secrets-f51d980e-8d2e-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:12:24.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-rdr8w" for this suite. 
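The transcript above only records the secret name, the pod name and the "success or failure" condition; the pod spec itself is not shown. As a rough sketch of the shape being exercised (a secret volume with an explicit defaultMode, mounted into a pod that runs as a non-root user with an fsGroup), the following uses the corev1 API types with assumed uid/gid/mode values and a placeholder image:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Assumed illustrative values; the log does not show the actual numbers.
	uid, fsGroup, mode := int64(1000), int64(1001), int32(0440)
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &uid,    // non-root
				FSGroup:   &fsGroup, // group ownership applied to the mounted files
			},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "secret-test-example",
						DefaultMode: &mode, // file mode applied to the projected keys
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "secret-volume-test",
				Image: "busybox", // placeholder image
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
				}},
			}},
		},
	}
	fmt.Printf("runAsUser=%d fsGroup=%d defaultMode=%o\n", *pod.Spec.SecurityContext.RunAsUser, *pod.Spec.SecurityContext.FSGroup, mode)
}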
May 3 11:12:31.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:12:31.028: INFO: namespace: e2e-tests-secrets-rdr8w, resource: bindings, ignored listing per whitelist May 3 11:12:31.080: INFO: namespace e2e-tests-secrets-rdr8w deletion completed in 6.08085108s • [SLOW TEST:10.406 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:12:31.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-vf9qw [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-vf9qw STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-vf9qw May 3 11:12:31.262: INFO: Found 0 stateful pods, waiting for 1 May 3 11:12:41.266: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 3 11:12:41.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vf9qw ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 3 11:12:41.499: INFO: stderr: "I0503 11:12:41.399300 908 log.go:172] (0xc0007f0160) (0xc000716640) Create stream\nI0503 11:12:41.399385 908 log.go:172] (0xc0007f0160) (0xc000716640) Stream added, broadcasting: 1\nI0503 11:12:41.402620 908 log.go:172] (0xc0007f0160) Reply frame received for 1\nI0503 11:12:41.402679 908 log.go:172] (0xc0007f0160) (0xc0007d2d20) Create stream\nI0503 11:12:41.402700 908 log.go:172] (0xc0007f0160) (0xc0007d2d20) Stream added, broadcasting: 3\nI0503 11:12:41.404004 908 log.go:172] (0xc0007f0160) Reply frame received for 3\nI0503 11:12:41.404052 908 log.go:172] (0xc0007f0160) (0xc0007166e0) Create stream\nI0503 11:12:41.404067 908 log.go:172] (0xc0007f0160) (0xc0007166e0) Stream added, broadcasting: 5\nI0503 11:12:41.405015 908 log.go:172] (0xc0007f0160) Reply frame received for 5\nI0503 11:12:41.492237 908 log.go:172] (0xc0007f0160) Data frame received for 5\nI0503 11:12:41.492292 908 log.go:172] (0xc0007166e0) (5) 
Data frame handling\nI0503 11:12:41.492322 908 log.go:172] (0xc0007f0160) Data frame received for 3\nI0503 11:12:41.492336 908 log.go:172] (0xc0007d2d20) (3) Data frame handling\nI0503 11:12:41.492348 908 log.go:172] (0xc0007d2d20) (3) Data frame sent\nI0503 11:12:41.492358 908 log.go:172] (0xc0007f0160) Data frame received for 3\nI0503 11:12:41.492366 908 log.go:172] (0xc0007d2d20) (3) Data frame handling\nI0503 11:12:41.494261 908 log.go:172] (0xc0007f0160) Data frame received for 1\nI0503 11:12:41.494291 908 log.go:172] (0xc000716640) (1) Data frame handling\nI0503 11:12:41.494307 908 log.go:172] (0xc000716640) (1) Data frame sent\nI0503 11:12:41.494325 908 log.go:172] (0xc0007f0160) (0xc000716640) Stream removed, broadcasting: 1\nI0503 11:12:41.494372 908 log.go:172] (0xc0007f0160) Go away received\nI0503 11:12:41.494638 908 log.go:172] (0xc0007f0160) (0xc000716640) Stream removed, broadcasting: 1\nI0503 11:12:41.494688 908 log.go:172] (0xc0007f0160) (0xc0007d2d20) Stream removed, broadcasting: 3\nI0503 11:12:41.494722 908 log.go:172] (0xc0007f0160) (0xc0007166e0) Stream removed, broadcasting: 5\n" May 3 11:12:41.499: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 3 11:12:41.499: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 3 11:12:41.502: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 3 11:12:51.508: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 3 11:12:51.508: INFO: Waiting for statefulset status.replicas updated to 0 May 3 11:12:51.526: INFO: POD NODE PHASE GRACE CONDITIONS May 3 11:12:51.526: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:31 +0000 UTC }] May 3 11:12:51.526: INFO: May 3 11:12:51.526: INFO: StatefulSet ss has not reached scale 3, at 1 May 3 11:12:52.532: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991617682s May 3 11:12:53.686: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986510157s May 3 11:12:55.016: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.831858368s May 3 11:12:56.021: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.502116116s May 3 11:12:57.238: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.496602878s May 3 11:12:58.301: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.279654297s May 3 11:12:59.311: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.216860653s May 3 11:13:00.340: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.206703603s May 3 11:13:01.344: INFO: Verifying statefulset ss doesn't scale past 3 for another 178.261159ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-vf9qw May 3 11:13:02.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vf9qw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 3 11:13:02.546: 
INFO: stderr: "I0503 11:13:02.468026 930 log.go:172] (0xc0008502c0) (0xc000712640) Create stream\nI0503 11:13:02.468088 930 log.go:172] (0xc0008502c0) (0xc000712640) Stream added, broadcasting: 1\nI0503 11:13:02.470674 930 log.go:172] (0xc0008502c0) Reply frame received for 1\nI0503 11:13:02.470722 930 log.go:172] (0xc0008502c0) (0xc0007a8e60) Create stream\nI0503 11:13:02.470735 930 log.go:172] (0xc0008502c0) (0xc0007a8e60) Stream added, broadcasting: 3\nI0503 11:13:02.471492 930 log.go:172] (0xc0008502c0) Reply frame received for 3\nI0503 11:13:02.471529 930 log.go:172] (0xc0008502c0) (0xc0007126e0) Create stream\nI0503 11:13:02.471540 930 log.go:172] (0xc0008502c0) (0xc0007126e0) Stream added, broadcasting: 5\nI0503 11:13:02.472294 930 log.go:172] (0xc0008502c0) Reply frame received for 5\nI0503 11:13:02.539536 930 log.go:172] (0xc0008502c0) Data frame received for 3\nI0503 11:13:02.539588 930 log.go:172] (0xc0007a8e60) (3) Data frame handling\nI0503 11:13:02.539600 930 log.go:172] (0xc0007a8e60) (3) Data frame sent\nI0503 11:13:02.539607 930 log.go:172] (0xc0008502c0) Data frame received for 3\nI0503 11:13:02.539614 930 log.go:172] (0xc0007a8e60) (3) Data frame handling\nI0503 11:13:02.539645 930 log.go:172] (0xc0008502c0) Data frame received for 5\nI0503 11:13:02.539656 930 log.go:172] (0xc0007126e0) (5) Data frame handling\nI0503 11:13:02.541771 930 log.go:172] (0xc0008502c0) Data frame received for 1\nI0503 11:13:02.541801 930 log.go:172] (0xc000712640) (1) Data frame handling\nI0503 11:13:02.541823 930 log.go:172] (0xc000712640) (1) Data frame sent\nI0503 11:13:02.541835 930 log.go:172] (0xc0008502c0) (0xc000712640) Stream removed, broadcasting: 1\nI0503 11:13:02.541850 930 log.go:172] (0xc0008502c0) Go away received\nI0503 11:13:02.542094 930 log.go:172] (0xc0008502c0) (0xc000712640) Stream removed, broadcasting: 1\nI0503 11:13:02.542129 930 log.go:172] (0xc0008502c0) (0xc0007a8e60) Stream removed, broadcasting: 3\nI0503 11:13:02.542150 930 log.go:172] (0xc0008502c0) (0xc0007126e0) Stream removed, broadcasting: 5\n" May 3 11:13:02.546: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 3 11:13:02.546: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 3 11:13:02.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vf9qw ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 3 11:13:02.802: INFO: stderr: "I0503 11:13:02.728228 952 log.go:172] (0xc00083c2c0) (0xc000748640) Create stream\nI0503 11:13:02.728336 952 log.go:172] (0xc00083c2c0) (0xc000748640) Stream added, broadcasting: 1\nI0503 11:13:02.730285 952 log.go:172] (0xc00083c2c0) Reply frame received for 1\nI0503 11:13:02.730328 952 log.go:172] (0xc00083c2c0) (0xc000682be0) Create stream\nI0503 11:13:02.730354 952 log.go:172] (0xc00083c2c0) (0xc000682be0) Stream added, broadcasting: 3\nI0503 11:13:02.731107 952 log.go:172] (0xc00083c2c0) Reply frame received for 3\nI0503 11:13:02.731146 952 log.go:172] (0xc00083c2c0) (0xc00040a000) Create stream\nI0503 11:13:02.731164 952 log.go:172] (0xc00083c2c0) (0xc00040a000) Stream added, broadcasting: 5\nI0503 11:13:02.731945 952 log.go:172] (0xc00083c2c0) Reply frame received for 5\nI0503 11:13:02.798185 952 log.go:172] (0xc00083c2c0) Data frame received for 5\nI0503 11:13:02.798300 952 log.go:172] (0xc00040a000) (5) Data frame handling\nI0503 11:13:02.798321 952 log.go:172] (0xc00040a000) 
(5) Data frame sent\nI0503 11:13:02.798331 952 log.go:172] (0xc00083c2c0) Data frame received for 5\nI0503 11:13:02.798338 952 log.go:172] (0xc00040a000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0503 11:13:02.798379 952 log.go:172] (0xc00083c2c0) Data frame received for 3\nI0503 11:13:02.798405 952 log.go:172] (0xc000682be0) (3) Data frame handling\nI0503 11:13:02.798427 952 log.go:172] (0xc000682be0) (3) Data frame sent\nI0503 11:13:02.798439 952 log.go:172] (0xc00083c2c0) Data frame received for 3\nI0503 11:13:02.798448 952 log.go:172] (0xc000682be0) (3) Data frame handling\nI0503 11:13:02.799720 952 log.go:172] (0xc00083c2c0) Data frame received for 1\nI0503 11:13:02.799737 952 log.go:172] (0xc000748640) (1) Data frame handling\nI0503 11:13:02.799748 952 log.go:172] (0xc000748640) (1) Data frame sent\nI0503 11:13:02.799764 952 log.go:172] (0xc00083c2c0) (0xc000748640) Stream removed, broadcasting: 1\nI0503 11:13:02.799852 952 log.go:172] (0xc00083c2c0) Go away received\nI0503 11:13:02.799911 952 log.go:172] (0xc00083c2c0) (0xc000748640) Stream removed, broadcasting: 1\nI0503 11:13:02.799928 952 log.go:172] (0xc00083c2c0) (0xc000682be0) Stream removed, broadcasting: 3\nI0503 11:13:02.799942 952 log.go:172] (0xc00083c2c0) (0xc00040a000) Stream removed, broadcasting: 5\n" May 3 11:13:02.802: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 3 11:13:02.803: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 3 11:13:02.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vf9qw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 3 11:13:03.016: INFO: stderr: "I0503 11:13:02.927385 975 log.go:172] (0xc0006d0210) (0xc00072c5a0) Create stream\nI0503 11:13:02.927449 975 log.go:172] (0xc0006d0210) (0xc00072c5a0) Stream added, broadcasting: 1\nI0503 11:13:02.930015 975 log.go:172] (0xc0006d0210) Reply frame received for 1\nI0503 11:13:02.930080 975 log.go:172] (0xc0006d0210) (0xc0004b8be0) Create stream\nI0503 11:13:02.930110 975 log.go:172] (0xc0006d0210) (0xc0004b8be0) Stream added, broadcasting: 3\nI0503 11:13:02.931229 975 log.go:172] (0xc0006d0210) Reply frame received for 3\nI0503 11:13:02.931262 975 log.go:172] (0xc0006d0210) (0xc00035c000) Create stream\nI0503 11:13:02.931271 975 log.go:172] (0xc0006d0210) (0xc00035c000) Stream added, broadcasting: 5\nI0503 11:13:02.932171 975 log.go:172] (0xc0006d0210) Reply frame received for 5\nI0503 11:13:03.010691 975 log.go:172] (0xc0006d0210) Data frame received for 5\nI0503 11:13:03.010741 975 log.go:172] (0xc00035c000) (5) Data frame handling\nI0503 11:13:03.010761 975 log.go:172] (0xc00035c000) (5) Data frame sent\nI0503 11:13:03.010773 975 log.go:172] (0xc0006d0210) Data frame received for 5\nI0503 11:13:03.010791 975 log.go:172] (0xc00035c000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0503 11:13:03.010841 975 log.go:172] (0xc0006d0210) Data frame received for 3\nI0503 11:13:03.010890 975 log.go:172] (0xc0004b8be0) (3) Data frame handling\nI0503 11:13:03.010916 975 log.go:172] (0xc0004b8be0) (3) Data frame sent\nI0503 11:13:03.010937 975 log.go:172] (0xc0006d0210) Data frame received for 3\nI0503 11:13:03.010952 975 log.go:172] (0xc0004b8be0) (3) Data frame handling\nI0503 11:13:03.012582 975 log.go:172] (0xc0006d0210) Data frame received for 1\nI0503 
11:13:03.012620 975 log.go:172] (0xc00072c5a0) (1) Data frame handling\nI0503 11:13:03.012659 975 log.go:172] (0xc00072c5a0) (1) Data frame sent\nI0503 11:13:03.012692 975 log.go:172] (0xc0006d0210) (0xc00072c5a0) Stream removed, broadcasting: 1\nI0503 11:13:03.012808 975 log.go:172] (0xc0006d0210) Go away received\nI0503 11:13:03.013066 975 log.go:172] (0xc0006d0210) (0xc00072c5a0) Stream removed, broadcasting: 1\nI0503 11:13:03.013108 975 log.go:172] (0xc0006d0210) (0xc0004b8be0) Stream removed, broadcasting: 3\nI0503 11:13:03.013495 975 log.go:172] (0xc0006d0210) (0xc00035c000) Stream removed, broadcasting: 5\n" May 3 11:13:03.017: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 3 11:13:03.017: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 3 11:13:03.046: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 3 11:13:13.075: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 3 11:13:13.075: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 3 11:13:13.075: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 3 11:13:13.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vf9qw ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 3 11:13:13.293: INFO: stderr: "I0503 11:13:13.198797 997 log.go:172] (0xc0008942c0) (0xc00075e640) Create stream\nI0503 11:13:13.198984 997 log.go:172] (0xc0008942c0) (0xc00075e640) Stream added, broadcasting: 1\nI0503 11:13:13.200865 997 log.go:172] (0xc0008942c0) Reply frame received for 1\nI0503 11:13:13.200893 997 log.go:172] (0xc0008942c0) (0xc000680d20) Create stream\nI0503 11:13:13.200905 997 log.go:172] (0xc0008942c0) (0xc000680d20) Stream added, broadcasting: 3\nI0503 11:13:13.201772 997 log.go:172] (0xc0008942c0) Reply frame received for 3\nI0503 11:13:13.201800 997 log.go:172] (0xc0008942c0) (0xc00075e6e0) Create stream\nI0503 11:13:13.201811 997 log.go:172] (0xc0008942c0) (0xc00075e6e0) Stream added, broadcasting: 5\nI0503 11:13:13.202610 997 log.go:172] (0xc0008942c0) Reply frame received for 5\nI0503 11:13:13.288780 997 log.go:172] (0xc0008942c0) Data frame received for 3\nI0503 11:13:13.288825 997 log.go:172] (0xc000680d20) (3) Data frame handling\nI0503 11:13:13.288838 997 log.go:172] (0xc000680d20) (3) Data frame sent\nI0503 11:13:13.288847 997 log.go:172] (0xc0008942c0) Data frame received for 3\nI0503 11:13:13.288854 997 log.go:172] (0xc000680d20) (3) Data frame handling\nI0503 11:13:13.288889 997 log.go:172] (0xc0008942c0) Data frame received for 5\nI0503 11:13:13.288897 997 log.go:172] (0xc00075e6e0) (5) Data frame handling\nI0503 11:13:13.290139 997 log.go:172] (0xc0008942c0) Data frame received for 1\nI0503 11:13:13.290172 997 log.go:172] (0xc00075e640) (1) Data frame handling\nI0503 11:13:13.290189 997 log.go:172] (0xc00075e640) (1) Data frame sent\nI0503 11:13:13.290219 997 log.go:172] (0xc0008942c0) (0xc00075e640) Stream removed, broadcasting: 1\nI0503 11:13:13.290250 997 log.go:172] (0xc0008942c0) Go away received\nI0503 11:13:13.290454 997 log.go:172] (0xc0008942c0) (0xc00075e640) Stream removed, broadcasting: 1\nI0503 11:13:13.290481 997 log.go:172] (0xc0008942c0) (0xc000680d20) Stream removed, broadcasting: 
3\nI0503 11:13:13.290492 997 log.go:172] (0xc0008942c0) (0xc00075e6e0) Stream removed, broadcasting: 5\n" May 3 11:13:13.293: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 3 11:13:13.293: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 3 11:13:13.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vf9qw ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 3 11:13:13.598: INFO: stderr: "I0503 11:13:13.419107 1021 log.go:172] (0xc000138580) (0xc000597400) Create stream\nI0503 11:13:13.419164 1021 log.go:172] (0xc000138580) (0xc000597400) Stream added, broadcasting: 1\nI0503 11:13:13.421025 1021 log.go:172] (0xc000138580) Reply frame received for 1\nI0503 11:13:13.421071 1021 log.go:172] (0xc000138580) (0xc0002cc000) Create stream\nI0503 11:13:13.421088 1021 log.go:172] (0xc000138580) (0xc0002cc000) Stream added, broadcasting: 3\nI0503 11:13:13.421990 1021 log.go:172] (0xc000138580) Reply frame received for 3\nI0503 11:13:13.422029 1021 log.go:172] (0xc000138580) (0xc0006b2000) Create stream\nI0503 11:13:13.422037 1021 log.go:172] (0xc000138580) (0xc0006b2000) Stream added, broadcasting: 5\nI0503 11:13:13.422697 1021 log.go:172] (0xc000138580) Reply frame received for 5\nI0503 11:13:13.590612 1021 log.go:172] (0xc000138580) Data frame received for 3\nI0503 11:13:13.590763 1021 log.go:172] (0xc0002cc000) (3) Data frame handling\nI0503 11:13:13.590864 1021 log.go:172] (0xc0002cc000) (3) Data frame sent\nI0503 11:13:13.590899 1021 log.go:172] (0xc000138580) Data frame received for 3\nI0503 11:13:13.590919 1021 log.go:172] (0xc0002cc000) (3) Data frame handling\nI0503 11:13:13.590964 1021 log.go:172] (0xc000138580) Data frame received for 5\nI0503 11:13:13.591014 1021 log.go:172] (0xc0006b2000) (5) Data frame handling\nI0503 11:13:13.593298 1021 log.go:172] (0xc000138580) Data frame received for 1\nI0503 11:13:13.593331 1021 log.go:172] (0xc000597400) (1) Data frame handling\nI0503 11:13:13.593342 1021 log.go:172] (0xc000597400) (1) Data frame sent\nI0503 11:13:13.593354 1021 log.go:172] (0xc000138580) (0xc000597400) Stream removed, broadcasting: 1\nI0503 11:13:13.593504 1021 log.go:172] (0xc000138580) (0xc000597400) Stream removed, broadcasting: 1\nI0503 11:13:13.593530 1021 log.go:172] (0xc000138580) (0xc0002cc000) Stream removed, broadcasting: 3\nI0503 11:13:13.593546 1021 log.go:172] (0xc000138580) (0xc0006b2000) Stream removed, broadcasting: 5\n" May 3 11:13:13.598: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 3 11:13:13.598: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 3 11:13:13.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vf9qw ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 3 11:13:13.847: INFO: stderr: "I0503 11:13:13.717812 1043 log.go:172] (0xc0008042c0) (0xc000629400) Create stream\nI0503 11:13:13.717876 1043 log.go:172] (0xc0008042c0) (0xc000629400) Stream added, broadcasting: 1\nI0503 11:13:13.719790 1043 log.go:172] (0xc0008042c0) Reply frame received for 1\nI0503 11:13:13.719825 1043 log.go:172] (0xc0008042c0) (0xc000350000) Create stream\nI0503 11:13:13.719834 1043 log.go:172] (0xc0008042c0) (0xc000350000) Stream added, broadcasting: 3\nI0503 
11:13:13.720620 1043 log.go:172] (0xc0008042c0) Reply frame received for 3\nI0503 11:13:13.720648 1043 log.go:172] (0xc0008042c0) (0xc00052a000) Create stream\nI0503 11:13:13.720656 1043 log.go:172] (0xc0008042c0) (0xc00052a000) Stream added, broadcasting: 5\nI0503 11:13:13.721304 1043 log.go:172] (0xc0008042c0) Reply frame received for 5\nI0503 11:13:13.838205 1043 log.go:172] (0xc0008042c0) Data frame received for 3\nI0503 11:13:13.838255 1043 log.go:172] (0xc000350000) (3) Data frame handling\nI0503 11:13:13.838289 1043 log.go:172] (0xc000350000) (3) Data frame sent\nI0503 11:13:13.838308 1043 log.go:172] (0xc0008042c0) Data frame received for 3\nI0503 11:13:13.838324 1043 log.go:172] (0xc000350000) (3) Data frame handling\nI0503 11:13:13.839132 1043 log.go:172] (0xc0008042c0) Data frame received for 5\nI0503 11:13:13.839145 1043 log.go:172] (0xc00052a000) (5) Data frame handling\nI0503 11:13:13.842382 1043 log.go:172] (0xc0008042c0) Data frame received for 1\nI0503 11:13:13.842408 1043 log.go:172] (0xc000629400) (1) Data frame handling\nI0503 11:13:13.842429 1043 log.go:172] (0xc000629400) (1) Data frame sent\nI0503 11:13:13.842443 1043 log.go:172] (0xc0008042c0) (0xc000629400) Stream removed, broadcasting: 1\nI0503 11:13:13.842459 1043 log.go:172] (0xc0008042c0) Go away received\nI0503 11:13:13.842667 1043 log.go:172] (0xc0008042c0) (0xc000629400) Stream removed, broadcasting: 1\nI0503 11:13:13.842688 1043 log.go:172] (0xc0008042c0) (0xc000350000) Stream removed, broadcasting: 3\nI0503 11:13:13.842701 1043 log.go:172] (0xc0008042c0) (0xc00052a000) Stream removed, broadcasting: 5\n" May 3 11:13:13.847: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 3 11:13:13.847: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 3 11:13:13.847: INFO: Waiting for statefulset status.replicas updated to 0 May 3 11:13:13.850: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 3 11:13:23.858: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 3 11:13:23.858: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 3 11:13:23.858: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 3 11:13:23.875: INFO: POD NODE PHASE GRACE CONDITIONS May 3 11:13:23.875: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:31 +0000 UTC }] May 3 11:13:23.875: INFO: ss-1 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC }] May 3 11:13:23.875: INFO: ss-2 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC } 
{Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC }] May 3 11:13:23.875: INFO: May 3 11:13:23.875: INFO: StatefulSet ss has not reached scale 0, at 3 May 3 11:13:24.893: INFO: POD NODE PHASE GRACE CONDITIONS May 3 11:13:24.893: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:31 +0000 UTC }] May 3 11:13:24.893: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC }] May 3 11:13:24.893: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC }] May 3 11:13:24.893: INFO: May 3 11:13:24.893: INFO: StatefulSet ss has not reached scale 0, at 3 May 3 11:13:25.899: INFO: POD NODE PHASE GRACE CONDITIONS May 3 11:13:25.899: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:31 +0000 UTC }] May 3 11:13:25.899: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC }] May 3 11:13:25.899: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC }] May 3 11:13:25.899: INFO: May 3 11:13:25.899: INFO: StatefulSet ss has not reached scale 0, at 3 May 3 11:13:26.903: INFO: POD NODE PHASE GRACE CONDITIONS May 3 11:13:26.903: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:31 +0000 UTC }] May 3 11:13:26.903: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC }] May 3 11:13:26.903: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC }] May 3 11:13:26.903: INFO: May 3 11:13:26.903: INFO: StatefulSet ss has not reached scale 0, at 3 May 3 11:13:27.932: INFO: POD NODE PHASE GRACE CONDITIONS May 3 11:13:27.932: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:31 +0000 UTC }] May 3 11:13:27.932: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC }] May 3 11:13:27.932: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC }] May 3 11:13:27.932: INFO: May 3 11:13:27.932: INFO: StatefulSet ss has not reached scale 0, at 3 May 3 11:13:29.014: INFO: POD NODE PHASE GRACE CONDITIONS May 3 
11:13:29.014: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:31 +0000 UTC }] May 3 11:13:29.014: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC }] May 3 11:13:29.014: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC }] May 3 11:13:29.014: INFO: May 3 11:13:29.014: INFO: StatefulSet ss has not reached scale 0, at 3 May 3 11:13:30.019: INFO: POD NODE PHASE GRACE CONDITIONS May 3 11:13:30.019: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:31 +0000 UTC }] May 3 11:13:30.019: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC }] May 3 11:13:30.019: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC }] May 3 11:13:30.019: INFO: May 3 11:13:30.019: INFO: StatefulSet ss has not reached scale 0, at 3 May 3 11:13:31.025: INFO: POD NODE PHASE GRACE CONDITIONS May 3 11:13:31.025: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:31 +0000 UTC }] May 3 11:13:31.025: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC }] May 3 11:13:31.025: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:13:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:12:51 +0000 UTC }] May 3 11:13:31.025: INFO: May 3 11:13:31.025: INFO: StatefulSet ss has not reached scale 0, at 3 May 3 11:13:32.028: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.839380018s May 3 11:13:33.032: INFO: Verifying statefulset ss doesn't scale past 0 for another 836.036619ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-vf9qw May 3 11:13:34.036: INFO: Scaling statefulset ss to 0 May 3 11:13:34.046: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 3 11:13:34.048: INFO: Deleting all statefulset in ns e2e-tests-statefulset-vf9qw May 3 11:13:34.051: INFO: Scaling statefulset ss to 0 May 3 11:13:34.058: INFO: Waiting for statefulset status.replicas updated to 0 May 3 11:13:34.060: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:13:34.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-vf9qw" for this suite. 
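The burst-scaling test above never changes the StatefulSet's readiness probe directly; it flips readiness by moving nginx's index.html out of (and back into) the web root via kubectl exec, which is why every "mv -v ..." command and the occasional "mv: can't rename" message appear in the transcript. A small stand-alone helper (not the suite's code) that reproduces the same trick with the namespace and pod names from this run:

package main

import (
	"fmt"
	"os/exec"
)

// setReady makes the nginx readiness check fail (ready=false) by hiding
// index.html in /tmp, or pass again (ready=true) by moving it back,
// exactly as the commands in the log above do.
func setReady(pod string, ready bool) error {
	cmd := "mv -v /usr/share/nginx/html/index.html /tmp/ || true" // break readiness
	if ready {
		cmd = "mv -v /tmp/index.html /usr/share/nginx/html/ || true" // restore readiness
	}
	out, err := exec.Command("kubectl", "exec",
		"--namespace=e2e-tests-statefulset-vf9qw", pod,
		"--", "/bin/sh", "-c", cmd).CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	_ = setReady("ss-0", false)
	_ = setReady("ss-0", true)
}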
May 3 11:13:40.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:13:40.288: INFO: namespace: e2e-tests-statefulset-vf9qw, resource: bindings, ignored listing per whitelist May 3 11:13:40.313: INFO: namespace e2e-tests-statefulset-vf9qw deletion completed in 6.205806147s • [SLOW TEST:69.233 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:13:40.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-249339a2-8d2f-11ea-b78d-0242ac110017 STEP: Creating a pod to test consume configMaps May 3 11:13:40.531: INFO: Waiting up to 5m0s for pod "pod-configmaps-2498ebe8-8d2f-11ea-b78d-0242ac110017" in namespace "e2e-tests-configmap-kwpr9" to be "success or failure" May 3 11:13:40.546: INFO: Pod "pod-configmaps-2498ebe8-8d2f-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 15.255156ms May 3 11:13:42.602: INFO: Pod "pod-configmaps-2498ebe8-8d2f-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07152344s May 3 11:13:44.607: INFO: Pod "pod-configmaps-2498ebe8-8d2f-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076012306s STEP: Saw pod success May 3 11:13:44.607: INFO: Pod "pod-configmaps-2498ebe8-8d2f-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:13:44.610: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-2498ebe8-8d2f-11ea-b78d-0242ac110017 container configmap-volume-test: STEP: delete the pod May 3 11:13:44.652: INFO: Waiting for pod pod-configmaps-2498ebe8-8d2f-11ea-b78d-0242ac110017 to disappear May 3 11:13:44.657: INFO: Pod pod-configmaps-2498ebe8-8d2f-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:13:44.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-kwpr9" for this suite. 
May 3 11:13:52.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:13:53.153: INFO: namespace: e2e-tests-configmap-kwpr9, resource: bindings, ignored listing per whitelist May 3 11:13:53.154: INFO: namespace e2e-tests-configmap-kwpr9 deletion completed in 8.493183844s • [SLOW TEST:12.840 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:13:53.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-2c372dc6-8d2f-11ea-b78d-0242ac110017 STEP: Creating configMap with name cm-test-opt-upd-2c372e0c-8d2f-11ea-b78d-0242ac110017 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-2c372dc6-8d2f-11ea-b78d-0242ac110017 STEP: Updating configmap cm-test-opt-upd-2c372e0c-8d2f-11ea-b78d-0242ac110017 STEP: Creating configMap with name cm-test-opt-create-2c372e2b-8d2f-11ea-b78d-0242ac110017 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:15:05.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-2wgvv" for this suite. 
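The "optional updates" test above creates opt-del and opt-upd ConfigMaps, deletes one, updates the other, creates opt-create, and then waits for the pod's volume to reflect all of that. The detail that makes this work is that the configMap volume sources are marked optional, so a deleted ConfigMap does not wedge the pod. A sketch of such a volume, with an assumed name:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true // an optional source tolerates the referenced ConfigMap being absent
	vol := corev1.Volume{
		Name: "cm-volume-del",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del-example"},
				Optional:             &optional,
			},
		},
	}
	fmt.Printf("volume %s optional=%v\n", vol.Name, *vol.VolumeSource.ConfigMap.Optional)
}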
May 3 11:15:27.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:15:28.011: INFO: namespace: e2e-tests-configmap-2wgvv, resource: bindings, ignored listing per whitelist May 3 11:15:28.035: INFO: namespace e2e-tests-configmap-2wgvv deletion completed in 22.108813614s • [SLOW TEST:94.881 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:15:28.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:15:32.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-9mm65" for this suite. 
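The Kubelet test above leaves almost nothing in the transcript between [It] and [AfterEach] because its check is simply that a busybox container's echoed command output shows up in the container log. A rough sketch of the container shape being exercised, with assumed names and message since the log does not show the spec:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Assumed values: a busybox container whose command echoes a known string;
	// the test then reads the pod's log (the equivalent of `kubectl logs <pod>`)
	// and asserts the echoed line is what comes back.
	c := corev1.Container{
		Name:    "busybox-scheduling",
		Image:   "busybox",
		Command: []string{"/bin/sh", "-c", "echo 'Hello from the busybox subcommand'"},
	}
	fmt.Println(c.Name, c.Command)
}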
May 3 11:16:10.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:16:10.272: INFO: namespace: e2e-tests-kubelet-test-9mm65, resource: bindings, ignored listing per whitelist May 3 11:16:10.272: INFO: namespace e2e-tests-kubelet-test-9mm65 deletion completed in 38.098365629s • [SLOW TEST:42.237 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:16:10.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs May 3 11:16:10.421: INFO: Waiting up to 5m0s for pod "pod-7de8adf5-8d2f-11ea-b78d-0242ac110017" in namespace "e2e-tests-emptydir-2tg89" to be "success or failure" May 3 11:16:10.430: INFO: Pod "pod-7de8adf5-8d2f-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 8.689912ms May 3 11:16:12.434: INFO: Pod "pod-7de8adf5-8d2f-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012926388s May 3 11:16:14.438: INFO: Pod "pod-7de8adf5-8d2f-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016994398s STEP: Saw pod success May 3 11:16:14.438: INFO: Pod "pod-7de8adf5-8d2f-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:16:14.441: INFO: Trying to get logs from node hunter-worker2 pod pod-7de8adf5-8d2f-11ea-b78d-0242ac110017 container test-container: STEP: delete the pod May 3 11:16:14.464: INFO: Waiting for pod pod-7de8adf5-8d2f-11ea-b78d-0242ac110017 to disappear May 3 11:16:14.468: INFO: Pod pod-7de8adf5-8d2f-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:16:14.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-2tg89" for this suite. 
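In the EmptyDir test name above, "(root,0644,tmpfs)" encodes the scenario: a file written as root with mode 0644 into a tmpfs-backed emptyDir. The tmpfs part corresponds to an emptyDir volume with the Memory medium, sketched here with the corev1 types:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Medium: Memory backs the emptyDir with tmpfs; the 0644/root part is what
	// the test container writes into the mounted volume and then reads back.
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
		},
	}
	fmt.Printf("emptyDir medium: %q\n", vol.VolumeSource.EmptyDir.Medium)
}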
May 3 11:16:20.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:16:20.732: INFO: namespace: e2e-tests-emptydir-2tg89, resource: bindings, ignored listing per whitelist May 3 11:16:20.772: INFO: namespace e2e-tests-emptydir-2tg89 deletion completed in 6.300931319s • [SLOW TEST:10.500 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:16:20.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-b5hbf May 3 11:16:24.938: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-b5hbf STEP: checking the pod's current state and verifying that restartCount is present May 3 11:16:24.940: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:20:26.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-b5hbf" for this suite. 
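Unlike the earlier liveness test that expects a restart, the probe test above passes by observing restartCount stay at 0 for the whole watch window (hence the ~250 second runtime). The transcript does not show the probe itself; a sketch of its general shape against the v1.13-era API this log comes from (where the field is named Handler; later releases renamed it ProbeHandler), with assumed port and thresholds:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Port and thresholds are assumed for illustration.
	probe := &corev1.Probe{
		Handler: corev1.Handler{ // v1.13-era field name
			HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
		},
		InitialDelaySeconds: 15,
		FailureThreshold:    1,
	}
	fmt.Printf("liveness: GET %s on port %v\n", probe.Handler.HTTPGet.Path, probe.Handler.HTTPGet.Port)
}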
May 3 11:20:32.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:20:32.467: INFO: namespace: e2e-tests-container-probe-b5hbf, resource: bindings, ignored listing per whitelist May 3 11:20:32.546: INFO: namespace e2e-tests-container-probe-b5hbf deletion completed in 6.138360163s • [SLOW TEST:251.773 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:20:32.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-1a4338f9-8d30-11ea-b78d-0242ac110017 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:20:40.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-mdtnc" for this suite. 
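The binary-data test above only records the ConfigMap's name and the two wait steps ("pod with text data", "pod with binary data"). The API detail it exercises is that non-UTF-8 payloads go into a ConfigMap's binaryData field alongside ordinary data keys; a sketch with assumed key names and bytes:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Key names and bytes are assumed; the transcript only shows the ConfigMap's name.
	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd-example"},
		Data:       map[string]string{"data-1": "value-1"},            // plain text keys
		BinaryData: map[string][]byte{"dump.bin": {0xde, 0xca, 0xfe}}, // arbitrary byte payloads
	}
	fmt.Println(cm.Name, len(cm.Data), len(cm.BinaryData))
}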
May 3 11:21:02.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:21:02.743: INFO: namespace: e2e-tests-configmap-mdtnc, resource: bindings, ignored listing per whitelist May 3 11:21:02.839: INFO: namespace e2e-tests-configmap-mdtnc deletion completed in 22.127211842s • [SLOW TEST:30.293 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:21:02.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:21:07.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-97lv2" for this suite. 
May 3 11:21:13.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:21:13.654: INFO: namespace: e2e-tests-emptydir-wrapper-97lv2, resource: bindings, ignored listing per whitelist May 3 11:21:13.701: INFO: namespace e2e-tests-emptydir-wrapper-97lv2 deletion completed in 6.429320283s • [SLOW TEST:10.862 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:21:13.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 3 11:21:14.243: INFO: Waiting up to 5m0s for pod "downward-api-32e9be76-8d30-11ea-b78d-0242ac110017" in namespace "e2e-tests-downward-api-ccjlq" to be "success or failure" May 3 11:21:14.424: INFO: Pod "downward-api-32e9be76-8d30-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 180.268036ms May 3 11:21:16.427: INFO: Pod "downward-api-32e9be76-8d30-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183858713s May 3 11:21:18.431: INFO: Pod "downward-api-32e9be76-8d30-11ea-b78d-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.187781327s May 3 11:21:20.436: INFO: Pod "downward-api-32e9be76-8d30-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.192223981s STEP: Saw pod success May 3 11:21:20.436: INFO: Pod "downward-api-32e9be76-8d30-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:21:20.439: INFO: Trying to get logs from node hunter-worker2 pod downward-api-32e9be76-8d30-11ea-b78d-0242ac110017 container dapi-container: STEP: delete the pod May 3 11:21:20.465: INFO: Waiting for pod downward-api-32e9be76-8d30-11ea-b78d-0242ac110017 to disappear May 3 11:21:20.468: INFO: Pod downward-api-32e9be76-8d30-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:21:20.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-ccjlq" for this suite. 
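The Downward API test above relies on the fact that resourceFieldRef falls back to the node's allocatable capacity when the container declares no limits. A minimal sketch (pod name and image are illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-defaults-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi
        image: busybox:1.29
        command: ["sh", "-c", "env"]
        # no resources section, so the limits below default to node allocatable
        env:
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        - name: MEMORY_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.memory
    EOF
    kubectl logs downward-defaults-demo | grep -E 'CPU_LIMIT|MEMORY_LIMIT'

The printed values should match the whole node's allocatable CPU and memory rather than any per-container setting.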
May 3 11:21:26.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:21:26.493: INFO: namespace: e2e-tests-downward-api-ccjlq, resource: bindings, ignored listing per whitelist May 3 11:21:26.586: INFO: namespace e2e-tests-downward-api-ccjlq deletion completed in 6.115278519s • [SLOW TEST:12.885 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:21:26.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 3 11:21:27.300: INFO: Pod name wrapped-volume-race-3ac81dd7-8d30-11ea-b78d-0242ac110017: Found 0 pods out of 5 May 3 11:21:32.309: INFO: Pod name wrapped-volume-race-3ac81dd7-8d30-11ea-b78d-0242ac110017: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-3ac81dd7-8d30-11ea-b78d-0242ac110017 in namespace e2e-tests-emptydir-wrapper-gdkm6, will wait for the garbage collector to delete the pods May 3 11:24:14.395: INFO: Deleting ReplicationController wrapped-volume-race-3ac81dd7-8d30-11ea-b78d-0242ac110017 took: 7.464072ms May 3 11:24:14.596: INFO: Terminating ReplicationController wrapped-volume-race-3ac81dd7-8d30-11ea-b78d-0242ac110017 pods took: 200.271466ms STEP: Creating RC which spawns configmap-volume pods May 3 11:24:51.865: INFO: Pod name wrapped-volume-race-b4b58273-8d30-11ea-b78d-0242ac110017: Found 0 pods out of 5 May 3 11:24:56.871: INFO: Pod name wrapped-volume-race-b4b58273-8d30-11ea-b78d-0242ac110017: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-b4b58273-8d30-11ea-b78d-0242ac110017 in namespace e2e-tests-emptydir-wrapper-gdkm6, will wait for the garbage collector to delete the pods May 3 11:27:32.974: INFO: Deleting ReplicationController wrapped-volume-race-b4b58273-8d30-11ea-b78d-0242ac110017 took: 7.249663ms May 3 11:27:33.174: INFO: Terminating ReplicationController wrapped-volume-race-b4b58273-8d30-11ea-b78d-0242ac110017 pods took: 200.231262ms STEP: Creating RC which spawns configmap-volume pods May 3 11:28:11.836: INFO: Pod name wrapped-volume-race-2be7d03e-8d31-11ea-b78d-0242ac110017: Found 0 pods out of 5 May 3 11:28:16.843: INFO: Pod name wrapped-volume-race-2be7d03e-8d31-11ea-b78d-0242ac110017: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController 
wrapped-volume-race-2be7d03e-8d31-11ea-b78d-0242ac110017 in namespace e2e-tests-emptydir-wrapper-gdkm6, will wait for the garbage collector to delete the pods May 3 11:30:28.980: INFO: Deleting ReplicationController wrapped-volume-race-2be7d03e-8d31-11ea-b78d-0242ac110017 took: 7.995784ms May 3 11:30:29.081: INFO: Terminating ReplicationController wrapped-volume-race-2be7d03e-8d31-11ea-b78d-0242ac110017 pods took: 100.220689ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:31:12.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-gdkm6" for this suite. May 3 11:31:20.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:31:20.513: INFO: namespace: e2e-tests-emptydir-wrapper-gdkm6, resource: bindings, ignored listing per whitelist May 3 11:31:20.569: INFO: namespace e2e-tests-emptydir-wrapper-gdkm6 deletion completed in 8.09080967s • [SLOW TEST:593.983 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:31:20.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-xmsnm STEP: creating a selector STEP: Creating the service pods in kubernetes May 3 11:31:20.662: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 3 11:31:44.817: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.29 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-xmsnm PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 3 11:31:44.817: INFO: >>> kubeConfig: /root/.kube/config I0503 11:31:44.849859 6 log.go:172] (0xc000da0580) (0xc0015e2aa0) Create stream I0503 11:31:44.849888 6 log.go:172] (0xc000da0580) (0xc0015e2aa0) Stream added, broadcasting: 1 I0503 11:31:44.851932 6 log.go:172] (0xc000da0580) Reply frame received for 1 I0503 11:31:44.851959 6 log.go:172] (0xc000da0580) (0xc0015e2b40) Create stream I0503 11:31:44.851968 6 log.go:172] (0xc000da0580) (0xc0015e2b40) Stream added, broadcasting: 3 I0503 11:31:44.852824 6 log.go:172] (0xc000da0580) Reply frame received for 3 I0503 11:31:44.852847 6 log.go:172] (0xc000da0580) (0xc001c520a0) Create stream I0503 11:31:44.852864 6 
log.go:172] (0xc000da0580) (0xc001c520a0) Stream added, broadcasting: 5 I0503 11:31:44.853869 6 log.go:172] (0xc000da0580) Reply frame received for 5 I0503 11:31:45.934733 6 log.go:172] (0xc000da0580) Data frame received for 3 I0503 11:31:45.934760 6 log.go:172] (0xc0015e2b40) (3) Data frame handling I0503 11:31:45.934777 6 log.go:172] (0xc0015e2b40) (3) Data frame sent I0503 11:31:45.934781 6 log.go:172] (0xc000da0580) Data frame received for 3 I0503 11:31:45.934785 6 log.go:172] (0xc0015e2b40) (3) Data frame handling I0503 11:31:45.935042 6 log.go:172] (0xc000da0580) Data frame received for 5 I0503 11:31:45.935109 6 log.go:172] (0xc001c520a0) (5) Data frame handling I0503 11:31:45.937752 6 log.go:172] (0xc000da0580) Data frame received for 1 I0503 11:31:45.937777 6 log.go:172] (0xc0015e2aa0) (1) Data frame handling I0503 11:31:45.937796 6 log.go:172] (0xc0015e2aa0) (1) Data frame sent I0503 11:31:45.937814 6 log.go:172] (0xc000da0580) (0xc0015e2aa0) Stream removed, broadcasting: 1 I0503 11:31:45.937831 6 log.go:172] (0xc000da0580) Go away received I0503 11:31:45.937991 6 log.go:172] (0xc000da0580) (0xc0015e2aa0) Stream removed, broadcasting: 1 I0503 11:31:45.938035 6 log.go:172] (0xc000da0580) (0xc0015e2b40) Stream removed, broadcasting: 3 I0503 11:31:45.938050 6 log.go:172] (0xc000da0580) (0xc001c520a0) Stream removed, broadcasting: 5 May 3 11:31:45.938: INFO: Found all expected endpoints: [netserver-0] May 3 11:31:45.942: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.40 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-xmsnm PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 3 11:31:45.942: INFO: >>> kubeConfig: /root/.kube/config I0503 11:31:45.978122 6 log.go:172] (0xc0000eaf20) (0xc001ac0a00) Create stream I0503 11:31:45.978156 6 log.go:172] (0xc0000eaf20) (0xc001ac0a00) Stream added, broadcasting: 1 I0503 11:31:45.980522 6 log.go:172] (0xc0000eaf20) Reply frame received for 1 I0503 11:31:45.980567 6 log.go:172] (0xc0000eaf20) (0xc0015e2be0) Create stream I0503 11:31:45.980580 6 log.go:172] (0xc0000eaf20) (0xc0015e2be0) Stream added, broadcasting: 3 I0503 11:31:45.981926 6 log.go:172] (0xc0000eaf20) Reply frame received for 3 I0503 11:31:45.981985 6 log.go:172] (0xc0000eaf20) (0xc001ac0aa0) Create stream I0503 11:31:45.981999 6 log.go:172] (0xc0000eaf20) (0xc001ac0aa0) Stream added, broadcasting: 5 I0503 11:31:45.983182 6 log.go:172] (0xc0000eaf20) Reply frame received for 5 I0503 11:31:47.065010 6 log.go:172] (0xc0000eaf20) Data frame received for 3 I0503 11:31:47.065042 6 log.go:172] (0xc0015e2be0) (3) Data frame handling I0503 11:31:47.065059 6 log.go:172] (0xc0015e2be0) (3) Data frame sent I0503 11:31:47.065070 6 log.go:172] (0xc0000eaf20) Data frame received for 3 I0503 11:31:47.065098 6 log.go:172] (0xc0015e2be0) (3) Data frame handling I0503 11:31:47.065398 6 log.go:172] (0xc0000eaf20) Data frame received for 5 I0503 11:31:47.065419 6 log.go:172] (0xc001ac0aa0) (5) Data frame handling I0503 11:31:47.067249 6 log.go:172] (0xc0000eaf20) Data frame received for 1 I0503 11:31:47.067299 6 log.go:172] (0xc001ac0a00) (1) Data frame handling I0503 11:31:47.067336 6 log.go:172] (0xc001ac0a00) (1) Data frame sent I0503 11:31:47.067382 6 log.go:172] (0xc0000eaf20) (0xc001ac0a00) Stream removed, broadcasting: 1 I0503 11:31:47.067533 6 log.go:172] (0xc0000eaf20) Go away received I0503 11:31:47.067576 6 log.go:172] (0xc0000eaf20) (0xc001ac0a00) Stream removed, 
broadcasting: 1 I0503 11:31:47.067630 6 log.go:172] (0xc0000eaf20) (0xc0015e2be0) Stream removed, broadcasting: 3 I0503 11:31:47.067647 6 log.go:172] (0xc0000eaf20) (0xc001ac0aa0) Stream removed, broadcasting: 5 May 3 11:31:47.067: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:31:47.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-xmsnm" for this suite. May 3 11:32:11.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:32:11.174: INFO: namespace: e2e-tests-pod-network-test-xmsnm, resource: bindings, ignored listing per whitelist May 3 11:32:11.206: INFO: namespace e2e-tests-pod-network-test-xmsnm deletion completed in 24.133522956s • [SLOW TEST:50.636 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:32:11.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 3 11:32:11.321: INFO: Waiting up to 5m0s for pod "downwardapi-volume-baaeba48-8d31-11ea-b78d-0242ac110017" in namespace "e2e-tests-downward-api-shh7n" to be "success or failure" May 3 11:32:11.323: INFO: Pod "downwardapi-volume-baaeba48-8d31-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.358019ms May 3 11:32:13.328: INFO: Pod "downwardapi-volume-baaeba48-8d31-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006838283s May 3 11:32:15.332: INFO: Pod "downwardapi-volume-baaeba48-8d31-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011013674s STEP: Saw pod success May 3 11:32:15.332: INFO: Pod "downwardapi-volume-baaeba48-8d31-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:32:15.335: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-baaeba48-8d31-11ea-b78d-0242ac110017 container client-container: STEP: delete the pod May 3 11:32:15.428: INFO: Waiting for pod downwardapi-volume-baaeba48-8d31-11ea-b78d-0242ac110017 to disappear May 3 11:32:15.440: INFO: Pod downwardapi-volume-baaeba48-8d31-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:32:15.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-shh7n" for this suite. May 3 11:32:21.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:32:21.505: INFO: namespace: e2e-tests-downward-api-shh7n, resource: bindings, ignored listing per whitelist May 3 11:32:21.532: INFO: namespace e2e-tests-downward-api-shh7n deletion completed in 6.089071671s • [SLOW TEST:10.326 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:32:21.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-rmr9 STEP: Creating a pod to test atomic-volume-subpath May 3 11:32:21.738: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-rmr9" in namespace "e2e-tests-subpath-8dzqk" to be "success or failure" May 3 11:32:21.745: INFO: Pod "pod-subpath-test-configmap-rmr9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.28144ms May 3 11:32:23.750: INFO: Pod "pod-subpath-test-configmap-rmr9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011776283s May 3 11:32:25.753: INFO: Pod "pod-subpath-test-configmap-rmr9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015258653s May 3 11:32:27.758: INFO: Pod "pod-subpath-test-configmap-rmr9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019877168s May 3 11:32:29.763: INFO: Pod "pod-subpath-test-configmap-rmr9": Phase="Running", Reason="", readiness=false. Elapsed: 8.025002145s May 3 11:32:31.768: INFO: Pod "pod-subpath-test-configmap-rmr9": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.029885632s May 3 11:32:33.773: INFO: Pod "pod-subpath-test-configmap-rmr9": Phase="Running", Reason="", readiness=false. Elapsed: 12.034945526s May 3 11:32:35.776: INFO: Pod "pod-subpath-test-configmap-rmr9": Phase="Running", Reason="", readiness=false. Elapsed: 14.038217133s May 3 11:32:37.779: INFO: Pod "pod-subpath-test-configmap-rmr9": Phase="Running", Reason="", readiness=false. Elapsed: 16.041227285s May 3 11:32:39.784: INFO: Pod "pod-subpath-test-configmap-rmr9": Phase="Running", Reason="", readiness=false. Elapsed: 18.045927763s May 3 11:32:41.788: INFO: Pod "pod-subpath-test-configmap-rmr9": Phase="Running", Reason="", readiness=false. Elapsed: 20.050123179s May 3 11:32:43.793: INFO: Pod "pod-subpath-test-configmap-rmr9": Phase="Running", Reason="", readiness=false. Elapsed: 22.054447551s May 3 11:32:45.804: INFO: Pod "pod-subpath-test-configmap-rmr9": Phase="Running", Reason="", readiness=false. Elapsed: 24.065942654s May 3 11:32:47.808: INFO: Pod "pod-subpath-test-configmap-rmr9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.069397441s STEP: Saw pod success May 3 11:32:47.808: INFO: Pod "pod-subpath-test-configmap-rmr9" satisfied condition "success or failure" May 3 11:32:47.810: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-rmr9 container test-container-subpath-configmap-rmr9: STEP: delete the pod May 3 11:32:47.859: INFO: Waiting for pod pod-subpath-test-configmap-rmr9 to disappear May 3 11:32:47.866: INFO: Pod pod-subpath-test-configmap-rmr9 no longer exists STEP: Deleting pod pod-subpath-test-configmap-rmr9 May 3 11:32:47.866: INFO: Deleting pod "pod-subpath-test-configmap-rmr9" in namespace "e2e-tests-subpath-8dzqk" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:32:47.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-8dzqk" for this suite. 
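The subpath test above mounts a single ConfigMap key at a file path using subPath rather than mounting the whole volume directory. A compact sketch with illustrative names:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: subpath-demo-cm
    data:
      config.txt: "hello from subPath"
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: subpath-demo-pod
    spec:
      restartPolicy: Never
      containers:
      - name: reader
        image: busybox:1.29
        command: ["sh", "-c", "cat /etc/app/config.txt"]
        volumeMounts:
        - name: cm
          mountPath: /etc/app/config.txt
          subPath: config.txt        # project just this key as a file
      volumes:
      - name: cm
        configMap:
          name: subpath-demo-cm
    EOF

Note that, unlike a plain volume mount, a file mounted via subPath does not receive live updates when the ConfigMap later changes; the sketch only demonstrates the initial projection.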
May 3 11:32:53.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:32:53.918: INFO: namespace: e2e-tests-subpath-8dzqk, resource: bindings, ignored listing per whitelist May 3 11:32:53.962: INFO: namespace e2e-tests-subpath-8dzqk deletion completed in 6.091340269s • [SLOW TEST:32.430 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:32:53.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 3 11:32:58.641: INFO: Successfully updated pod "labelsupdated42a45c4-8d31-11ea-b78d-0242ac110017" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:33:02.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-llzff" for this suite. 
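The labels-on-modification test above depends on the downwardAPI volume re-projecting metadata.labels after the pod object changes. A sketch of the same mechanism (names are illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: labels-demo
      labels:
        tier: initial
    spec:
      containers:
      - name: watcher
        image: busybox:1.29
        command: ["sh", "-c", "while true; do cat /podinfo/labels; echo; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
    EOF
    kubectl label pod labels-demo tier=updated --overwrite

After the kubelet's next sync the mounted file, and therefore the container log, should show tier="updated".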
May 3 11:33:16.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:33:16.788: INFO: namespace: e2e-tests-downward-api-llzff, resource: bindings, ignored listing per whitelist May 3 11:33:16.843: INFO: namespace e2e-tests-downward-api-llzff deletion completed in 14.145637371s • [SLOW TEST:22.881 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:33:16.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 3 11:33:16.979: INFO: Waiting up to 5m0s for pod "downward-api-e1cfb7cc-8d31-11ea-b78d-0242ac110017" in namespace "e2e-tests-downward-api-cz7jp" to be "success or failure" May 3 11:33:16.983: INFO: Pod "downward-api-e1cfb7cc-8d31-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040047ms May 3 11:33:19.092: INFO: Pod "downward-api-e1cfb7cc-8d31-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112965319s May 3 11:33:21.097: INFO: Pod "downward-api-e1cfb7cc-8d31-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.117952522s STEP: Saw pod success May 3 11:33:21.097: INFO: Pod "downward-api-e1cfb7cc-8d31-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:33:21.100: INFO: Trying to get logs from node hunter-worker2 pod downward-api-e1cfb7cc-8d31-11ea-b78d-0242ac110017 container dapi-container: STEP: delete the pod May 3 11:33:21.183: INFO: Waiting for pod downward-api-e1cfb7cc-8d31-11ea-b78d-0242ac110017 to disappear May 3 11:33:21.186: INFO: Pod downward-api-e1cfb7cc-8d31-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:33:21.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-cz7jp" for this suite. 
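The env-var variant shown above exposes explicitly declared requests and limits instead of node defaults. A sketch, with divisors added so the values print in convenient units (all names and quantities are illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-env-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi
        image: busybox:1.29
        command: ["sh", "-c", "env | grep -E '^(CPU|MEM)_'"]
        resources:
          requests: {cpu: 250m, memory: 32Mi}
          limits:   {cpu: 500m, memory: 64Mi}
        env:
        - name: CPU_REQUEST
          valueFrom: {resourceFieldRef: {resource: requests.cpu, divisor: 1m}}
        - name: CPU_LIMIT
          valueFrom: {resourceFieldRef: {resource: limits.cpu, divisor: 1m}}
        - name: MEM_REQUEST
          valueFrom: {resourceFieldRef: {resource: requests.memory, divisor: 1Mi}}
        - name: MEM_LIMIT
          valueFrom: {resourceFieldRef: {resource: limits.memory, divisor: 1Mi}}
    EOF

With those divisors the log should read CPU_REQUEST=250, CPU_LIMIT=500, MEM_REQUEST=32 and MEM_LIMIT=64; without a divisor, CPU values are rounded up to whole cores.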
May 3 11:33:27.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:33:27.251: INFO: namespace: e2e-tests-downward-api-cz7jp, resource: bindings, ignored listing per whitelist May 3 11:33:27.295: INFO: namespace e2e-tests-downward-api-cz7jp deletion completed in 6.105696042s • [SLOW TEST:10.451 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:33:27.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 3 11:33:27.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-b7rgw' May 3 11:33:29.705: INFO: stderr: "" May 3 11:33:29.706: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created May 3 11:33:34.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-b7rgw -o json' May 3 11:33:34.859: INFO: stderr: "" May 3 11:33:34.859: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-03T11:33:29Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-b7rgw\",\n \"resourceVersion\": \"8524774\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-b7rgw/pods/e2e-test-nginx-pod\",\n \"uid\": \"e965f07b-8d31-11ea-99e8-0242ac110002\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-czh99\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker2\",\n \"priority\": 0,\n 
\"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-czh99\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-czh99\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-03T11:33:29Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-03T11:33:33Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-03T11:33:33Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-03T11:33:29Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://b3fb54b3451d93c76bc1aad64e81d523ec1f5f00b454dca8bfe940d020b6689d\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-03T11:33:32Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.4\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.32\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-03T11:33:29Z\"\n }\n}\n" STEP: replace the image in the pod May 3 11:33:34.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-b7rgw' May 3 11:33:35.109: INFO: stderr: "" May 3 11:33:35.109: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 May 3 11:33:35.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-b7rgw' May 3 11:33:41.762: INFO: stderr: "" May 3 11:33:41.762: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:33:41.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-b7rgw" for this suite. 
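The kubectl replace test above follows a simple create/fetch/replace loop. Against the v1.13-era kubectl used in this run it can be reproduced roughly as follows (the --generator flag has since been removed from newer kubectl releases, and the sed-based image swap is just an illustrative stand-in for the test's in-memory edit):

    kubectl run e2e-test-nginx-pod --generator=run-pod/v1 \
      --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod
    # fetch the live object, swap the container image, and push it back with replace
    kubectl get pod e2e-test-nginx-pod -o json \
      | sed 's|docker.io/library/nginx:1.14-alpine|docker.io/library/busybox:1.29|' \
      | kubectl replace -f -
    kubectl get pod e2e-test-nginx-pod -o jsonpath='{.spec.containers[0].image}'

Replacing works here because spec.containers[*].image is one of the few pod fields that may be mutated on a live object.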
May 3 11:33:47.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:33:47.872: INFO: namespace: e2e-tests-kubectl-b7rgw, resource: bindings, ignored listing per whitelist May 3 11:33:47.921: INFO: namespace e2e-tests-kubectl-b7rgw deletion completed in 6.133950886s • [SLOW TEST:20.626 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:33:47.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 3 11:33:48.015: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:33:52.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-twqmv" for this suite. 
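The websocket test above exercises the pod log subresource. The e2e framework negotiates a websocket against that endpoint; a simpler way to poke the same URL by hand is through kubectl proxy with a plain HTTP GET (namespace default and pod name my-pod are placeholders):

    kubectl proxy --port=8001 &
    sleep 2
    curl "http://127.0.0.1:8001/api/v1/namespaces/default/pods/my-pod/log?follow=false"
    kill $!

The response body is the same container log stream that the conformance test reads over its websocket connection.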
May 3 11:34:32.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:34:32.151: INFO: namespace: e2e-tests-pods-twqmv, resource: bindings, ignored listing per whitelist May 3 11:34:32.192: INFO: namespace e2e-tests-pods-twqmv deletion completed in 40.091643491s • [SLOW TEST:44.271 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:34:32.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 3 11:34:32.280: INFO: Creating deployment "nginx-deployment" May 3 11:34:32.302: INFO: Waiting for observed generation 1 May 3 11:34:34.686: INFO: Waiting for all required pods to come up May 3 11:34:34.690: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running May 3 11:34:45.347: INFO: Waiting for deployment "nginx-deployment" to complete May 3 11:34:45.364: INFO: Updating deployment "nginx-deployment" with a non-existent image May 3 11:34:45.372: INFO: Updating deployment nginx-deployment May 3 11:34:45.372: INFO: Waiting for observed generation 2 May 3 11:34:47.413: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 3 11:34:47.416: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 3 11:34:47.418: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 3 11:34:47.474: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 3 11:34:47.474: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 3 11:34:47.476: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 3 11:34:47.479: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas May 3 11:34:47.479: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 May 3 11:34:47.483: INFO: Updating deployment nginx-deployment May 3 11:34:47.483: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas May 3 11:34:47.857: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 3 11:34:47.883: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 3 11:34:49.119: 
INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-g9grd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-g9grd/deployments/nginx-deployment,UID:0eb48511-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525155,Generation:3,CreationTimestamp:2020-05-03 11:34:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-05-03 11:34:46 +0000 UTC 2020-05-03 11:34:32 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-05-03 11:34:47 +0000 UTC 2020-05-03 11:34:47 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} May 3 11:34:49.398: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-g9grd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-g9grd/replicasets/nginx-deployment-5c98f8fb5,UID:1682586d-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525192,Generation:3,CreationTimestamp:2020-05-03 11:34:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 0eb48511-8d32-11ea-99e8-0242ac110002 0xc001b91567 0xc001b91568}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 3 11:34:49.398: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 3 11:34:49.399: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-g9grd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-g9grd/replicasets/nginx-deployment-85ddf47c5d,UID:0ebdbbe0-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525193,Generation:3,CreationTimestamp:2020-05-03 11:34:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 0eb48511-8d32-11ea-99e8-0242ac110002 0xc001b91787 0xc001b91788}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 3 11:34:49.552: INFO: Pod "nginx-deployment-5c98f8fb5-4d5gk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-4d5gk,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-5c98f8fb5-4d5gk,UID:168a0ce3-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525124,Generation:0,CreationTimestamp:2020-05-03 11:34:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 1682586d-8d32-11ea-99e8-0242ac110002 0xc00215d237 0xc00215d238}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00215d2b0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc00215d410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-03 11:34:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.552: INFO: Pod "nginx-deployment-5c98f8fb5-4rcnd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-4rcnd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-5c98f8fb5-4rcnd,UID:170396df-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525134,Generation:0,CreationTimestamp:2020-05-03 11:34:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 1682586d-8d32-11ea-99e8-0242ac110002 0xc00215d550 0xc00215d551}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00215d690} {node.kubernetes.io/unreachable Exists NoExecute 0xc00215d6b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:46 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-03 11:34:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.553: INFO: Pod "nginx-deployment-5c98f8fb5-4rf9m" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-4rf9m,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-5c98f8fb5-4rf9m,UID:1687d62f-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525102,Generation:0,CreationTimestamp:2020-05-03 11:34:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 1682586d-8d32-11ea-99e8-0242ac110002 0xc00215d860 0xc00215d861}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00215d8e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00215d900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-03 11:34:45 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.553: INFO: Pod "nginx-deployment-5c98f8fb5-56k7k" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-56k7k,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-5c98f8fb5-56k7k,UID:1808fa14-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525170,Generation:0,CreationTimestamp:2020-05-03 11:34:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 1682586d-8d32-11ea-99e8-0242ac110002 0xc00215dcd0 0xc00215dcd1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00215dd50} {node.kubernetes.io/unreachable Exists NoExecute 0xc00215dd70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:48 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.553: INFO: Pod "nginx-deployment-5c98f8fb5-5vg72" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5vg72,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-5c98f8fb5-5vg72,UID:168a0b46-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525109,Generation:0,CreationTimestamp:2020-05-03 11:34:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 1682586d-8d32-11ea-99e8-0242ac110002 0xc0021b0027 0xc0021b0028}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] 
{map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021b0670} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021b1d90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-03 11:34:45 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.553: INFO: Pod "nginx-deployment-5c98f8fb5-5xpdh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5xpdh,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-5c98f8fb5-5xpdh,UID:18493bb9-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525179,Generation:0,CreationTimestamp:2020-05-03 11:34:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 1682586d-8d32-11ea-99e8-0242ac110002 0xc0021b1e50 0xc0021b1e51}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021cc120} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021cc280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:48 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.554: INFO: Pod "nginx-deployment-5c98f8fb5-6whcf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-6whcf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-5c98f8fb5-6whcf,UID:18094cbe-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525175,Generation:0,CreationTimestamp:2020-05-03 11:34:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 1682586d-8d32-11ea-99e8-0242ac110002 0xc0021cc2f7 0xc0021cc2f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021cc570} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021cc590}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:48 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.554: INFO: Pod "nginx-deployment-5c98f8fb5-9j9hl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9j9hl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-5c98f8fb5-9j9hl,UID:1849366d-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525188,Generation:0,CreationTimestamp:2020-05-03 11:34:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 1682586d-8d32-11ea-99e8-0242ac110002 0xc0021cc607 0xc0021cc608}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021cc760} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021cc7e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:48 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.554: INFO: Pod "nginx-deployment-5c98f8fb5-lppjq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-lppjq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-5c98f8fb5-lppjq,UID:1870ea1e-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525190,Generation:0,CreationTimestamp:2020-05-03 11:34:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 1682586d-8d32-11ea-99e8-0242ac110002 0xc0021cc857 0xc0021cc858}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021cc990} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021cc9b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:49 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.554: INFO: Pod "nginx-deployment-5c98f8fb5-p92wv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-p92wv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-5c98f8fb5-p92wv,UID:18014f5e-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525196,Generation:0,CreationTimestamp:2020-05-03 11:34:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 1682586d-8d32-11ea-99e8-0242ac110002 0xc0021cca27 0xc0021cca28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021ccb10} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021ccb30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:47 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-03 11:34:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.554: INFO: Pod "nginx-deployment-5c98f8fb5-sdt9m" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-sdt9m,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-5c98f8fb5-sdt9m,UID:18493ada-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525187,Generation:0,CreationTimestamp:2020-05-03 11:34:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 1682586d-8d32-11ea-99e8-0242ac110002 0xc0021ccbf0 0xc0021ccbf1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021ccc70} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021ccc90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:48 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.554: INFO: Pod "nginx-deployment-5c98f8fb5-smbct" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-smbct,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-5c98f8fb5-smbct,UID:18494c3e-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525186,Generation:0,CreationTimestamp:2020-05-03 11:34:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 1682586d-8d32-11ea-99e8-0242ac110002 0xc0021ccd07 0xc0021ccd08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021ccd80} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021ccda0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:48 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.554: INFO: Pod "nginx-deployment-5c98f8fb5-wxkq7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wxkq7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-5c98f8fb5-wxkq7,UID:16f3a9a6-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525128,Generation:0,CreationTimestamp:2020-05-03 11:34:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 1682586d-8d32-11ea-99e8-0242ac110002 0xc0021cce17 0xc0021cce18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021cce90} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021cceb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-03 11:34:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.555: INFO: Pod "nginx-deployment-85ddf47c5d-8fqj8" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8fqj8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-85ddf47c5d-8fqj8,UID:0ec4e5c8-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525058,Generation:0,CreationTimestamp:2020-05-03 11:34:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0ebdbbe0-8d32-11ea-99e8-0242ac110002 0xc0021ccf70 0xc0021ccf71}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021ccfe0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021cd000}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:32 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.36,StartTime:2020-05-03 11:34:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-03 11:34:41 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5af55d03a7686136bd6f46a8261a4ec154e26f65fa3344bcc855563a158297a1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.555: INFO: Pod "nginx-deployment-85ddf47c5d-8scn9" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8scn9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-85ddf47c5d-8scn9,UID:0ec24bfd-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525039,Generation:0,CreationTimestamp:2020-05-03 
11:34:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0ebdbbe0-8d32-11ea-99e8-0242ac110002 0xc0021cd0c7 0xc0021cd0c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021cd140} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021cd160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:32 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.44,StartTime:2020-05-03 11:34:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-03 11:34:40 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9fd92cbae99261db31e20203035ae5ebb3397cf0c0f95f2d90652898a14024f7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.555: INFO: Pod "nginx-deployment-85ddf47c5d-bb5lx" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bb5lx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-85ddf47c5d-bb5lx,UID:0ec97227-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525064,Generation:0,CreationTimestamp:2020-05-03 11:34:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0ebdbbe0-8d32-11ea-99e8-0242ac110002 0xc0021cd227 
0xc0021cd228}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021cd2a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021cd2c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:32 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.38,StartTime:2020-05-03 11:34:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-03 11:34:43 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://661fd8677572141eea16b64f83dfdbba48e1f9a518d4c3040e5e37831a43eaae}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.555: INFO: Pod "nginx-deployment-85ddf47c5d-bqh74" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bqh74,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-85ddf47c5d-bqh74,UID:18494c8f-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525182,Generation:0,CreationTimestamp:2020-05-03 11:34:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0ebdbbe0-8d32-11ea-99e8-0242ac110002 0xc0021cd387 0xc0021cd388}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] 
map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021cd400} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021cd420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:48 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.555: INFO: Pod "nginx-deployment-85ddf47c5d-brj96" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-brj96,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-85ddf47c5d-brj96,UID:184939ac-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525181,Generation:0,CreationTimestamp:2020-05-03 11:34:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0ebdbbe0-8d32-11ea-99e8-0242ac110002 0xc0021cd497 0xc0021cd498}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021cd510} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0021cd530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:48 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.555: INFO: Pod "nginx-deployment-85ddf47c5d-j4ztm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-j4ztm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-85ddf47c5d-j4ztm,UID:1801677e-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525151,Generation:0,CreationTimestamp:2020-05-03 11:34:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0ebdbbe0-8d32-11ea-99e8-0242ac110002 0xc0021cd5a7 0xc0021cd5a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021cd620} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021cd640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.556: INFO: Pod "nginx-deployment-85ddf47c5d-kx98f" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kx98f,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-85ddf47c5d-kx98f,UID:1849417d-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525184,Generation:0,CreationTimestamp:2020-05-03 11:34:48 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0ebdbbe0-8d32-11ea-99e8-0242ac110002 0xc0021cd6b7 0xc0021cd6b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021cd730} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021cd750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:48 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.556: INFO: Pod "nginx-deployment-85ddf47c5d-lfcsp" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lfcsp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-85ddf47c5d-lfcsp,UID:0ec1cc7e-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525017,Generation:0,CreationTimestamp:2020-05-03 11:34:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0ebdbbe0-8d32-11ea-99e8-0242ac110002 0xc0021cd7c7 0xc0021cd7c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021cd840} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021cd860}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:32 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.34,StartTime:2020-05-03 11:34:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-03 11:34:36 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://76d9144f3d2416032a830c7a5e1961a443d5c25057d540c5a02af81e463b8aa4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.556: INFO: Pod "nginx-deployment-85ddf47c5d-n9nlv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-n9nlv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-85ddf47c5d-n9nlv,UID:18095c1a-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525173,Generation:0,CreationTimestamp:2020-05-03 11:34:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0ebdbbe0-8d32-11ea-99e8-0242ac110002 0xc0021cd927 0xc0021cd928}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021cd9a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021cd9c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:48 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.556: INFO: Pod "nginx-deployment-85ddf47c5d-pzbbm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pzbbm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-85ddf47c5d-pzbbm,UID:18094031-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525171,Generation:0,CreationTimestamp:2020-05-03 11:34:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0ebdbbe0-8d32-11ea-99e8-0242ac110002 0xc0021cda87 0xc0021cda88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021cdb30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021cdb50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:48 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.556: INFO: Pod "nginx-deployment-85ddf47c5d-q6g4x" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-q6g4x,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-85ddf47c5d-q6g4x,UID:18096007-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525172,Generation:0,CreationTimestamp:2020-05-03 11:34:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0ebdbbe0-8d32-11ea-99e8-0242ac110002 0xc0021cdbf7 0xc0021cdbf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021cdc70} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021cdc90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:48 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.556: INFO: Pod "nginx-deployment-85ddf47c5d-q76b7" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-q76b7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-85ddf47c5d-q76b7,UID:0ec4e4f5-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525063,Generation:0,CreationTimestamp:2020-05-03 11:34:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0ebdbbe0-8d32-11ea-99e8-0242ac110002 0xc0021cdd07 
0xc0021cdd08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021cde10} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021cde30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:32 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.46,StartTime:2020-05-03 11:34:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-03 11:34:43 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://598ed798d55f8d1c2858c38d5e710863115a142eb4e407c308225c7d161bdc63}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.557: INFO: Pod "nginx-deployment-85ddf47c5d-qw8mx" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qw8mx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-85ddf47c5d-qw8mx,UID:0ec4e430-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525030,Generation:0,CreationTimestamp:2020-05-03 11:34:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0ebdbbe0-8d32-11ea-99e8-0242ac110002 0xc0021cdef7 0xc0021cdef8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} 
[{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021cdfc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021cdfe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:32 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.35,StartTime:2020-05-03 11:34:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-03 11:34:39 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://57b333dda8cf6cc8f0fc9e461d21877f9ba53cda8345d2d66a062c26eceddf6a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.557: INFO: Pod "nginx-deployment-85ddf47c5d-rsd7r" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rsd7r,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-85ddf47c5d-rsd7r,UID:18495b9a-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525183,Generation:0,CreationTimestamp:2020-05-03 11:34:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0ebdbbe0-8d32-11ea-99e8-0242ac110002 0xc001d18257 0xc001d18258}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d182d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d182f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:48 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.557: INFO: Pod "nginx-deployment-85ddf47c5d-rzbsk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rzbsk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-85ddf47c5d-rzbsk,UID:184931ad-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525185,Generation:0,CreationTimestamp:2020-05-03 11:34:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0ebdbbe0-8d32-11ea-99e8-0242ac110002 0xc001d183c7 0xc001d183c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d18520} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d18540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:48 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.557: INFO: Pod "nginx-deployment-85ddf47c5d-s77bq" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-s77bq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-85ddf47c5d-s77bq,UID:0ec25760-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525056,Generation:0,CreationTimestamp:2020-05-03 11:34:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0ebdbbe0-8d32-11ea-99e8-0242ac110002 0xc001d187d7 0xc001d187d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d18890} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d188b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:32 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.45,StartTime:2020-05-03 11:34:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-03 11:34:43 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://406290664897569cb0c3643700eea5f9420dc8c700e0a34a07544b73ed3e25a6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.560: INFO: Pod "nginx-deployment-85ddf47c5d-shcfv" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-shcfv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-85ddf47c5d-shcfv,UID:17fcf8b2-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525195,Generation:0,CreationTimestamp:2020-05-03 11:34:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0ebdbbe0-8d32-11ea-99e8-0242ac110002 0xc001d18c37 0xc001d18c38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d18cb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d18da0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:47 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-03 11:34:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.560: INFO: Pod "nginx-deployment-85ddf47c5d-t78mb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-t78mb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-85ddf47c5d-t78mb,UID:18095a64-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525174,Generation:0,CreationTimestamp:2020-05-03 11:34:47 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0ebdbbe0-8d32-11ea-99e8-0242ac110002 0xc001d18e57 0xc001d18e58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d18ee0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d18fb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:48 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.560: INFO: Pod "nginx-deployment-85ddf47c5d-ttqc4" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ttqc4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-85ddf47c5d-ttqc4,UID:0ec96e57-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525054,Generation:0,CreationTimestamp:2020-05-03 11:34:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0ebdbbe0-8d32-11ea-99e8-0242ac110002 0xc001d191c7 0xc001d191c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d19240} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d19260}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:32 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.37,StartTime:2020-05-03 11:34:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-03 11:34:43 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://7edf420d74ae800cb66e63291101472056cdab4fac02de6c2f11e55b71bf4bb3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 3 11:34:49.561: INFO: Pod "nginx-deployment-85ddf47c5d-tzjnq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tzjnq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g9grd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9grd/pods/nginx-deployment-85ddf47c5d-tzjnq,UID:18017052-8d32-11ea-99e8-0242ac110002,ResourceVersion:8525150,Generation:0,CreationTimestamp:2020-05-03 11:34:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0ebdbbe0-8d32-11ea-99e8-0242ac110002 0xc001d19477 0xc001d19478}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcd2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcd2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hcd2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d194f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d19580}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:34:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:34:49.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-g9grd" for this suite. May 3 11:35:15.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:35:15.766: INFO: namespace: e2e-tests-deployment-g9grd, resource: bindings, ignored listing per whitelist May 3 11:35:15.783: INFO: namespace e2e-tests-deployment-g9grd deletion completed in 26.145990171s • [SLOW TEST:43.591 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:35:15.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-28b0031b-8d32-11ea-b78d-0242ac110017 STEP: Creating a pod to test consume secrets May 3 11:35:15.893: INFO: Waiting up to 5m0s for pod "pod-secrets-28b20771-8d32-11ea-b78d-0242ac110017" in namespace "e2e-tests-secrets-n878z" to be "success or failure" May 3 11:35:15.956: INFO: Pod "pod-secrets-28b20771-8d32-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 62.294137ms May 3 11:35:17.959: INFO: Pod "pod-secrets-28b20771-8d32-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065600767s May 3 11:35:19.963: INFO: Pod "pod-secrets-28b20771-8d32-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.070024394s STEP: Saw pod success May 3 11:35:19.963: INFO: Pod "pod-secrets-28b20771-8d32-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:35:19.966: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-28b20771-8d32-11ea-b78d-0242ac110017 container secret-volume-test: STEP: delete the pod May 3 11:35:20.005: INFO: Waiting for pod pod-secrets-28b20771-8d32-11ea-b78d-0242ac110017 to disappear May 3 11:35:20.017: INFO: Pod pod-secrets-28b20771-8d32-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:35:20.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-n878z" for this suite. May 3 11:35:26.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:35:26.083: INFO: namespace: e2e-tests-secrets-n878z, resource: bindings, ignored listing per whitelist May 3 11:35:26.166: INFO: namespace e2e-tests-secrets-n878z deletion completed in 6.145622249s • [SLOW TEST:10.382 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:35:26.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 3 11:35:30.839: INFO: Successfully updated pod "labelsupdate2ee38f2d-8d32-11ea-b78d-0242ac110017" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:35:32.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-pjnnx" for this suite. 
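The projected downwardAPI test above mounts the pod's own metadata.labels as a file and expects the kubelet to refresh that file after the labels change. As a rough sketch of the pod shape involved (pod name, image, command, and mount path are illustrative, not the e2e framework's actual fixture), the same k8s.io/api/core/v1 types this log is printing can be assembled like this:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labelsupdate-example", // illustrative name
			Labels: map[string]string{"key1": "value1"},
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "labels",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```

When the pod's labels are later patched, the kubelet rewrites the projected labels file, which is what the "Successfully updated pod" step above relies on.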
May 3 11:35:57.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:35:57.041: INFO: namespace: e2e-tests-projected-pjnnx, resource: bindings, ignored listing per whitelist May 3 11:35:57.103: INFO: namespace e2e-tests-projected-pjnnx deletion completed in 24.20338893s • [SLOW TEST:30.937 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:35:57.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 3 11:36:01.732: INFO: Successfully updated pod "pod-update-41510a52-8d32-11ea-b78d-0242ac110017" STEP: verifying the updated pod is in kubernetes May 3 11:36:01.738: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:36:01.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-xjntt" for this suite. 
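The Pods "should be updated" test above patches a running pod and re-reads it to confirm the change took effect. A minimal read-modify-write sketch with client-go, assuming a hypothetical pod name and namespace and using the pre-1.18 client-go signatures that match this v1.13 cluster (newer client-go adds a context.Context and an options argument to Get/Update):

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// The kubeconfig path is the one the log itself uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Fetch the current pod, mutate its labels, and write it back.
	pod, err := client.CoreV1().Pods("default").Get("pod-update-example", metav1.GetOptions{}) // assumed namespace/name
	if err != nil {
		panic(err)
	}
	if pod.Labels == nil {
		pod.Labels = map[string]string{}
	}
	pod.Labels["time"] = "updated"
	if _, err := client.CoreV1().Pods("default").Update(pod); err != nil {
		panic(err)
	}
	fmt.Println("pod labels updated")
}
```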
May 3 11:36:25.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:36:25.820: INFO: namespace: e2e-tests-pods-xjntt, resource: bindings, ignored listing per whitelist May 3 11:36:25.870: INFO: namespace e2e-tests-pods-xjntt deletion completed in 24.128578235s • [SLOW TEST:28.767 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:36:25.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 3 11:36:25.988: INFO: Waiting up to 5m0s for pod "downwardapi-volume-527a0100-8d32-11ea-b78d-0242ac110017" in namespace "e2e-tests-projected-sx6r6" to be "success or failure" May 3 11:36:25.994: INFO: Pod "downwardapi-volume-527a0100-8d32-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 6.211272ms May 3 11:36:28.100: INFO: Pod "downwardapi-volume-527a0100-8d32-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112224484s May 3 11:36:30.104: INFO: Pod "downwardapi-volume-527a0100-8d32-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116163133s May 3 11:36:32.108: INFO: Pod "downwardapi-volume-527a0100-8d32-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.12004016s STEP: Saw pod success May 3 11:36:32.108: INFO: Pod "downwardapi-volume-527a0100-8d32-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:36:32.111: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-527a0100-8d32-11ea-b78d-0242ac110017 container client-container: STEP: delete the pod May 3 11:36:32.144: INFO: Waiting for pod downwardapi-volume-527a0100-8d32-11ea-b78d-0242ac110017 to disappear May 3 11:36:32.178: INFO: Pod downwardapi-volume-527a0100-8d32-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:36:32.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-sx6r6" for this suite. 
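The "should set mode on item file" test above requests a specific file mode for a single downwardAPI item and then checks that mode inside the container. A minimal sketch of the volume definition, with an illustrative 0400 mode (the per-item Mode overrides the volume's DefaultMode):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // illustrative file mode
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
							Mode:     &mode, // per-item mode overrides the volume default
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```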
May 3 11:36:38.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:36:38.255: INFO: namespace: e2e-tests-projected-sx6r6, resource: bindings, ignored listing per whitelist May 3 11:36:38.276: INFO: namespace e2e-tests-projected-sx6r6 deletion completed in 6.09379814s • [SLOW TEST:12.406 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:36:38.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed May 3 11:36:42.431: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-59d92749-8d32-11ea-b78d-0242ac110017", GenerateName:"", Namespace:"e2e-tests-pods-rqf9c", SelfLink:"/api/v1/namespaces/e2e-tests-pods-rqf9c/pods/pod-submit-remove-59d92749-8d32-11ea-b78d-0242ac110017", UID:"59da644f-8d32-11ea-99e8-0242ac110002", ResourceVersion:"8525729", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724102598, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"350635719"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-pz2bw", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0017b2cc0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), 
Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pz2bw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001fbfbd8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000dc5140), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001fbfc40)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001fbfc70)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001fbfc78), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001fbfc7c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724102598, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724102601, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724102601, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724102598, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", 
NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.66", StartTime:(*v1.Time)(0xc0017278a0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0017278c0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://05b9767d3173608c892d27ed6b50a2975c7b2ec249a66f0503850e7ae83c7985"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:36:51.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-rqf9c" for this suite. May 3 11:36:57.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:36:57.374: INFO: namespace: e2e-tests-pods-rqf9c, resource: bindings, ignored listing per whitelist May 3 11:36:57.437: INFO: namespace e2e-tests-pods-rqf9c deletion completed in 6.12272096s • [SLOW TEST:19.162 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:36:57.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:37:01.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-q4x7j" for this suite. 
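The hostAliases test above relies on the kubelet appending the pod's spec.hostAliases entries to the container's /etc/hosts. A hedged sketch of such a pod (the address, hostnames, image, and command are made up for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-host-aliases-example"},
		Spec: corev1.PodSpec{
			HostAliases: []corev1.HostAlias{{
				IP:        "123.45.67.89",                         // illustrative address
				Hostnames: []string{"foo.remote", "bar.remote"},   // illustrative hostnames
			}},
			Containers: []corev1.Container{{
				Name:    "busybox-host-aliases",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/hosts && sleep 3600"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```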
May 3 11:37:43.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:37:43.675: INFO: namespace: e2e-tests-kubelet-test-q4x7j, resource: bindings, ignored listing per whitelist May 3 11:37:43.695: INFO: namespace e2e-tests-kubelet-test-q4x7j deletion completed in 42.113452075s • [SLOW TEST:46.257 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:37:43.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0503 11:37:53.838916 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 3 11:37:53.838: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:37:53.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-sgs8z" for this suite. 
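The garbage-collector test above deletes a ReplicationController without orphaning and waits for its pods to be collected. The behaviour is driven by the deletion propagation policy on the owner plus the ownerReferences on the dependents; a small sketch of both pieces (the owner name and UID are placeholders, not values from this run):

```go
package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Background (or Foreground) propagation lets the garbage collector delete
	// dependents; Orphan would leave the pods behind.
	policy := metav1.DeletePropagationBackground
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}

	// Each pod created by the RC carries an ownerReference like this one,
	// which is what the garbage collector walks once the owner is gone.
	owner := metav1.OwnerReference{
		APIVersion: "v1",
		Kind:       "ReplicationController",
		Name:       "simpletest.rc",                          // assumed owner name
		UID:        "00000000-0000-0000-0000-000000000000",   // placeholder UID
	}

	for _, obj := range []interface{}{opts, owner} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
```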
May 3 11:37:59.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:37:59.920: INFO: namespace: e2e-tests-gc-sgs8z, resource: bindings, ignored listing per whitelist May 3 11:37:59.934: INFO: namespace e2e-tests-gc-sgs8z deletion completed in 6.092133652s • [SLOW TEST:16.239 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:37:59.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-8a8f8716-8d32-11ea-b78d-0242ac110017 STEP: Creating secret with name secret-projected-all-test-volume-8a8f86f6-8d32-11ea-b78d-0242ac110017 STEP: Creating a pod to test Check all projections for projected volume plugin May 3 11:38:00.136: INFO: Waiting up to 5m0s for pod "projected-volume-8a8f8693-8d32-11ea-b78d-0242ac110017" in namespace "e2e-tests-projected-cbwqs" to be "success or failure" May 3 11:38:00.138: INFO: Pod "projected-volume-8a8f8693-8d32-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.781743ms May 3 11:38:02.162: INFO: Pod "projected-volume-8a8f8693-8d32-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026736611s May 3 11:38:04.167: INFO: Pod "projected-volume-8a8f8693-8d32-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031123313s STEP: Saw pod success May 3 11:38:04.167: INFO: Pod "projected-volume-8a8f8693-8d32-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:38:04.170: INFO: Trying to get logs from node hunter-worker2 pod projected-volume-8a8f8693-8d32-11ea-b78d-0242ac110017 container projected-all-volume-test: STEP: delete the pod May 3 11:38:04.210: INFO: Waiting for pod projected-volume-8a8f8693-8d32-11ea-b78d-0242ac110017 to disappear May 3 11:38:04.235: INFO: Pod projected-volume-8a8f8693-8d32-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:38:04.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-cbwqs" for this suite. 
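The projected-combined test above packs a ConfigMap, a Secret, and downward API fields into a single projected volume. A sketch of that volume definition, with assumed ConfigMap and Secret names:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "example-configmap"}, // assumed name
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "example-secret"}, // assumed name
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```

All three sources end up as files under one mount point, which is what "Check all projections for projected volume plugin" verifies.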
May 3 11:38:10.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:38:10.416: INFO: namespace: e2e-tests-projected-cbwqs, resource: bindings, ignored listing per whitelist May 3 11:38:10.430: INFO: namespace e2e-tests-projected-cbwqs deletion completed in 6.1908086s • [SLOW TEST:10.496 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:38:10.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:39:10.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-jk782" for this suite. 
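The readiness-probe test above uses a probe that always fails, so the pod stays Running but never becomes Ready, and its restart count stays at 0 (only liveness failures trigger restarts). A sketch of a container with such a probe; the image and command are illustrative, and the inner field is named Handler in the v1.13-era API used in this log (releases from v1.24 call it ProbeHandler):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "test-webserver",
		Image: "busybox",
		// A readiness probe that can never succeed.
		ReadinessProbe: &corev1.Probe{
			Handler: corev1.Handler{
				Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
			},
			InitialDelaySeconds: 5,
			PeriodSeconds:       5,
			FailureThreshold:    3,
		},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}
```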
May 3 11:39:32.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:39:32.644: INFO: namespace: e2e-tests-container-probe-jk782, resource: bindings, ignored listing per whitelist May 3 11:39:32.704: INFO: namespace e2e-tests-container-probe-jk782 deletion completed in 22.088048347s • [SLOW TEST:82.273 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:39:32.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:39:38.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-vksmc" for this suite. 
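The read-only-root-filesystem test above sets securityContext.readOnlyRootFilesystem on the container and expects writes to the root filesystem to fail. A minimal container sketch (image and command are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	readOnly := true
	c := corev1.Container{
		Name:    "busybox-readonly-fs",
		Image:   "busybox",
		Command: []string{"sh", "-c", "echo test > /file; sleep 240"}, // the write is expected to fail
		SecurityContext: &corev1.SecurityContext{
			ReadOnlyRootFilesystem: &readOnly,
		},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}
```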
May 3 11:40:24.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:40:24.984: INFO: namespace: e2e-tests-kubelet-test-vksmc, resource: bindings, ignored listing per whitelist May 3 11:40:25.033: INFO: namespace e2e-tests-kubelet-test-vksmc deletion completed in 46.129690862s • [SLOW TEST:52.329 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:40:25.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace May 3 11:40:29.212: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:40:53.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-7hp8f" for this suite. May 3 11:40:59.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:40:59.447: INFO: namespace: e2e-tests-namespaces-7hp8f, resource: bindings, ignored listing per whitelist May 3 11:40:59.520: INFO: namespace e2e-tests-namespaces-7hp8f deletion completed in 6.089605713s STEP: Destroying namespace "e2e-tests-nsdeletetest-57jp5" for this suite. May 3 11:40:59.522: INFO: Namespace e2e-tests-nsdeletetest-57jp5 was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-d9qfn" for this suite. 
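The Namespaces test above creates a pod inside a throwaway namespace, deletes the namespace, and verifies the pod is gone once termination finishes; deleting a namespace cascades to every object inside it. A sketch of the two objects involved (names and image are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: "nsdeletetest-example"}}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod", Namespace: ns.Name},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "nginx",
				Image: "docker.io/library/nginx:1.14-alpine",
			}},
		},
	}
	// Deleting ns (e.g. via the namespaces API) removes test-pod along with it.
	for _, obj := range []interface{}{ns, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
```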
May 3 11:41:05.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:41:05.636: INFO: namespace: e2e-tests-nsdeletetest-d9qfn, resource: bindings, ignored listing per whitelist May 3 11:41:05.664: INFO: namespace e2e-tests-nsdeletetest-d9qfn deletion completed in 6.142207688s • [SLOW TEST:40.631 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:41:05.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 3 11:41:33.803: INFO: Container started at 2020-05-03 11:41:09 +0000 UTC, pod became ready at 2020-05-03 11:41:33 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:41:33.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-wjzq4" for this suite. 
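The probe test above checks that a container becomes ready no earlier than its readiness probe's initial delay, and never restarts, since readiness failures only gate traffic and do not trigger restarts. A minimal sketch of a pod with such a probe, assuming illustrative names and delay values and the 1.13-era k8s.io/api layout where the probe handler is the embedded Handler field (newer releases renamed it ProbeHandler):

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:  "test-webserver",
    				Image: "nginx:1.14-alpine",
    				ReadinessProbe: &corev1.Probe{
    					Handler: corev1.Handler{
    						HTTPGet: &corev1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
    					},
    					InitialDelaySeconds: 20, // the pod must not report Ready before this delay
    					PeriodSeconds:       5,
    				},
    			}},
    		},
    	}
    	b, _ := json.MarshalIndent(pod, "", "  ")
    	fmt.Println(string(b))
    }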
May 3 11:41:57.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:41:57.863: INFO: namespace: e2e-tests-container-probe-wjzq4, resource: bindings, ignored listing per whitelist May 3 11:41:57.918: INFO: namespace e2e-tests-container-probe-wjzq4 deletion completed in 24.111944388s • [SLOW TEST:52.254 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:41:57.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-186cdaab-8d33-11ea-b78d-0242ac110017 STEP: Creating a pod to test consume configMaps May 3 11:41:58.110: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-186e97b7-8d33-11ea-b78d-0242ac110017" in namespace "e2e-tests-projected-xlp6t" to be "success or failure" May 3 11:41:58.155: INFO: Pod "pod-projected-configmaps-186e97b7-8d33-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 44.190354ms May 3 11:42:00.158: INFO: Pod "pod-projected-configmaps-186e97b7-8d33-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048027584s May 3 11:42:02.163: INFO: Pod "pod-projected-configmaps-186e97b7-8d33-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052176156s STEP: Saw pod success May 3 11:42:02.163: INFO: Pod "pod-projected-configmaps-186e97b7-8d33-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:42:02.165: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-186e97b7-8d33-11ea-b78d-0242ac110017 container projected-configmap-volume-test: STEP: delete the pod May 3 11:42:02.298: INFO: Waiting for pod pod-projected-configmaps-186e97b7-8d33-11ea-b78d-0242ac110017 to disappear May 3 11:42:02.379: INFO: Pod pod-projected-configmaps-186e97b7-8d33-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:42:02.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xlp6t" for this suite. 
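The projected-ConfigMap test above mounts a ConfigMap through a projected volume, remapping a key to a different path and setting an explicit item mode, then reads the file back from the container. A minimal sketch of that shape (ConfigMap name, key, path, mode and mount path are illustrative, not the test's actual values):

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	mode := int32(0400) // file mode applied to the projected item
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
    		Spec: corev1.PodSpec{
    			Volumes: []corev1.Volume{{
    				Name: "projected-configmap-volume",
    				VolumeSource: corev1.VolumeSource{
    					Projected: &corev1.ProjectedVolumeSource{
    						Sources: []corev1.VolumeProjection{{
    							ConfigMap: &corev1.ConfigMapProjection{
    								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
    								Items: []corev1.KeyToPath{{
    									Key:  "data-1",
    									Path: "path/to/data-2", // the key is surfaced under this remapped path
    									Mode: &mode,
    								}},
    							},
    						}},
    					},
    				},
    			}},
    			Containers: []corev1.Container{{
    				Name:         "projected-configmap-volume-test",
    				Image:        "busybox",
    				Command:      []string{"/bin/sh", "-c", "cat /etc/projected/path/to/data-2; sleep 3600"},
    				VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected"}},
    			}},
    		},
    	}
    	b, _ := json.MarshalIndent(pod, "", "  ")
    	fmt.Println(string(b))
    }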
May 3 11:42:08.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:42:08.436: INFO: namespace: e2e-tests-projected-xlp6t, resource: bindings, ignored listing per whitelist May 3 11:42:08.609: INFO: namespace e2e-tests-projected-xlp6t deletion completed in 6.226980401s • [SLOW TEST:10.690 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:42:08.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-1ed09a21-8d33-11ea-b78d-0242ac110017 STEP: Creating a pod to test consume configMaps May 3 11:42:08.837: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1ed235f5-8d33-11ea-b78d-0242ac110017" in namespace "e2e-tests-projected-ncf9g" to be "success or failure" May 3 11:42:08.907: INFO: Pod "pod-projected-configmaps-1ed235f5-8d33-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 70.18871ms May 3 11:42:10.911: INFO: Pod "pod-projected-configmaps-1ed235f5-8d33-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074593548s May 3 11:42:12.915: INFO: Pod "pod-projected-configmaps-1ed235f5-8d33-11ea-b78d-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.078352201s May 3 11:42:14.919: INFO: Pod "pod-projected-configmaps-1ed235f5-8d33-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.082172075s STEP: Saw pod success May 3 11:42:14.919: INFO: Pod "pod-projected-configmaps-1ed235f5-8d33-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:42:14.922: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-1ed235f5-8d33-11ea-b78d-0242ac110017 container projected-configmap-volume-test: STEP: delete the pod May 3 11:42:15.116: INFO: Waiting for pod pod-projected-configmaps-1ed235f5-8d33-11ea-b78d-0242ac110017 to disappear May 3 11:42:15.272: INFO: Pod pod-projected-configmaps-1ed235f5-8d33-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:42:15.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-ncf9g" for this suite. 
May 3 11:42:21.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:42:21.357: INFO: namespace: e2e-tests-projected-ncf9g, resource: bindings, ignored listing per whitelist May 3 11:42:21.364: INFO: namespace e2e-tests-projected-ncf9g deletion completed in 6.087645705s • [SLOW TEST:12.755 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:42:21.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-hd8pv/configmap-test-266b391e-8d33-11ea-b78d-0242ac110017 STEP: Creating a pod to test consume configMaps May 3 11:42:21.603: INFO: Waiting up to 5m0s for pod "pod-configmaps-266d47aa-8d33-11ea-b78d-0242ac110017" in namespace "e2e-tests-configmap-hd8pv" to be "success or failure" May 3 11:42:21.621: INFO: Pod "pod-configmaps-266d47aa-8d33-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 17.921771ms May 3 11:42:23.968: INFO: Pod "pod-configmaps-266d47aa-8d33-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.365537669s May 3 11:42:25.972: INFO: Pod "pod-configmaps-266d47aa-8d33-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.369277077s May 3 11:42:27.976: INFO: Pod "pod-configmaps-266d47aa-8d33-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.37312456s STEP: Saw pod success May 3 11:42:27.976: INFO: Pod "pod-configmaps-266d47aa-8d33-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:42:27.979: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-266d47aa-8d33-11ea-b78d-0242ac110017 container env-test: STEP: delete the pod May 3 11:42:28.249: INFO: Waiting for pod pod-configmaps-266d47aa-8d33-11ea-b78d-0242ac110017 to disappear May 3 11:42:28.440: INFO: Pod pod-configmaps-266d47aa-8d33-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:42:28.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-hd8pv" for this suite. 
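The ConfigMap env test above injects a ConfigMap key into a container environment variable and then checks the value in the container's output. A minimal sketch of a pod doing that (ConfigMap name, key and variable name are illustrative; the container name env-test matches the log above):

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-env"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Containers: []corev1.Container{{
    				Name:    "env-test",
    				Image:   "busybox",
    				Command: []string{"/bin/sh", "-c", "env"},
    				Env: []corev1.EnvVar{{
    					Name: "CONFIG_DATA_1",
    					ValueFrom: &corev1.EnvVarSource{
    						// The variable's value is resolved from a key in an existing ConfigMap.
    						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
    							LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
    							Key:                  "data-1",
    						},
    					},
    				}},
    			}},
    		},
    	}
    	b, _ := json.MarshalIndent(pod, "", "  ")
    	fmt.Println(string(b))
    }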
May 3 11:42:34.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:42:34.488: INFO: namespace: e2e-tests-configmap-hd8pv, resource: bindings, ignored listing per whitelist May 3 11:42:34.537: INFO: namespace e2e-tests-configmap-hd8pv deletion completed in 6.092343162s • [SLOW TEST:13.173 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:42:34.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-f95jg STEP: creating a selector STEP: Creating the service pods in kubernetes May 3 11:42:34.724: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 3 11:43:06.927: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.75:8080/dial?request=hostName&protocol=http&host=10.244.1.74&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-f95jg PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 3 11:43:06.927: INFO: >>> kubeConfig: /root/.kube/config I0503 11:43:06.957734 6 log.go:172] (0xc001e7c2c0) (0xc0025c0aa0) Create stream I0503 11:43:06.957764 6 log.go:172] (0xc001e7c2c0) (0xc0025c0aa0) Stream added, broadcasting: 1 I0503 11:43:06.959980 6 log.go:172] (0xc001e7c2c0) Reply frame received for 1 I0503 11:43:06.960018 6 log.go:172] (0xc001e7c2c0) (0xc0007e9680) Create stream I0503 11:43:06.960031 6 log.go:172] (0xc001e7c2c0) (0xc0007e9680) Stream added, broadcasting: 3 I0503 11:43:06.960920 6 log.go:172] (0xc001e7c2c0) Reply frame received for 3 I0503 11:43:06.960976 6 log.go:172] (0xc001e7c2c0) (0xc000b72d20) Create stream I0503 11:43:06.960996 6 log.go:172] (0xc001e7c2c0) (0xc000b72d20) Stream added, broadcasting: 5 I0503 11:43:06.962253 6 log.go:172] (0xc001e7c2c0) Reply frame received for 5 I0503 11:43:07.060901 6 log.go:172] (0xc001e7c2c0) Data frame received for 3 I0503 11:43:07.060932 6 log.go:172] (0xc0007e9680) (3) Data frame handling I0503 11:43:07.060947 6 log.go:172] (0xc0007e9680) (3) Data frame sent I0503 11:43:07.062054 6 log.go:172] (0xc001e7c2c0) Data frame received for 5 I0503 11:43:07.062091 6 log.go:172] (0xc000b72d20) (5) Data frame handling I0503 11:43:07.062184 6 log.go:172] (0xc001e7c2c0) Data frame received for 3 I0503 11:43:07.062202 6 log.go:172] (0xc0007e9680) (3) Data frame handling I0503 11:43:07.063816 6 log.go:172] (0xc001e7c2c0) Data frame received for 1 I0503 
11:43:07.063841 6 log.go:172] (0xc0025c0aa0) (1) Data frame handling I0503 11:43:07.063866 6 log.go:172] (0xc0025c0aa0) (1) Data frame sent I0503 11:43:07.063890 6 log.go:172] (0xc001e7c2c0) (0xc0025c0aa0) Stream removed, broadcasting: 1 I0503 11:43:07.063903 6 log.go:172] (0xc001e7c2c0) Go away received I0503 11:43:07.063999 6 log.go:172] (0xc001e7c2c0) (0xc0025c0aa0) Stream removed, broadcasting: 1 I0503 11:43:07.064019 6 log.go:172] (0xc001e7c2c0) (0xc0007e9680) Stream removed, broadcasting: 3 I0503 11:43:07.064036 6 log.go:172] (0xc001e7c2c0) (0xc000b72d20) Stream removed, broadcasting: 5 May 3 11:43:07.064: INFO: Waiting for endpoints: map[] May 3 11:43:07.067: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.75:8080/dial?request=hostName&protocol=http&host=10.244.2.56&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-f95jg PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 3 11:43:07.067: INFO: >>> kubeConfig: /root/.kube/config I0503 11:43:07.117503 6 log.go:172] (0xc001e7c790) (0xc0025c0d20) Create stream I0503 11:43:07.117551 6 log.go:172] (0xc001e7c790) (0xc0025c0d20) Stream added, broadcasting: 1 I0503 11:43:07.120805 6 log.go:172] (0xc001e7c790) Reply frame received for 1 I0503 11:43:07.120842 6 log.go:172] (0xc001e7c790) (0xc0007e9720) Create stream I0503 11:43:07.120854 6 log.go:172] (0xc001e7c790) (0xc0007e9720) Stream added, broadcasting: 3 I0503 11:43:07.121944 6 log.go:172] (0xc001e7c790) Reply frame received for 3 I0503 11:43:07.122011 6 log.go:172] (0xc001e7c790) (0xc00192ba40) Create stream I0503 11:43:07.122032 6 log.go:172] (0xc001e7c790) (0xc00192ba40) Stream added, broadcasting: 5 I0503 11:43:07.122862 6 log.go:172] (0xc001e7c790) Reply frame received for 5 I0503 11:43:07.190313 6 log.go:172] (0xc001e7c790) Data frame received for 3 I0503 11:43:07.190357 6 log.go:172] (0xc0007e9720) (3) Data frame handling I0503 11:43:07.190390 6 log.go:172] (0xc0007e9720) (3) Data frame sent I0503 11:43:07.190721 6 log.go:172] (0xc001e7c790) Data frame received for 3 I0503 11:43:07.190748 6 log.go:172] (0xc0007e9720) (3) Data frame handling I0503 11:43:07.190772 6 log.go:172] (0xc001e7c790) Data frame received for 5 I0503 11:43:07.190786 6 log.go:172] (0xc00192ba40) (5) Data frame handling I0503 11:43:07.192376 6 log.go:172] (0xc001e7c790) Data frame received for 1 I0503 11:43:07.192389 6 log.go:172] (0xc0025c0d20) (1) Data frame handling I0503 11:43:07.192422 6 log.go:172] (0xc0025c0d20) (1) Data frame sent I0503 11:43:07.192442 6 log.go:172] (0xc001e7c790) (0xc0025c0d20) Stream removed, broadcasting: 1 I0503 11:43:07.192514 6 log.go:172] (0xc001e7c790) (0xc0025c0d20) Stream removed, broadcasting: 1 I0503 11:43:07.192533 6 log.go:172] (0xc001e7c790) (0xc0007e9720) Stream removed, broadcasting: 3 I0503 11:43:07.192590 6 log.go:172] (0xc001e7c790) Go away received I0503 11:43:07.192639 6 log.go:172] (0xc001e7c790) (0xc00192ba40) Stream removed, broadcasting: 5 May 3 11:43:07.192: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:43:07.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-f95jg" for this suite. 
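The networking check above works by curling a /dial endpoint on one test pod, which in turn tries to reach the other pod over HTTP and reports which hostnames answered. A minimal Go sketch of the same request, assuming the pod IPs and the /dial query format shown in the log (the endpoint is served by the e2e test webserver image, not by stock Kubernetes):

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"net/url"
    )

    func main() {
    	// Mirror the curl command from the log: ask the test pod at 10.244.1.75
    	// to dial 10.244.1.74:8080 once over HTTP and report the hostname that answered.
    	q := url.Values{}
    	q.Set("request", "hostName")
    	q.Set("protocol", "http")
    	q.Set("host", "10.244.1.74")
    	q.Set("port", "8080")
    	q.Set("tries", "1")
    	dialURL := "http://10.244.1.75:8080/dial?" + q.Encode()

    	resp, err := http.Get(dialURL)
    	if err != nil {
    		fmt.Println("dial failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(string(body)) // JSON listing the responses collected by the dialing pod
    }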
May 3 11:43:31.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:43:31.275: INFO: namespace: e2e-tests-pod-network-test-f95jg, resource: bindings, ignored listing per whitelist May 3 11:43:31.283: INFO: namespace e2e-tests-pod-network-test-f95jg deletion completed in 24.086233648s • [SLOW TEST:56.746 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:43:31.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-502c654f-8d33-11ea-b78d-0242ac110017 STEP: Creating a pod to test consume secrets May 3 11:43:31.658: INFO: Waiting up to 5m0s for pod "pod-secrets-5030d13c-8d33-11ea-b78d-0242ac110017" in namespace "e2e-tests-secrets-4xlsj" to be "success or failure" May 3 11:43:31.682: INFO: Pod "pod-secrets-5030d13c-8d33-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 24.762913ms May 3 11:43:33.717: INFO: Pod "pod-secrets-5030d13c-8d33-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059397269s May 3 11:43:35.726: INFO: Pod "pod-secrets-5030d13c-8d33-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068447231s STEP: Saw pod success May 3 11:43:35.726: INFO: Pod "pod-secrets-5030d13c-8d33-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:43:35.728: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-5030d13c-8d33-11ea-b78d-0242ac110017 container secret-volume-test: STEP: delete the pod May 3 11:43:35.939: INFO: Waiting for pod pod-secrets-5030d13c-8d33-11ea-b78d-0242ac110017 to disappear May 3 11:43:35.967: INFO: Pod pod-secrets-5030d13c-8d33-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:43:35.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-4xlsj" for this suite. 
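The Secrets test above mounts a Secret as a volume with a remapped key and an explicit item mode, then checks the file's content and permissions. Only the volume source differs from the projected-ConfigMap sketch earlier, so here is just that piece (secret name, key, path and mode are illustrative):

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	mode := int32(0400)
    	// The secret key "data-1" is surfaced as the file "new-path-data-1" with mode 0400.
    	vol := corev1.Volume{
    		Name: "secret-volume",
    		VolumeSource: corev1.VolumeSource{
    			Secret: &corev1.SecretVolumeSource{
    				SecretName: "secret-test-map",
    				Items: []corev1.KeyToPath{{
    					Key:  "data-1",
    					Path: "new-path-data-1",
    					Mode: &mode,
    				}},
    			},
    		},
    	}
    	b, _ := json.MarshalIndent(vol, "", "  ")
    	fmt.Println(string(b))
    }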
May 3 11:43:42.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:43:42.196: INFO: namespace: e2e-tests-secrets-4xlsj, resource: bindings, ignored listing per whitelist May 3 11:43:42.201: INFO: namespace e2e-tests-secrets-4xlsj deletion completed in 6.229767852s • [SLOW TEST:10.917 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:43:42.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:43:42.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-nnflx" for this suite. 
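The QOS test above only creates a pod and checks that status.qosClass gets set by the API server. The class is derived from resource requests and limits: fully specified and equal requests and limits give Guaranteed, partial specification gives Burstable, none gives BestEffort. A minimal sketch of a container resource block that would be classified Guaranteed (values are illustrative):

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
    	// Requests == limits for every resource of every container => Guaranteed QoS.
    	resources := corev1.ResourceRequirements{
    		Requests: corev1.ResourceList{
    			corev1.ResourceCPU:    resource.MustParse("100m"),
    			corev1.ResourceMemory: resource.MustParse("100Mi"),
    		},
    		Limits: corev1.ResourceList{
    			corev1.ResourceCPU:    resource.MustParse("100m"),
    			corev1.ResourceMemory: resource.MustParse("100Mi"),
    		},
    	}
    	b, _ := json.MarshalIndent(resources, "", "  ")
    	fmt.Println(string(b))
    }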
May 3 11:44:04.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:44:04.458: INFO: namespace: e2e-tests-pods-nnflx, resource: bindings, ignored listing per whitelist May 3 11:44:04.515: INFO: namespace e2e-tests-pods-nnflx deletion completed in 22.168661459s • [SLOW TEST:22.314 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:44:04.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 3 11:44:04.677: INFO: Waiting up to 5m0s for pod "downwardapi-volume-63df757e-8d33-11ea-b78d-0242ac110017" in namespace "e2e-tests-projected-dl9d9" to be "success or failure" May 3 11:44:04.699: INFO: Pod "downwardapi-volume-63df757e-8d33-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 22.131214ms May 3 11:44:06.702: INFO: Pod "downwardapi-volume-63df757e-8d33-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025838954s May 3 11:44:08.790: INFO: Pod "downwardapi-volume-63df757e-8d33-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.11323285s STEP: Saw pod success May 3 11:44:08.790: INFO: Pod "downwardapi-volume-63df757e-8d33-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:44:08.793: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-63df757e-8d33-11ea-b78d-0242ac110017 container client-container: STEP: delete the pod May 3 11:44:08.966: INFO: Waiting for pod downwardapi-volume-63df757e-8d33-11ea-b78d-0242ac110017 to disappear May 3 11:44:08.998: INFO: Pod downwardapi-volume-63df757e-8d33-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:44:08.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-dl9d9" for this suite. 
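The projected downwardAPI test above exposes the container's own CPU limit as a file in a projected volume and verifies the file content from the container logs. A minimal sketch of the volume projection (volume and file names are illustrative; client-container is the container name seen in the log):

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	// limits.cpu of the named container is exposed to it as the file "cpu_limit".
    	vol := corev1.Volume{
    		Name: "podinfo",
    		VolumeSource: corev1.VolumeSource{
    			Projected: &corev1.ProjectedVolumeSource{
    				Sources: []corev1.VolumeProjection{{
    					DownwardAPI: &corev1.DownwardAPIProjection{
    						Items: []corev1.DownwardAPIVolumeFile{{
    							Path: "cpu_limit",
    							ResourceFieldRef: &corev1.ResourceFieldSelector{
    								ContainerName: "client-container",
    								Resource:      "limits.cpu",
    							},
    						}},
    					},
    				}},
    			},
    		},
    	}
    	b, _ := json.MarshalIndent(vol, "", "  ")
    	fmt.Println(string(b))
    }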
May 3 11:44:15.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:44:15.039: INFO: namespace: e2e-tests-projected-dl9d9, resource: bindings, ignored listing per whitelist May 3 11:44:15.099: INFO: namespace e2e-tests-projected-dl9d9 deletion completed in 6.096850896s • [SLOW TEST:10.583 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:44:15.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command May 3 11:44:15.191: INFO: Waiting up to 5m0s for pod "client-containers-6a239689-8d33-11ea-b78d-0242ac110017" in namespace "e2e-tests-containers-8fkg6" to be "success or failure" May 3 11:44:15.195: INFO: Pod "client-containers-6a239689-8d33-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.598696ms May 3 11:44:17.200: INFO: Pod "client-containers-6a239689-8d33-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008173543s May 3 11:44:19.204: INFO: Pod "client-containers-6a239689-8d33-11ea-b78d-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.012672336s May 3 11:44:21.208: INFO: Pod "client-containers-6a239689-8d33-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016821602s STEP: Saw pod success May 3 11:44:21.208: INFO: Pod "client-containers-6a239689-8d33-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:44:21.212: INFO: Trying to get logs from node hunter-worker2 pod client-containers-6a239689-8d33-11ea-b78d-0242ac110017 container test-container: STEP: delete the pod May 3 11:44:21.240: INFO: Waiting for pod client-containers-6a239689-8d33-11ea-b78d-0242ac110017 to disappear May 3 11:44:21.244: INFO: Pod client-containers-6a239689-8d33-11ea-b78d-0242ac110017 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:44:21.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-8fkg6" for this suite. 
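The Docker Containers test above confirms that setting a container's command replaces the image's default ENTRYPOINT, while args would replace only the image CMD. A minimal sketch (image and command are illustrative):

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-override"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Containers: []corev1.Container{{
    				Name:  "test-container",
    				Image: "busybox",
    				// Command replaces the image ENTRYPOINT; Args (not set here) would replace only CMD.
    				Command: []string{"/bin/echo", "override", "entrypoint"},
    			}},
    		},
    	}
    	b, _ := json.MarshalIndent(pod, "", "  ")
    	fmt.Println(string(b))
    }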
May 3 11:44:27.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:44:27.296: INFO: namespace: e2e-tests-containers-8fkg6, resource: bindings, ignored listing per whitelist May 3 11:44:27.342: INFO: namespace e2e-tests-containers-8fkg6 deletion completed in 6.093571499s • [SLOW TEST:12.243 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:44:27.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 3 11:44:27.490: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
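The checks that follow flip the DaemonSet's container image and then wait for every schedulable node to run a pod with the new image, which only happens because the update strategy is RollingUpdate (OnDelete would leave the old pods in place). A minimal sketch of a DaemonSet with that strategy; names and labels are illustrative, while the two images are the ones logged in this run:

    package main

    import (
    	"encoding/json"
    	"fmt"

    	appsv1 "k8s.io/api/apps/v1"
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	labels := map[string]string{"daemonset-name": "daemon-set"}
    	ds := &appsv1.DaemonSet{
    		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
    		Spec: appsv1.DaemonSetSpec{
    			Selector: &metav1.LabelSelector{MatchLabels: labels},
    			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
    				// RollingUpdate: changing the pod template (e.g. the image) replaces pods node by node.
    				Type: appsv1.RollingUpdateDaemonSetStrategyType,
    			},
    			Template: corev1.PodTemplateSpec{
    				ObjectMeta: metav1.ObjectMeta{Labels: labels},
    				Spec: corev1.PodSpec{
    					Containers: []corev1.Container{{
    						Name:  "app",
    						Image: "docker.io/library/nginx:1.14-alpine", // later updated to gcr.io/kubernetes-e2e-test-images/redis:1.0
    					}},
    				},
    			},
    		},
    	}
    	b, _ := json.MarshalIndent(ds, "", "  ")
    	fmt.Println(string(b))
    }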
May 3 11:44:27.497: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:44:27.499: INFO: Number of nodes with available pods: 0 May 3 11:44:27.499: INFO: Node hunter-worker is running more than one daemon pod May 3 11:44:28.504: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:44:28.508: INFO: Number of nodes with available pods: 0 May 3 11:44:28.508: INFO: Node hunter-worker is running more than one daemon pod May 3 11:44:29.504: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:44:29.508: INFO: Number of nodes with available pods: 0 May 3 11:44:29.508: INFO: Node hunter-worker is running more than one daemon pod May 3 11:44:30.724: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:44:30.727: INFO: Number of nodes with available pods: 0 May 3 11:44:30.727: INFO: Node hunter-worker is running more than one daemon pod May 3 11:44:31.515: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:44:31.519: INFO: Number of nodes with available pods: 0 May 3 11:44:31.519: INFO: Node hunter-worker is running more than one daemon pod May 3 11:44:32.505: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:44:32.509: INFO: Number of nodes with available pods: 1 May 3 11:44:32.509: INFO: Node hunter-worker is running more than one daemon pod May 3 11:44:33.504: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:44:33.507: INFO: Number of nodes with available pods: 2 May 3 11:44:33.507: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 3 11:44:33.704: INFO: Wrong image for pod: daemon-set-6n6dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 3 11:44:33.704: INFO: Wrong image for pod: daemon-set-j95ls. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 3 11:44:33.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:44:35.042: INFO: Wrong image for pod: daemon-set-6n6dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 3 11:44:35.042: INFO: Wrong image for pod: daemon-set-j95ls. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 3 11:44:35.046: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:44:35.872: INFO: Wrong image for pod: daemon-set-6n6dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 3 11:44:35.872: INFO: Pod daemon-set-6n6dr is not available May 3 11:44:35.872: INFO: Wrong image for pod: daemon-set-j95ls. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 3 11:44:35.874: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:44:36.872: INFO: Pod daemon-set-68jjt is not available May 3 11:44:36.872: INFO: Wrong image for pod: daemon-set-j95ls. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 3 11:44:36.875: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:44:38.027: INFO: Pod daemon-set-68jjt is not available May 3 11:44:38.027: INFO: Wrong image for pod: daemon-set-j95ls. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 3 11:44:38.032: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:44:39.041: INFO: Pod daemon-set-68jjt is not available May 3 11:44:39.041: INFO: Wrong image for pod: daemon-set-j95ls. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 3 11:44:39.044: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:44:39.872: INFO: Pod daemon-set-68jjt is not available May 3 11:44:39.872: INFO: Wrong image for pod: daemon-set-j95ls. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 3 11:44:39.875: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:44:40.872: INFO: Wrong image for pod: daemon-set-j95ls. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 3 11:44:40.876: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:44:41.872: INFO: Wrong image for pod: daemon-set-j95ls. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 3 11:44:41.877: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:44:42.873: INFO: Wrong image for pod: daemon-set-j95ls. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 3 11:44:42.873: INFO: Pod daemon-set-j95ls is not available May 3 11:44:42.877: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:44:43.886: INFO: Pod daemon-set-6dkpf is not available May 3 11:44:43.889: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. May 3 11:44:43.891: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:44:43.894: INFO: Number of nodes with available pods: 1 May 3 11:44:43.894: INFO: Node hunter-worker is running more than one daemon pod May 3 11:44:44.910: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:44:44.914: INFO: Number of nodes with available pods: 1 May 3 11:44:44.914: INFO: Node hunter-worker is running more than one daemon pod May 3 11:44:45.898: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:44:45.902: INFO: Number of nodes with available pods: 1 May 3 11:44:45.902: INFO: Node hunter-worker is running more than one daemon pod May 3 11:44:46.916: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:44:46.920: INFO: Number of nodes with available pods: 1 May 3 11:44:46.920: INFO: Node hunter-worker is running more than one daemon pod May 3 11:44:47.899: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 11:44:47.902: INFO: Number of nodes with available pods: 2 May 3 11:44:47.902: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-pxc84, will wait for the garbage collector to delete the pods May 3 11:44:47.975: INFO: Deleting DaemonSet.extensions daemon-set took: 6.128521ms May 3 11:44:48.075: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.214676ms May 3 11:45:01.377: INFO: Number of nodes with available pods: 0 May 3 11:45:01.377: INFO: Number of running nodes: 0, number of available pods: 0 May 3 11:45:01.411: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-pxc84/daemonsets","resourceVersion":"8527260"},"items":null} May 3 11:45:01.413: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-pxc84/pods","resourceVersion":"8527260"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:45:01.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "e2e-tests-daemonsets-pxc84" for this suite. May 3 11:45:07.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:45:07.498: INFO: namespace: e2e-tests-daemonsets-pxc84, resource: bindings, ignored listing per whitelist May 3 11:45:07.528: INFO: namespace e2e-tests-daemonsets-pxc84 deletion completed in 6.104324556s • [SLOW TEST:40.186 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:45:07.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 3 11:45:07.660: INFO: Waiting up to 5m0s for pod "downward-api-8967bfab-8d33-11ea-b78d-0242ac110017" in namespace "e2e-tests-downward-api-cw72b" to be "success or failure" May 3 11:45:07.705: INFO: Pod "downward-api-8967bfab-8d33-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 45.050947ms May 3 11:45:09.787: INFO: Pod "downward-api-8967bfab-8d33-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126619616s May 3 11:45:11.791: INFO: Pod "downward-api-8967bfab-8d33-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.130737835s STEP: Saw pod success May 3 11:45:11.791: INFO: Pod "downward-api-8967bfab-8d33-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:45:11.793: INFO: Trying to get logs from node hunter-worker2 pod downward-api-8967bfab-8d33-11ea-b78d-0242ac110017 container dapi-container: STEP: delete the pod May 3 11:45:12.051: INFO: Waiting for pod downward-api-8967bfab-8d33-11ea-b78d-0242ac110017 to disappear May 3 11:45:12.199: INFO: Pod downward-api-8967bfab-8d33-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:45:12.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-cw72b" for this suite. 
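The Downward API test above exposes the pod's name, namespace and IP to the container through environment variables backed by fieldRef selectors, then reads them back from the container output. A minimal sketch of those env entries (variable names are illustrative):

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	// Each variable is filled in by the kubelet from a field of the running pod.
    	env := []corev1.EnvVar{
    		{Name: "POD_NAME", ValueFrom: &corev1.EnvVarSource{
    			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}}},
    		{Name: "POD_NAMESPACE", ValueFrom: &corev1.EnvVarSource{
    			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"}}},
    		{Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{
    			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"}}},
    	}
    	b, _ := json.MarshalIndent(env, "", "  ")
    	fmt.Println(string(b))
    }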
May 3 11:45:18.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:45:18.287: INFO: namespace: e2e-tests-downward-api-cw72b, resource: bindings, ignored listing per whitelist May 3 11:45:18.304: INFO: namespace e2e-tests-downward-api-cw72b deletion completed in 6.101531926s • [SLOW TEST:10.776 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:45:18.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0503 11:45:59.451740 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 3 11:45:59.451: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:45:59.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-phwtg" for this suite. 
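The garbage collector test above deletes a replication controller with an orphan policy and then waits 30 seconds to make sure its pods survive. A minimal client-go sketch of such a delete, assuming a current client-go (context-taking signatures) and an illustrative RC name, since the log does not show the real one:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Orphan propagation: the RC object is removed but its pods are left behind,
    	// which is what the test then verifies for 30 seconds.
    	orphan := metav1.DeletePropagationOrphan
    	err = clientset.CoreV1().ReplicationControllers("default").Delete(
    		context.TODO(),
    		"my-rc", // illustrative name; the e2e test creates its own RC
    		metav1.DeleteOptions{PropagationPolicy: &orphan},
    	)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("replication controller deleted with orphan policy; its pods remain")
    }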
May 3 11:46:11.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:46:11.571: INFO: namespace: e2e-tests-gc-phwtg, resource: bindings, ignored listing per whitelist May 3 11:46:11.580: INFO: namespace e2e-tests-gc-phwtg deletion completed in 12.125838845s • [SLOW TEST:53.275 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:46:11.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC May 3 11:46:11.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-jqznc' May 3 11:46:14.955: INFO: stderr: "" May 3 11:46:14.955: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 3 11:46:15.959: INFO: Selector matched 1 pods for map[app:redis] May 3 11:46:15.959: INFO: Found 0 / 1 May 3 11:46:16.959: INFO: Selector matched 1 pods for map[app:redis] May 3 11:46:16.959: INFO: Found 0 / 1 May 3 11:46:17.959: INFO: Selector matched 1 pods for map[app:redis] May 3 11:46:17.959: INFO: Found 0 / 1 May 3 11:46:18.959: INFO: Selector matched 1 pods for map[app:redis] May 3 11:46:18.959: INFO: Found 1 / 1 May 3 11:46:18.959: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 3 11:46:18.962: INFO: Selector matched 1 pods for map[app:redis] May 3 11:46:18.962: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 3 11:46:18.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-wplcd --namespace=e2e-tests-kubectl-jqznc -p {"metadata":{"annotations":{"x":"y"}}}' May 3 11:46:19.073: INFO: stderr: "" May 3 11:46:19.073: INFO: stdout: "pod/redis-master-wplcd patched\n" STEP: checking annotations May 3 11:46:19.174: INFO: Selector matched 1 pods for map[app:redis] May 3 11:46:19.174: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:46:19.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-jqznc" for this suite. 
May 3 11:46:41.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:46:41.266: INFO: namespace: e2e-tests-kubectl-jqznc, resource: bindings, ignored listing per whitelist May 3 11:46:41.275: INFO: namespace e2e-tests-kubectl-jqznc deletion completed in 22.097298441s • [SLOW TEST:29.695 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:46:41.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 3 11:46:41.398: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 3 11:46:46.403: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 3 11:46:46.403: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 3 11:46:46.444: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-cccc2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cccc2/deployments/test-cleanup-deployment,UID:c447ac2e-8d33-11ea-99e8-0242ac110002,ResourceVersion:8527751,Generation:1,CreationTimestamp:2020-05-03 11:46:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} May 3 11:46:46.453: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. May 3 11:46:46.453: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 3 11:46:46.453: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-cccc2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cccc2/replicasets/test-cleanup-controller,UID:c1492932-8d33-11ea-99e8-0242ac110002,ResourceVersion:8527752,Generation:1,CreationTimestamp:2020-05-03 11:46:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment c447ac2e-8d33-11ea-99e8-0242ac110002 0xc001b90507 0xc001b90508}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 3 11:46:46.460: INFO: Pod "test-cleanup-controller-dc59n" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-dc59n,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-cccc2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cccc2/pods/test-cleanup-controller-dc59n,UID:c14b7d41-8d33-11ea-99e8-0242ac110002,ResourceVersion:8527743,Generation:0,CreationTimestamp:2020-05-03 11:46:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller c1492932-8d33-11ea-99e8-0242ac110002 0xc0021b15c7 0xc0021b15c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xvj4s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xvj4s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-xvj4s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021b1640} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021b1660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:46:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:46:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:46:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:46:41 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.84,StartTime:2020-05-03 11:46:41 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-03 11:46:43 +0000 UTC,} nil} {nil nil nil} true 0 
docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://689998cdb610a784c68bd5d91c7235a2eb4e1baea033802a098911ea96e3df46}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:46:46.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-cccc2" for this suite. May 3 11:46:52.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:46:52.599: INFO: namespace: e2e-tests-deployment-cccc2, resource: bindings, ignored listing per whitelist May 3 11:46:52.694: INFO: namespace e2e-tests-deployment-cccc2 deletion completed in 6.164011282s • [SLOW TEST:11.419 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:46:52.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-lr7h4/configmap-test-c81c3516-8d33-11ea-b78d-0242ac110017 STEP: Creating a pod to test consume configMaps May 3 11:46:52.848: INFO: Waiting up to 5m0s for pod "pod-configmaps-c81ce07f-8d33-11ea-b78d-0242ac110017" in namespace "e2e-tests-configmap-lr7h4" to be "success or failure" May 3 11:46:52.865: INFO: Pod "pod-configmaps-c81ce07f-8d33-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 16.81914ms May 3 11:46:54.869: INFO: Pod "pod-configmaps-c81ce07f-8d33-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020674202s May 3 11:46:56.872: INFO: Pod "pod-configmaps-c81ce07f-8d33-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02355021s STEP: Saw pod success May 3 11:46:56.872: INFO: Pod "pod-configmaps-c81ce07f-8d33-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:46:56.874: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-c81ce07f-8d33-11ea-b78d-0242ac110017 container env-test: STEP: delete the pod May 3 11:46:56.889: INFO: Waiting for pod pod-configmaps-c81ce07f-8d33-11ea-b78d-0242ac110017 to disappear May 3 11:46:56.894: INFO: Pod pod-configmaps-c81ce07f-8d33-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:46:56.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-lr7h4" for this suite. May 3 11:47:02.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:47:02.993: INFO: namespace: e2e-tests-configmap-lr7h4, resource: bindings, ignored listing per whitelist May 3 11:47:03.012: INFO: namespace e2e-tests-configmap-lr7h4 deletion completed in 6.115492503s • [SLOW TEST:10.318 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:47:03.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin May 3 11:47:03.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-qs5mf run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 3 11:47:06.168: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0503 11:47:06.102694 1207 log.go:172] (0xc0007b0420) (0xc000710140) Create stream\nI0503 11:47:06.102752 1207 log.go:172] (0xc0007b0420) (0xc000710140) Stream added, broadcasting: 1\nI0503 11:47:06.104835 1207 log.go:172] (0xc0007b0420) Reply frame received for 1\nI0503 11:47:06.104884 1207 log.go:172] (0xc0007b0420) (0xc000668140) Create stream\nI0503 11:47:06.104903 1207 log.go:172] (0xc0007b0420) (0xc000668140) Stream added, broadcasting: 3\nI0503 11:47:06.106131 1207 log.go:172] (0xc0007b0420) Reply frame received for 3\nI0503 11:47:06.106199 1207 log.go:172] (0xc0007b0420) (0xc0005cbc20) Create stream\nI0503 11:47:06.106214 1207 log.go:172] (0xc0007b0420) (0xc0005cbc20) Stream added, broadcasting: 5\nI0503 11:47:06.107218 1207 log.go:172] (0xc0007b0420) Reply frame received for 5\nI0503 11:47:06.107247 1207 log.go:172] (0xc0007b0420) (0xc0006681e0) Create stream\nI0503 11:47:06.107257 1207 log.go:172] (0xc0007b0420) (0xc0006681e0) Stream added, broadcasting: 7\nI0503 11:47:06.108243 1207 log.go:172] (0xc0007b0420) Reply frame received for 7\nI0503 11:47:06.108414 1207 log.go:172] (0xc000668140) (3) Writing data frame\nI0503 11:47:06.108564 1207 log.go:172] (0xc000668140) (3) Writing data frame\nI0503 11:47:06.109552 1207 log.go:172] (0xc0007b0420) Data frame received for 5\nI0503 11:47:06.109573 1207 log.go:172] (0xc0005cbc20) (5) Data frame handling\nI0503 11:47:06.109595 1207 log.go:172] (0xc0005cbc20) (5) Data frame sent\nI0503 11:47:06.110430 1207 log.go:172] (0xc0007b0420) Data frame received for 5\nI0503 11:47:06.110457 1207 log.go:172] (0xc0005cbc20) (5) Data frame handling\nI0503 11:47:06.110490 1207 log.go:172] (0xc0005cbc20) (5) Data frame sent\nI0503 11:47:06.144241 1207 log.go:172] (0xc0007b0420) Data frame received for 5\nI0503 11:47:06.144292 1207 log.go:172] (0xc0005cbc20) (5) Data frame handling\nI0503 11:47:06.144324 1207 log.go:172] (0xc0007b0420) Data frame received for 7\nI0503 11:47:06.144351 1207 log.go:172] (0xc0006681e0) (7) Data frame handling\nI0503 11:47:06.144838 1207 log.go:172] (0xc0007b0420) Data frame received for 1\nI0503 11:47:06.144903 1207 log.go:172] (0xc000710140) (1) Data frame handling\nI0503 11:47:06.145065 1207 log.go:172] (0xc000710140) (1) Data frame sent\nI0503 11:47:06.145292 1207 log.go:172] (0xc0007b0420) (0xc000668140) Stream removed, broadcasting: 3\nI0503 11:47:06.145331 1207 log.go:172] (0xc0007b0420) (0xc000710140) Stream removed, broadcasting: 1\nI0503 11:47:06.145371 1207 log.go:172] (0xc0007b0420) Go away received\nI0503 11:47:06.145562 1207 log.go:172] (0xc0007b0420) (0xc000710140) Stream removed, broadcasting: 1\nI0503 11:47:06.145600 1207 log.go:172] (0xc0007b0420) (0xc000668140) Stream removed, broadcasting: 3\nI0503 11:47:06.145616 1207 log.go:172] (0xc0007b0420) (0xc0005cbc20) Stream removed, broadcasting: 5\nI0503 11:47:06.145633 1207 log.go:172] (0xc0007b0420) (0xc0006681e0) Stream removed, broadcasting: 7\n" May 3 11:47:06.168: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:47:08.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-qs5mf" for this suite. 
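For reference, the attached --rm flow exercised in this spec can be reproduced by hand. The sketch below is illustrative only: the namespace and job name are placeholders, and it assumes a v1.13-era kubectl/cluster like the one in this run, where the deprecated --generator=job/v1 flag still exists (newer clients would use kubectl create job plus kubectl attach instead, which is why the suite captures the deprecation warning above).

# Create a Job from an image, attach stdin/stdout, and let --rm delete it afterwards.
kubectl create namespace rm-job-demo
echo 'abcd1234' | kubectl --namespace=rm-job-demo run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 \
  --rm=true --generator=job/v1 --restart=OnFailure \
  --attach=true --stdin -- sh -c 'cat && echo "stdin closed"'
# The attached session echoes the piped stdin, then the Job is removed on exit.
kubectl --namespace=rm-job-demo get jobs     # should report no resources
kubectl delete namespace rm-job-demo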
May 3 11:47:14.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:47:14.286: INFO: namespace: e2e-tests-kubectl-qs5mf, resource: bindings, ignored listing per whitelist May 3 11:47:14.327: INFO: namespace e2e-tests-kubectl-qs5mf deletion completed in 6.150166288s • [SLOW TEST:11.315 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:47:14.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 3 11:47:14.444: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 3 11:47:14.466: INFO: Pod name sample-pod: Found 0 pods out of 1 May 3 11:47:19.470: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 3 11:47:19.470: INFO: Creating deployment "test-rolling-update-deployment" May 3 11:47:19.474: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 3 11:47:19.501: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 3 11:47:21.508: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 3 11:47:21.511: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724103239, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724103239, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724103239, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724103239, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 3 11:47:23.515: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 3 11:47:23.524: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-8lvrd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8lvrd/deployments/test-rolling-update-deployment,UID:d7fc8703-8d33-11ea-99e8-0242ac110002,ResourceVersion:8527966,Generation:1,CreationTimestamp:2020-05-03 11:47:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-03 11:47:19 +0000 UTC 2020-05-03 11:47:19 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-03 11:47:23 +0000 UTC 2020-05-03 11:47:19 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 3 11:47:23.527: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-8lvrd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8lvrd/replicasets/test-rolling-update-deployment-75db98fb4c,UID:d801a77d-8d33-11ea-99e8-0242ac110002,ResourceVersion:8527956,Generation:1,CreationTimestamp:2020-05-03 11:47:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment d7fc8703-8d33-11ea-99e8-0242ac110002 0xc0024a2e97 0xc0024a2e98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 3 11:47:23.527: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 3 11:47:23.528: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-8lvrd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8lvrd/replicasets/test-rolling-update-controller,UID:d4fd8943-8d33-11ea-99e8-0242ac110002,ResourceVersion:8527964,Generation:2,CreationTimestamp:2020-05-03 11:47:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 
3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment d7fc8703-8d33-11ea-99e8-0242ac110002 0xc0024a2cdf 0xc0024a2cf0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 3 11:47:23.531: INFO: Pod "test-rolling-update-deployment-75db98fb4c-89jl6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-89jl6,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-8lvrd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8lvrd/pods/test-rolling-update-deployment-75db98fb4c-89jl6,UID:d802b9f1-8d33-11ea-99e8-0242ac110002,ResourceVersion:8527955,Generation:0,CreationTimestamp:2020-05-03 11:47:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c d801a77d-8d33-11ea-99e8-0242ac110002 0xc0024a3b67 0xc0024a3b68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hkk97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hkk97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-hkk97 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024a3be0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024a3c00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:47:19 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:47:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:47:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 11:47:19 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.72,StartTime:2020-05-03 11:47:19 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-03 11:47:22 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://62fd4c5731249abb5b799a1eb85f93a207eed23343efc03a43f791163a247e7a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:47:23.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-8lvrd" for this suite. 
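The rolling-update behavior verified here (a new ReplicaSet created at the next revision, the adopted old ReplicaSet scaled down to zero, old pods deleted and new ones created) can also be observed with stock kubectl. A minimal sketch, with placeholder names and images rather than the suite's own objects:

# Create a Deployment, roll its pod template, and watch the old ReplicaSet drain to 0.
kubectl create namespace rolling-demo
kubectl --namespace=rolling-demo create deployment sample --image=docker.io/library/nginx:1.14-alpine
kubectl --namespace=rolling-demo rollout status deployment/sample

# Changing the template triggers the default RollingUpdate strategy
# (maxSurge/maxUnavailable 25%, as in the Deployment dump above).
kubectl --namespace=rolling-demo set image deployment/sample '*=docker.io/library/nginx:1.15-alpine'
kubectl --namespace=rolling-demo rollout status deployment/sample

# The old ReplicaSet is kept for revision history but shows 0 desired/current replicas;
# the new ReplicaSet owns the running pod.
kubectl --namespace=rolling-demo get replicasets -o wide
kubectl delete namespace rolling-demo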
May 3 11:47:31.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:47:31.749: INFO: namespace: e2e-tests-deployment-8lvrd, resource: bindings, ignored listing per whitelist May 3 11:47:31.768: INFO: namespace e2e-tests-deployment-8lvrd deletion completed in 8.233549342s • [SLOW TEST:17.440 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:47:31.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-74bt4 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-74bt4 STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-74bt4 May 3 11:47:31.939: INFO: Found 0 stateful pods, waiting for 1 May 3 11:47:41.943: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 3 11:47:41.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-74bt4 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 3 11:47:42.192: INFO: stderr: "I0503 11:47:42.073432 1234 log.go:172] (0xc000138630) (0xc000714780) Create stream\nI0503 11:47:42.073539 1234 log.go:172] (0xc000138630) (0xc000714780) Stream added, broadcasting: 1\nI0503 11:47:42.076658 1234 log.go:172] (0xc000138630) Reply frame received for 1\nI0503 11:47:42.076704 1234 log.go:172] (0xc000138630) (0xc0006c0500) Create stream\nI0503 11:47:42.076726 1234 log.go:172] (0xc000138630) (0xc0006c0500) Stream added, broadcasting: 3\nI0503 11:47:42.078210 1234 log.go:172] (0xc000138630) Reply frame received for 3\nI0503 11:47:42.078253 1234 log.go:172] (0xc000138630) (0xc0006c05a0) Create stream\nI0503 11:47:42.078262 1234 log.go:172] (0xc000138630) (0xc0006c05a0) Stream added, broadcasting: 5\nI0503 11:47:42.079192 1234 log.go:172] (0xc000138630) Reply frame received for 5\nI0503 11:47:42.186468 1234 log.go:172] (0xc000138630) Data frame received for 5\nI0503 
11:47:42.186510 1234 log.go:172] (0xc000138630) Data frame received for 3\nI0503 11:47:42.186527 1234 log.go:172] (0xc0006c0500) (3) Data frame handling\nI0503 11:47:42.186541 1234 log.go:172] (0xc0006c05a0) (5) Data frame handling\nI0503 11:47:42.186592 1234 log.go:172] (0xc0006c0500) (3) Data frame sent\nI0503 11:47:42.186731 1234 log.go:172] (0xc000138630) Data frame received for 3\nI0503 11:47:42.186755 1234 log.go:172] (0xc0006c0500) (3) Data frame handling\nI0503 11:47:42.188871 1234 log.go:172] (0xc000138630) Data frame received for 1\nI0503 11:47:42.188894 1234 log.go:172] (0xc000714780) (1) Data frame handling\nI0503 11:47:42.188919 1234 log.go:172] (0xc000714780) (1) Data frame sent\nI0503 11:47:42.188947 1234 log.go:172] (0xc000138630) (0xc000714780) Stream removed, broadcasting: 1\nI0503 11:47:42.189065 1234 log.go:172] (0xc000138630) Go away received\nI0503 11:47:42.189379 1234 log.go:172] (0xc000138630) (0xc000714780) Stream removed, broadcasting: 1\nI0503 11:47:42.189402 1234 log.go:172] (0xc000138630) (0xc0006c0500) Stream removed, broadcasting: 3\nI0503 11:47:42.189413 1234 log.go:172] (0xc000138630) (0xc0006c05a0) Stream removed, broadcasting: 5\n" May 3 11:47:42.193: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 3 11:47:42.193: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 3 11:47:42.197: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 3 11:47:52.202: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 3 11:47:52.202: INFO: Waiting for statefulset status.replicas updated to 0 May 3 11:47:52.234: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999522s May 3 11:47:53.238: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.979492478s May 3 11:47:54.243: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.975594997s May 3 11:47:55.247: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.970851812s May 3 11:47:56.252: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.966172545s May 3 11:47:57.332: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.96206554s May 3 11:47:58.336: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.881961092s May 3 11:47:59.340: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.877265889s May 3 11:48:00.345: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.873609308s May 3 11:48:01.350: INFO: Verifying statefulset ss doesn't scale past 1 for another 868.700623ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-74bt4 May 3 11:48:02.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-74bt4 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 3 11:48:02.561: INFO: stderr: "I0503 11:48:02.478808 1258 log.go:172] (0xc00014c840) (0xc0007c2640) Create stream\nI0503 11:48:02.478885 1258 log.go:172] (0xc00014c840) (0xc0007c2640) Stream added, broadcasting: 1\nI0503 11:48:02.480440 1258 log.go:172] (0xc00014c840) Reply frame received for 1\nI0503 11:48:02.480489 1258 log.go:172] (0xc00014c840) (0xc0006acdc0) Create stream\nI0503 11:48:02.480503 1258 log.go:172] (0xc00014c840) (0xc0006acdc0) Stream added, broadcasting: 3\nI0503 11:48:02.481509 
1258 log.go:172] (0xc00014c840) Reply frame received for 3\nI0503 11:48:02.481543 1258 log.go:172] (0xc00014c840) (0xc0007c26e0) Create stream\nI0503 11:48:02.481552 1258 log.go:172] (0xc00014c840) (0xc0007c26e0) Stream added, broadcasting: 5\nI0503 11:48:02.482307 1258 log.go:172] (0xc00014c840) Reply frame received for 5\nI0503 11:48:02.556276 1258 log.go:172] (0xc00014c840) Data frame received for 3\nI0503 11:48:02.556309 1258 log.go:172] (0xc0006acdc0) (3) Data frame handling\nI0503 11:48:02.556322 1258 log.go:172] (0xc0006acdc0) (3) Data frame sent\nI0503 11:48:02.556331 1258 log.go:172] (0xc00014c840) Data frame received for 3\nI0503 11:48:02.556341 1258 log.go:172] (0xc00014c840) Data frame received for 5\nI0503 11:48:02.556360 1258 log.go:172] (0xc0006acdc0) (3) Data frame handling\nI0503 11:48:02.556373 1258 log.go:172] (0xc0007c26e0) (5) Data frame handling\nI0503 11:48:02.558049 1258 log.go:172] (0xc00014c840) Data frame received for 1\nI0503 11:48:02.558074 1258 log.go:172] (0xc0007c2640) (1) Data frame handling\nI0503 11:48:02.558103 1258 log.go:172] (0xc0007c2640) (1) Data frame sent\nI0503 11:48:02.558126 1258 log.go:172] (0xc00014c840) (0xc0007c2640) Stream removed, broadcasting: 1\nI0503 11:48:02.558270 1258 log.go:172] (0xc00014c840) Go away received\nI0503 11:48:02.558312 1258 log.go:172] (0xc00014c840) (0xc0007c2640) Stream removed, broadcasting: 1\nI0503 11:48:02.558331 1258 log.go:172] (0xc00014c840) (0xc0006acdc0) Stream removed, broadcasting: 3\nI0503 11:48:02.558355 1258 log.go:172] (0xc00014c840) (0xc0007c26e0) Stream removed, broadcasting: 5\n" May 3 11:48:02.561: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 3 11:48:02.562: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 3 11:48:02.564: INFO: Found 1 stateful pods, waiting for 3 May 3 11:48:12.569: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 3 11:48:12.569: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 3 11:48:12.569: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 3 11:48:12.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-74bt4 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 3 11:48:12.851: INFO: stderr: "I0503 11:48:12.751047 1280 log.go:172] (0xc000138790) (0xc000736640) Create stream\nI0503 11:48:12.751139 1280 log.go:172] (0xc000138790) (0xc000736640) Stream added, broadcasting: 1\nI0503 11:48:12.754373 1280 log.go:172] (0xc000138790) Reply frame received for 1\nI0503 11:48:12.754437 1280 log.go:172] (0xc000138790) (0xc0007366e0) Create stream\nI0503 11:48:12.754450 1280 log.go:172] (0xc000138790) (0xc0007366e0) Stream added, broadcasting: 3\nI0503 11:48:12.755310 1280 log.go:172] (0xc000138790) Reply frame received for 3\nI0503 11:48:12.755338 1280 log.go:172] (0xc000138790) (0xc00059ec80) Create stream\nI0503 11:48:12.755349 1280 log.go:172] (0xc000138790) (0xc00059ec80) Stream added, broadcasting: 5\nI0503 11:48:12.756260 1280 log.go:172] (0xc000138790) Reply frame received for 5\nI0503 11:48:12.847121 1280 log.go:172] (0xc000138790) Data frame received for 3\nI0503 11:48:12.847149 1280 log.go:172] (0xc0007366e0) (3) Data frame 
handling\nI0503 11:48:12.847157 1280 log.go:172] (0xc0007366e0) (3) Data frame sent\nI0503 11:48:12.847163 1280 log.go:172] (0xc000138790) Data frame received for 3\nI0503 11:48:12.847167 1280 log.go:172] (0xc0007366e0) (3) Data frame handling\nI0503 11:48:12.847200 1280 log.go:172] (0xc000138790) Data frame received for 5\nI0503 11:48:12.847210 1280 log.go:172] (0xc00059ec80) (5) Data frame handling\nI0503 11:48:12.848057 1280 log.go:172] (0xc000138790) Data frame received for 1\nI0503 11:48:12.848071 1280 log.go:172] (0xc000736640) (1) Data frame handling\nI0503 11:48:12.848080 1280 log.go:172] (0xc000736640) (1) Data frame sent\nI0503 11:48:12.848094 1280 log.go:172] (0xc000138790) (0xc000736640) Stream removed, broadcasting: 1\nI0503 11:48:12.848111 1280 log.go:172] (0xc000138790) Go away received\nI0503 11:48:12.848302 1280 log.go:172] (0xc000138790) (0xc000736640) Stream removed, broadcasting: 1\nI0503 11:48:12.848317 1280 log.go:172] (0xc000138790) (0xc0007366e0) Stream removed, broadcasting: 3\nI0503 11:48:12.848325 1280 log.go:172] (0xc000138790) (0xc00059ec80) Stream removed, broadcasting: 5\n" May 3 11:48:12.851: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 3 11:48:12.851: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 3 11:48:12.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-74bt4 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 3 11:48:13.083: INFO: stderr: "I0503 11:48:12.967679 1302 log.go:172] (0xc0007a6160) (0xc00070e640) Create stream\nI0503 11:48:12.967737 1302 log.go:172] (0xc0007a6160) (0xc00070e640) Stream added, broadcasting: 1\nI0503 11:48:12.969746 1302 log.go:172] (0xc0007a6160) Reply frame received for 1\nI0503 11:48:12.969787 1302 log.go:172] (0xc0007a6160) (0xc000204c80) Create stream\nI0503 11:48:12.969797 1302 log.go:172] (0xc0007a6160) (0xc000204c80) Stream added, broadcasting: 3\nI0503 11:48:12.970533 1302 log.go:172] (0xc0007a6160) Reply frame received for 3\nI0503 11:48:12.970566 1302 log.go:172] (0xc0007a6160) (0xc00043a000) Create stream\nI0503 11:48:12.970577 1302 log.go:172] (0xc0007a6160) (0xc00043a000) Stream added, broadcasting: 5\nI0503 11:48:12.971281 1302 log.go:172] (0xc0007a6160) Reply frame received for 5\nI0503 11:48:13.076989 1302 log.go:172] (0xc0007a6160) Data frame received for 3\nI0503 11:48:13.077040 1302 log.go:172] (0xc000204c80) (3) Data frame handling\nI0503 11:48:13.077065 1302 log.go:172] (0xc000204c80) (3) Data frame sent\nI0503 11:48:13.077085 1302 log.go:172] (0xc0007a6160) Data frame received for 3\nI0503 11:48:13.077096 1302 log.go:172] (0xc000204c80) (3) Data frame handling\nI0503 11:48:13.077343 1302 log.go:172] (0xc0007a6160) Data frame received for 5\nI0503 11:48:13.077380 1302 log.go:172] (0xc00043a000) (5) Data frame handling\nI0503 11:48:13.078997 1302 log.go:172] (0xc0007a6160) Data frame received for 1\nI0503 11:48:13.079025 1302 log.go:172] (0xc00070e640) (1) Data frame handling\nI0503 11:48:13.079046 1302 log.go:172] (0xc00070e640) (1) Data frame sent\nI0503 11:48:13.079187 1302 log.go:172] (0xc0007a6160) (0xc00070e640) Stream removed, broadcasting: 1\nI0503 11:48:13.079244 1302 log.go:172] (0xc0007a6160) Go away received\nI0503 11:48:13.079427 1302 log.go:172] (0xc0007a6160) (0xc00070e640) Stream removed, broadcasting: 1\nI0503 11:48:13.079441 1302 log.go:172] (0xc0007a6160) (0xc000204c80) Stream 
removed, broadcasting: 3\nI0503 11:48:13.079449 1302 log.go:172] (0xc0007a6160) (0xc00043a000) Stream removed, broadcasting: 5\n" May 3 11:48:13.083: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 3 11:48:13.083: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 3 11:48:13.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-74bt4 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 3 11:48:13.498: INFO: stderr: "I0503 11:48:13.362372 1324 log.go:172] (0xc000138840) (0xc0001252c0) Create stream\nI0503 11:48:13.362451 1324 log.go:172] (0xc000138840) (0xc0001252c0) Stream added, broadcasting: 1\nI0503 11:48:13.364914 1324 log.go:172] (0xc000138840) Reply frame received for 1\nI0503 11:48:13.364969 1324 log.go:172] (0xc000138840) (0xc0003a2000) Create stream\nI0503 11:48:13.364987 1324 log.go:172] (0xc000138840) (0xc0003a2000) Stream added, broadcasting: 3\nI0503 11:48:13.366229 1324 log.go:172] (0xc000138840) Reply frame received for 3\nI0503 11:48:13.366279 1324 log.go:172] (0xc000138840) (0xc000125360) Create stream\nI0503 11:48:13.366292 1324 log.go:172] (0xc000138840) (0xc000125360) Stream added, broadcasting: 5\nI0503 11:48:13.367140 1324 log.go:172] (0xc000138840) Reply frame received for 5\nI0503 11:48:13.493605 1324 log.go:172] (0xc000138840) Data frame received for 5\nI0503 11:48:13.493637 1324 log.go:172] (0xc000125360) (5) Data frame handling\nI0503 11:48:13.493665 1324 log.go:172] (0xc000138840) Data frame received for 3\nI0503 11:48:13.493685 1324 log.go:172] (0xc0003a2000) (3) Data frame handling\nI0503 11:48:13.493706 1324 log.go:172] (0xc0003a2000) (3) Data frame sent\nI0503 11:48:13.493717 1324 log.go:172] (0xc000138840) Data frame received for 3\nI0503 11:48:13.493725 1324 log.go:172] (0xc0003a2000) (3) Data frame handling\nI0503 11:48:13.495374 1324 log.go:172] (0xc000138840) Data frame received for 1\nI0503 11:48:13.495409 1324 log.go:172] (0xc0001252c0) (1) Data frame handling\nI0503 11:48:13.495432 1324 log.go:172] (0xc0001252c0) (1) Data frame sent\nI0503 11:48:13.495460 1324 log.go:172] (0xc000138840) (0xc0001252c0) Stream removed, broadcasting: 1\nI0503 11:48:13.495489 1324 log.go:172] (0xc000138840) Go away received\nI0503 11:48:13.495684 1324 log.go:172] (0xc000138840) (0xc0001252c0) Stream removed, broadcasting: 1\nI0503 11:48:13.495717 1324 log.go:172] (0xc000138840) (0xc0003a2000) Stream removed, broadcasting: 3\nI0503 11:48:13.495740 1324 log.go:172] (0xc000138840) (0xc000125360) Stream removed, broadcasting: 5\n" May 3 11:48:13.498: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 3 11:48:13.498: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 3 11:48:13.498: INFO: Waiting for statefulset status.replicas updated to 0 May 3 11:48:13.501: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 May 3 11:48:23.511: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 3 11:48:23.511: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 3 11:48:23.511: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 3 11:48:23.541: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999799s May 3 
11:48:24.547: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.976212641s May 3 11:48:25.552: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.970736664s May 3 11:48:26.558: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.965632544s May 3 11:48:27.563: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.959177836s May 3 11:48:28.567: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.954189159s May 3 11:48:29.572: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.950063117s May 3 11:48:30.577: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.94550693s May 3 11:48:31.582: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.940416085s May 3 11:48:32.586: INFO: Verifying statefulset ss doesn't scale past 3 for another 935.545815ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-74bt4 May 3 11:48:33.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-74bt4 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 3 11:48:33.815: INFO: stderr: "I0503 11:48:33.730766 1347 log.go:172] (0xc0003e0420) (0xc000681540) Create stream\nI0503 11:48:33.730818 1347 log.go:172] (0xc0003e0420) (0xc000681540) Stream added, broadcasting: 1\nI0503 11:48:33.733902 1347 log.go:172] (0xc0003e0420) Reply frame received for 1\nI0503 11:48:33.733974 1347 log.go:172] (0xc0003e0420) (0xc0003d2140) Create stream\nI0503 11:48:33.734004 1347 log.go:172] (0xc0003e0420) (0xc0003d2140) Stream added, broadcasting: 3\nI0503 11:48:33.735345 1347 log.go:172] (0xc0003e0420) Reply frame received for 3\nI0503 11:48:33.735381 1347 log.go:172] (0xc0003e0420) (0xc0003d21e0) Create stream\nI0503 11:48:33.735391 1347 log.go:172] (0xc0003e0420) (0xc0003d21e0) Stream added, broadcasting: 5\nI0503 11:48:33.736216 1347 log.go:172] (0xc0003e0420) Reply frame received for 5\nI0503 11:48:33.809713 1347 log.go:172] (0xc0003e0420) Data frame received for 5\nI0503 11:48:33.809754 1347 log.go:172] (0xc0003d21e0) (5) Data frame handling\nI0503 11:48:33.809815 1347 log.go:172] (0xc0003e0420) Data frame received for 3\nI0503 11:48:33.809840 1347 log.go:172] (0xc0003d2140) (3) Data frame handling\nI0503 11:48:33.809872 1347 log.go:172] (0xc0003d2140) (3) Data frame sent\nI0503 11:48:33.809900 1347 log.go:172] (0xc0003e0420) Data frame received for 3\nI0503 11:48:33.809916 1347 log.go:172] (0xc0003d2140) (3) Data frame handling\nI0503 11:48:33.810871 1347 log.go:172] (0xc0003e0420) Data frame received for 1\nI0503 11:48:33.810908 1347 log.go:172] (0xc000681540) (1) Data frame handling\nI0503 11:48:33.810944 1347 log.go:172] (0xc000681540) (1) Data frame sent\nI0503 11:48:33.810974 1347 log.go:172] (0xc0003e0420) (0xc000681540) Stream removed, broadcasting: 1\nI0503 11:48:33.811003 1347 log.go:172] (0xc0003e0420) Go away received\nI0503 11:48:33.811309 1347 log.go:172] (0xc0003e0420) (0xc000681540) Stream removed, broadcasting: 1\nI0503 11:48:33.811337 1347 log.go:172] (0xc0003e0420) (0xc0003d2140) Stream removed, broadcasting: 3\nI0503 11:48:33.811353 1347 log.go:172] (0xc0003e0420) (0xc0003d21e0) Stream removed, broadcasting: 5\n" May 3 11:48:33.815: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 3 11:48:33.815: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> 
'/usr/share/nginx/html/index.html' May 3 11:48:33.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-74bt4 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 3 11:48:34.005: INFO: stderr: "I0503 11:48:33.942182 1370 log.go:172] (0xc0008682c0) (0xc000738640) Create stream\nI0503 11:48:33.942235 1370 log.go:172] (0xc0008682c0) (0xc000738640) Stream added, broadcasting: 1\nI0503 11:48:33.944410 1370 log.go:172] (0xc0008682c0) Reply frame received for 1\nI0503 11:48:33.944465 1370 log.go:172] (0xc0008682c0) (0xc000672e60) Create stream\nI0503 11:48:33.944488 1370 log.go:172] (0xc0008682c0) (0xc000672e60) Stream added, broadcasting: 3\nI0503 11:48:33.945391 1370 log.go:172] (0xc0008682c0) Reply frame received for 3\nI0503 11:48:33.945418 1370 log.go:172] (0xc0008682c0) (0xc0007386e0) Create stream\nI0503 11:48:33.945436 1370 log.go:172] (0xc0008682c0) (0xc0007386e0) Stream added, broadcasting: 5\nI0503 11:48:33.946098 1370 log.go:172] (0xc0008682c0) Reply frame received for 5\nI0503 11:48:34.001243 1370 log.go:172] (0xc0008682c0) Data frame received for 5\nI0503 11:48:34.001289 1370 log.go:172] (0xc0007386e0) (5) Data frame handling\nI0503 11:48:34.001304 1370 log.go:172] (0xc0008682c0) Data frame received for 3\nI0503 11:48:34.001308 1370 log.go:172] (0xc000672e60) (3) Data frame handling\nI0503 11:48:34.001313 1370 log.go:172] (0xc000672e60) (3) Data frame sent\nI0503 11:48:34.001318 1370 log.go:172] (0xc0008682c0) Data frame received for 3\nI0503 11:48:34.001322 1370 log.go:172] (0xc000672e60) (3) Data frame handling\nI0503 11:48:34.002519 1370 log.go:172] (0xc0008682c0) Data frame received for 1\nI0503 11:48:34.002537 1370 log.go:172] (0xc000738640) (1) Data frame handling\nI0503 11:48:34.002548 1370 log.go:172] (0xc000738640) (1) Data frame sent\nI0503 11:48:34.002559 1370 log.go:172] (0xc0008682c0) (0xc000738640) Stream removed, broadcasting: 1\nI0503 11:48:34.002577 1370 log.go:172] (0xc0008682c0) Go away received\nI0503 11:48:34.002862 1370 log.go:172] (0xc0008682c0) (0xc000738640) Stream removed, broadcasting: 1\nI0503 11:48:34.002879 1370 log.go:172] (0xc0008682c0) (0xc000672e60) Stream removed, broadcasting: 3\nI0503 11:48:34.002887 1370 log.go:172] (0xc0008682c0) (0xc0007386e0) Stream removed, broadcasting: 5\n" May 3 11:48:34.005: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 3 11:48:34.005: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 3 11:48:34.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-74bt4 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 3 11:48:34.292: INFO: stderr: "I0503 11:48:34.127167 1392 log.go:172] (0xc000764160) (0xc0005f2640) Create stream\nI0503 11:48:34.127209 1392 log.go:172] (0xc000764160) (0xc0005f2640) Stream added, broadcasting: 1\nI0503 11:48:34.137633 1392 log.go:172] (0xc000764160) Reply frame received for 1\nI0503 11:48:34.137827 1392 log.go:172] (0xc000764160) (0xc00041ad20) Create stream\nI0503 11:48:34.137950 1392 log.go:172] (0xc000764160) (0xc00041ad20) Stream added, broadcasting: 3\nI0503 11:48:34.140279 1392 log.go:172] (0xc000764160) Reply frame received for 3\nI0503 11:48:34.140327 1392 log.go:172] (0xc000764160) (0xc000126000) Create stream\nI0503 11:48:34.140336 1392 log.go:172] (0xc000764160) (0xc000126000) Stream added, broadcasting: 
5\nI0503 11:48:34.140990 1392 log.go:172] (0xc000764160) Reply frame received for 5\nI0503 11:48:34.284138 1392 log.go:172] (0xc000764160) Data frame received for 3\nI0503 11:48:34.284193 1392 log.go:172] (0xc00041ad20) (3) Data frame handling\nI0503 11:48:34.284226 1392 log.go:172] (0xc00041ad20) (3) Data frame sent\nI0503 11:48:34.284391 1392 log.go:172] (0xc000764160) Data frame received for 5\nI0503 11:48:34.284423 1392 log.go:172] (0xc000126000) (5) Data frame handling\nI0503 11:48:34.284624 1392 log.go:172] (0xc000764160) Data frame received for 3\nI0503 11:48:34.284642 1392 log.go:172] (0xc00041ad20) (3) Data frame handling\nI0503 11:48:34.288562 1392 log.go:172] (0xc000764160) Data frame received for 1\nI0503 11:48:34.288581 1392 log.go:172] (0xc0005f2640) (1) Data frame handling\nI0503 11:48:34.288593 1392 log.go:172] (0xc0005f2640) (1) Data frame sent\nI0503 11:48:34.288613 1392 log.go:172] (0xc000764160) (0xc0005f2640) Stream removed, broadcasting: 1\nI0503 11:48:34.288637 1392 log.go:172] (0xc000764160) Go away received\nI0503 11:48:34.288866 1392 log.go:172] (0xc000764160) (0xc0005f2640) Stream removed, broadcasting: 1\nI0503 11:48:34.288890 1392 log.go:172] (0xc000764160) (0xc00041ad20) Stream removed, broadcasting: 3\nI0503 11:48:34.288902 1392 log.go:172] (0xc000764160) (0xc000126000) Stream removed, broadcasting: 5\n" May 3 11:48:34.292: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 3 11:48:34.292: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 3 11:48:34.292: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 3 11:48:54.589: INFO: Deleting all statefulset in ns e2e-tests-statefulset-74bt4 May 3 11:48:54.592: INFO: Scaling statefulset ss to 0 May 3 11:48:54.602: INFO: Waiting for statefulset status.replicas updated to 0 May 3 11:48:54.605: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:48:54.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-74bt4" for this suite. 
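The scaling order asserted in this spec comes from the StatefulSet's default OrderedReady pod management: pods are created one ordinal at a time, each only after the previous one is Running and Ready, and they are deleted in reverse order, with the controller halting while any pod is unhealthy (which is what the mv of index.html, breaking the nginx readiness probe, forces above). A minimal sketch of the same ordered behavior, with placeholder names, labels and a headless service:

# Default (OrderedReady) StatefulSets scale up 0,1,2,... and scale down in reverse.
kubectl create namespace ss-demo
kubectl --namespace=ss-demo apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  clusterIP: None              # headless service backing the StatefulSet
  selector:
    app: ss-demo
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels:
      app: ss-demo
  template:
    metadata:
      labels:
        app: ss-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
        readinessProbe:        # a NotReady pod blocks further ordered scale-up/scale-down
          httpGet:
            path: /index.html
            port: 80
EOF

# Scale up: ss-1 is only created once ss-0 is Ready, ss-2 once ss-1 is Ready.
kubectl --namespace=ss-demo scale statefulset ss --replicas=3
kubectl --namespace=ss-demo get pods -l app=ss-demo      # re-run (or add -w) to watch the order

# Scale down: ss-2 is removed first, then ss-1, then ss-0.
kubectl --namespace=ss-demo scale statefulset ss --replicas=0
kubectl delete namespace ss-demo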
May 3 11:49:02.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:49:02.737: INFO: namespace: e2e-tests-statefulset-74bt4, resource: bindings, ignored listing per whitelist May 3 11:49:02.739: INFO: namespace e2e-tests-statefulset-74bt4 deletion completed in 8.109190219s • [SLOW TEST:90.971 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:49:02.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults May 3 11:49:02.914: INFO: Waiting up to 5m0s for pod "client-containers-15a260ee-8d34-11ea-b78d-0242ac110017" in namespace "e2e-tests-containers-9tz5w" to be "success or failure" May 3 11:49:02.942: INFO: Pod "client-containers-15a260ee-8d34-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 28.287564ms May 3 11:49:04.945: INFO: Pod "client-containers-15a260ee-8d34-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031216989s May 3 11:49:06.948: INFO: Pod "client-containers-15a260ee-8d34-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034076837s STEP: Saw pod success May 3 11:49:06.948: INFO: Pod "client-containers-15a260ee-8d34-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:49:06.950: INFO: Trying to get logs from node hunter-worker pod client-containers-15a260ee-8d34-11ea-b78d-0242ac110017 container test-container: STEP: delete the pod May 3 11:49:07.002: INFO: Waiting for pod client-containers-15a260ee-8d34-11ea-b78d-0242ac110017 to disappear May 3 11:49:07.080: INFO: Pod client-containers-15a260ee-8d34-11ea-b78d-0242ac110017 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:49:07.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-9tz5w" for this suite. 
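What this spec checks is simply that a container with neither command nor args runs the image's own ENTRYPOINT/CMD. A minimal hand-written equivalent, with placeholder names and nginx standing in for the conformance test image:

# A container that sets neither 'command' nor 'args' falls back to the image defaults.
kubectl create namespace defaults-demo
kubectl --namespace=defaults-demo apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: use-image-defaults
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/nginx:1.14-alpine   # no command/args: the image's CMD starts nginx
EOF

# No command/args appear in the stored spec, yet the pod reaches Running,
# i.e. the image's entrypoint decided what to execute.
kubectl --namespace=defaults-demo get pod use-image-defaults -o yaml | grep -E '^ +(command|args):' \
  || echo "no command/args set; image defaults in use"
kubectl --namespace=defaults-demo get pod use-image-defaults
kubectl delete namespace defaults-demo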
May 3 11:49:13.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:49:13.278: INFO: namespace: e2e-tests-containers-9tz5w, resource: bindings, ignored listing per whitelist May 3 11:49:13.322: INFO: namespace e2e-tests-containers-9tz5w deletion completed in 6.237012281s • [SLOW TEST:10.582 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:49:13.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 3 11:49:13.723: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1c1500e9-8d34-11ea-b78d-0242ac110017" in namespace "e2e-tests-projected-ssjw2" to be "success or failure" May 3 11:49:13.742: INFO: Pod "downwardapi-volume-1c1500e9-8d34-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 19.023385ms May 3 11:49:15.746: INFO: Pod "downwardapi-volume-1c1500e9-8d34-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023307162s May 3 11:49:17.751: INFO: Pod "downwardapi-volume-1c1500e9-8d34-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028043066s STEP: Saw pod success May 3 11:49:17.751: INFO: Pod "downwardapi-volume-1c1500e9-8d34-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:49:17.754: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-1c1500e9-8d34-11ea-b78d-0242ac110017 container client-container: STEP: delete the pod May 3 11:49:17.779: INFO: Waiting for pod downwardapi-volume-1c1500e9-8d34-11ea-b78d-0242ac110017 to disappear May 3 11:49:17.784: INFO: Pod downwardapi-volume-1c1500e9-8d34-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:49:17.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-ssjw2" for this suite. 
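The projected downwardAPI case above surfaces the container's memory limit as a file in a projected volume. A minimal sketch, with illustrative names, image and limit value:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                        # illustrative
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory     # written to /etc/podinfo/mem_limit (in bytes by default)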
May 3 11:49:23.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:49:23.870: INFO: namespace: e2e-tests-projected-ssjw2, resource: bindings, ignored listing per whitelist May 3 11:49:23.941: INFO: namespace e2e-tests-projected-ssjw2 deletion completed in 6.154021078s • [SLOW TEST:10.619 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:49:23.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-45lt7 STEP: creating a selector STEP: Creating the service pods in kubernetes May 3 11:49:24.002: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 3 11:49:44.160: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.75:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-45lt7 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 3 11:49:44.160: INFO: >>> kubeConfig: /root/.kube/config I0503 11:49:44.193977 6 log.go:172] (0xc000da0580) (0xc002292aa0) Create stream I0503 11:49:44.194007 6 log.go:172] (0xc000da0580) (0xc002292aa0) Stream added, broadcasting: 1 I0503 11:49:44.195542 6 log.go:172] (0xc000da0580) Reply frame received for 1 I0503 11:49:44.195574 6 log.go:172] (0xc000da0580) (0xc002292be0) Create stream I0503 11:49:44.195589 6 log.go:172] (0xc000da0580) (0xc002292be0) Stream added, broadcasting: 3 I0503 11:49:44.196542 6 log.go:172] (0xc000da0580) Reply frame received for 3 I0503 11:49:44.196565 6 log.go:172] (0xc000da0580) (0xc0025c10e0) Create stream I0503 11:49:44.196576 6 log.go:172] (0xc000da0580) (0xc0025c10e0) Stream added, broadcasting: 5 I0503 11:49:44.197774 6 log.go:172] (0xc000da0580) Reply frame received for 5 I0503 11:49:44.279806 6 log.go:172] (0xc000da0580) Data frame received for 5 I0503 11:49:44.279848 6 log.go:172] (0xc0025c10e0) (5) Data frame handling I0503 11:49:44.279895 6 log.go:172] (0xc000da0580) Data frame received for 3 I0503 11:49:44.279914 6 log.go:172] (0xc002292be0) (3) Data frame handling I0503 11:49:44.279929 6 log.go:172] (0xc002292be0) (3) Data frame sent I0503 11:49:44.279950 6 log.go:172] (0xc000da0580) Data frame received for 3 I0503 11:49:44.279961 6 log.go:172] (0xc002292be0) (3) Data frame handling I0503 11:49:44.281698 6 log.go:172] (0xc000da0580) Data frame received for 1 I0503 
11:49:44.281725 6 log.go:172] (0xc002292aa0) (1) Data frame handling I0503 11:49:44.281737 6 log.go:172] (0xc002292aa0) (1) Data frame sent I0503 11:49:44.281748 6 log.go:172] (0xc000da0580) (0xc002292aa0) Stream removed, broadcasting: 1 I0503 11:49:44.281762 6 log.go:172] (0xc000da0580) Go away received I0503 11:49:44.281965 6 log.go:172] (0xc000da0580) (0xc002292aa0) Stream removed, broadcasting: 1 I0503 11:49:44.281989 6 log.go:172] (0xc000da0580) (0xc002292be0) Stream removed, broadcasting: 3 I0503 11:49:44.282003 6 log.go:172] (0xc000da0580) (0xc0025c10e0) Stream removed, broadcasting: 5 May 3 11:49:44.282: INFO: Found all expected endpoints: [netserver-0] May 3 11:49:44.285: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.89:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-45lt7 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 3 11:49:44.285: INFO: >>> kubeConfig: /root/.kube/config I0503 11:49:44.314539 6 log.go:172] (0xc000b7c2c0) (0xc002593a40) Create stream I0503 11:49:44.314572 6 log.go:172] (0xc000b7c2c0) (0xc002593a40) Stream added, broadcasting: 1 I0503 11:49:44.316946 6 log.go:172] (0xc000b7c2c0) Reply frame received for 1 I0503 11:49:44.317033 6 log.go:172] (0xc000b7c2c0) (0xc00268f540) Create stream I0503 11:49:44.317062 6 log.go:172] (0xc000b7c2c0) (0xc00268f540) Stream added, broadcasting: 3 I0503 11:49:44.318341 6 log.go:172] (0xc000b7c2c0) Reply frame received for 3 I0503 11:49:44.318374 6 log.go:172] (0xc000b7c2c0) (0xc002292c80) Create stream I0503 11:49:44.318385 6 log.go:172] (0xc000b7c2c0) (0xc002292c80) Stream added, broadcasting: 5 I0503 11:49:44.319454 6 log.go:172] (0xc000b7c2c0) Reply frame received for 5 I0503 11:49:44.388616 6 log.go:172] (0xc000b7c2c0) Data frame received for 3 I0503 11:49:44.388656 6 log.go:172] (0xc00268f540) (3) Data frame handling I0503 11:49:44.388686 6 log.go:172] (0xc00268f540) (3) Data frame sent I0503 11:49:44.388721 6 log.go:172] (0xc000b7c2c0) Data frame received for 3 I0503 11:49:44.388794 6 log.go:172] (0xc00268f540) (3) Data frame handling I0503 11:49:44.389333 6 log.go:172] (0xc000b7c2c0) Data frame received for 5 I0503 11:49:44.389361 6 log.go:172] (0xc002292c80) (5) Data frame handling I0503 11:49:44.391675 6 log.go:172] (0xc000b7c2c0) Data frame received for 1 I0503 11:49:44.391712 6 log.go:172] (0xc002593a40) (1) Data frame handling I0503 11:49:44.391731 6 log.go:172] (0xc002593a40) (1) Data frame sent I0503 11:49:44.391743 6 log.go:172] (0xc000b7c2c0) (0xc002593a40) Stream removed, broadcasting: 1 I0503 11:49:44.391822 6 log.go:172] (0xc000b7c2c0) (0xc002593a40) Stream removed, broadcasting: 1 I0503 11:49:44.391835 6 log.go:172] (0xc000b7c2c0) (0xc00268f540) Stream removed, broadcasting: 3 I0503 11:49:44.392050 6 log.go:172] (0xc000b7c2c0) Go away received I0503 11:49:44.392095 6 log.go:172] (0xc000b7c2c0) (0xc002292c80) Stream removed, broadcasting: 5 May 3 11:49:44.392: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:49:44.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-45lt7" for this suite. 
May 3 11:50:08.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:50:08.452: INFO: namespace: e2e-tests-pod-network-test-45lt7, resource: bindings, ignored listing per whitelist May 3 11:50:08.583: INFO: namespace e2e-tests-pod-network-test-45lt7 deletion completed in 24.184482693s • [SLOW TEST:44.642 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:50:08.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-3cd60270-8d34-11ea-b78d-0242ac110017 STEP: Creating a pod to test consume configMaps May 3 11:50:08.729: INFO: Waiting up to 5m0s for pod "pod-configmaps-3cde2e72-8d34-11ea-b78d-0242ac110017" in namespace "e2e-tests-configmap-r68gr" to be "success or failure" May 3 11:50:08.740: INFO: Pod "pod-configmaps-3cde2e72-8d34-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 10.971732ms May 3 11:50:10.920: INFO: Pod "pod-configmaps-3cde2e72-8d34-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19116257s May 3 11:50:12.926: INFO: Pod "pod-configmaps-3cde2e72-8d34-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.196830963s STEP: Saw pod success May 3 11:50:12.926: INFO: Pod "pod-configmaps-3cde2e72-8d34-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:50:12.929: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-3cde2e72-8d34-11ea-b78d-0242ac110017 container configmap-volume-test: STEP: delete the pod May 3 11:50:13.011: INFO: Waiting for pod pod-configmaps-3cde2e72-8d34-11ea-b78d-0242ac110017 to disappear May 3 11:50:13.016: INFO: Pod pod-configmaps-3cde2e72-8d34-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:50:13.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-r68gr" for this suite. 
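The ConfigMap case above mounts selected keys at mapped paths with an explicit per-item file mode. A minimal sketch; the ConfigMap name, key and paths are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-mapped
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                        # illustrative
    command: ["sh", "-c", "ls -l /etc/cm/path/to/data && cat /etc/cm/path/to/data"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: my-configmap                  # illustrative ConfigMap holding a key named data-1
      items:
      - key: data-1                       # only this key is projected...
        path: path/to/data                # ...at this relative path in the volume
        mode: 0400                        # per-item file mode, the "Item mode" the test sets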
May 3 11:50:19.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:50:19.136: INFO: namespace: e2e-tests-configmap-r68gr, resource: bindings, ignored listing per whitelist May 3 11:50:19.136: INFO: namespace e2e-tests-configmap-r68gr deletion completed in 6.117679384s • [SLOW TEST:10.553 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:50:19.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 3 11:50:19.242: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 3 11:50:19.288: INFO: Waiting for terminating namespaces to be deleted... May 3 11:50:19.290: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 3 11:50:19.296: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 3 11:50:19.296: INFO: Container kube-proxy ready: true, restart count 0 May 3 11:50:19.296: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 3 11:50:19.296: INFO: Container kindnet-cni ready: true, restart count 0 May 3 11:50:19.296: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 3 11:50:19.296: INFO: Container coredns ready: true, restart count 0 May 3 11:50:19.296: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 3 11:50:19.302: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 3 11:50:19.302: INFO: Container kindnet-cni ready: true, restart count 0 May 3 11:50:19.302: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 3 11:50:19.302: INFO: Container coredns ready: true, restart count 0 May 3 11:50:19.302: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 3 11:50:19.302: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160b825a9fdec8e0], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
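The FailedScheduling event above is what a Pod with a nodeSelector that no node satisfies produces. A minimal reproduction, with an illustrative label key/value that no node carries:

apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    restricted-label: value-no-node-has   # no node is labelled like this, so the pod stays Pending
  containers:
  - name: app
    image: nginx                          # illustrative

The scheduler then records an event of the form "0/3 nodes are available: 3 node(s) didn't match node selector.", which is the event the test asserts on.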
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:50:20.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-xn47z" for this suite. May 3 11:50:26.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:50:26.403: INFO: namespace: e2e-tests-sched-pred-xn47z, resource: bindings, ignored listing per whitelist May 3 11:50:26.449: INFO: namespace e2e-tests-sched-pred-xn47z deletion completed in 6.09320631s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.312 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:50:26.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 3 11:50:26.577: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-n4rgf,SelfLink:/api/v1/namespaces/e2e-tests-watch-n4rgf/configmaps/e2e-watch-test-watch-closed,UID:477b79cf-8d34-11ea-99e8-0242ac110002,ResourceVersion:8528694,Generation:0,CreationTimestamp:2020-05-03 11:50:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 3 11:50:26.577: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-n4rgf,SelfLink:/api/v1/namespaces/e2e-tests-watch-n4rgf/configmaps/e2e-watch-test-watch-closed,UID:477b79cf-8d34-11ea-99e8-0242ac110002,ResourceVersion:8528695,Generation:0,CreationTimestamp:2020-05-03 11:50:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 
1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 3 11:50:26.597: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-n4rgf,SelfLink:/api/v1/namespaces/e2e-tests-watch-n4rgf/configmaps/e2e-watch-test-watch-closed,UID:477b79cf-8d34-11ea-99e8-0242ac110002,ResourceVersion:8528696,Generation:0,CreationTimestamp:2020-05-03 11:50:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 3 11:50:26.597: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-n4rgf,SelfLink:/api/v1/namespaces/e2e-tests-watch-n4rgf/configmaps/e2e-watch-test-watch-closed,UID:477b79cf-8d34-11ea-99e8-0242ac110002,ResourceVersion:8528697,Generation:0,CreationTimestamp:2020-05-03 11:50:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:50:26.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-n4rgf" for this suite. 
May 3 11:50:32.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:50:32.655: INFO: namespace: e2e-tests-watch-n4rgf, resource: bindings, ignored listing per whitelist May 3 11:50:32.708: INFO: namespace e2e-tests-watch-n4rgf deletion completed in 6.107982055s • [SLOW TEST:6.259 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:50:32.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-4b3917f1-8d34-11ea-b78d-0242ac110017 STEP: Creating a pod to test consume configMaps May 3 11:50:32.827: INFO: Waiting up to 5m0s for pod "pod-configmaps-4b3af282-8d34-11ea-b78d-0242ac110017" in namespace "e2e-tests-configmap-7kxmt" to be "success or failure" May 3 11:50:32.848: INFO: Pod "pod-configmaps-4b3af282-8d34-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 20.67699ms May 3 11:50:34.851: INFO: Pod "pod-configmaps-4b3af282-8d34-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023927115s May 3 11:50:36.855: INFO: Pod "pod-configmaps-4b3af282-8d34-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028283613s STEP: Saw pod success May 3 11:50:36.855: INFO: Pod "pod-configmaps-4b3af282-8d34-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:50:36.858: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-4b3af282-8d34-11ea-b78d-0242ac110017 container configmap-volume-test: STEP: delete the pod May 3 11:50:36.960: INFO: Waiting for pod pod-configmaps-4b3af282-8d34-11ea-b78d-0242ac110017 to disappear May 3 11:50:37.010: INFO: Pod pod-configmaps-4b3af282-8d34-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:50:37.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-7kxmt" for this suite. 
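The "multiple volumes in the same pod" case mounts the same ConfigMap through two separate volumes. A minimal sketch; the ConfigMap name and key are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-twice
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                 # illustrative
    command: ["sh", "-c", "cat /etc/cm-a/data-1 /etc/cm-b/data-1"]
    volumeMounts:
    - name: cm-a
      mountPath: /etc/cm-a
    - name: cm-b
      mountPath: /etc/cm-b
  volumes:
  - name: cm-a
    configMap:
      name: shared-configmap       # the same ConfigMap backs both volumes
  - name: cm-b
    configMap:
      name: shared-configmap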
May 3 11:50:43.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:50:43.058: INFO: namespace: e2e-tests-configmap-7kxmt, resource: bindings, ignored listing per whitelist May 3 11:50:43.138: INFO: namespace e2e-tests-configmap-7kxmt deletion completed in 6.123497596s • [SLOW TEST:10.429 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:50:43.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 3 11:50:43.235: INFO: Waiting up to 5m0s for pod "downward-api-516dcdcc-8d34-11ea-b78d-0242ac110017" in namespace "e2e-tests-downward-api-rz5ts" to be "success or failure" May 3 11:50:43.280: INFO: Pod "downward-api-516dcdcc-8d34-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 44.707998ms May 3 11:50:45.330: INFO: Pod "downward-api-516dcdcc-8d34-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095376932s May 3 11:50:47.335: INFO: Pod "downward-api-516dcdcc-8d34-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.099713841s STEP: Saw pod success May 3 11:50:47.335: INFO: Pod "downward-api-516dcdcc-8d34-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:50:47.338: INFO: Trying to get logs from node hunter-worker pod downward-api-516dcdcc-8d34-11ea-b78d-0242ac110017 container dapi-container: STEP: delete the pod May 3 11:50:47.383: INFO: Waiting for pod downward-api-516dcdcc-8d34-11ea-b78d-0242ac110017 to disappear May 3 11:50:47.404: INFO: Pod downward-api-516dcdcc-8d34-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:50:47.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-rz5ts" for this suite. 
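The Downward API env case above injects the pod's own UID into a container environment variable. A minimal sketch; names and image are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                 # illustrative
    command: ["sh", "-c", "env | grep POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid  # the pod UID the test looks for in the container log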
May 3 11:50:53.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:50:53.475: INFO: namespace: e2e-tests-downward-api-rz5ts, resource: bindings, ignored listing per whitelist May 3 11:50:53.507: INFO: namespace e2e-tests-downward-api-rz5ts deletion completed in 6.09896778s • [SLOW TEST:10.369 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:50:53.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 3 11:50:58.167: INFO: Successfully updated pod "annotationupdate579ef284-8d34-11ea-b78d-0242ac110017" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:51:02.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-z8qp5" for this suite. 
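The annotation-update case relies on a downwardAPI volume: metadata.annotations is projected into a file, and the kubelet refreshes that file after the pod's annotations are patched, which is the "Successfully updated pod" step above. A minimal sketch, with illustrative names and annotation:

apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-example
  annotations:
    build: one                     # updating this annotation later is reflected in the mounted file
spec:
  containers:
  - name: client-container
    image: busybox                 # illustrative
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations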
May 3 11:51:24.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:51:24.242: INFO: namespace: e2e-tests-downward-api-z8qp5, resource: bindings, ignored listing per whitelist May 3 11:51:24.300: INFO: namespace e2e-tests-downward-api-z8qp5 deletion completed in 22.098565893s • [SLOW TEST:30.793 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:51:24.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 3 11:51:28.566: INFO: Waiting up to 5m0s for pod "client-envvars-6c712393-8d34-11ea-b78d-0242ac110017" in namespace "e2e-tests-pods-rblq8" to be "success or failure" May 3 11:51:28.623: INFO: Pod "client-envvars-6c712393-8d34-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 56.772851ms May 3 11:51:30.687: INFO: Pod "client-envvars-6c712393-8d34-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121448417s May 3 11:51:32.692: INFO: Pod "client-envvars-6c712393-8d34-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.126484333s STEP: Saw pod success May 3 11:51:32.692: INFO: Pod "client-envvars-6c712393-8d34-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:51:32.696: INFO: Trying to get logs from node hunter-worker pod client-envvars-6c712393-8d34-11ea-b78d-0242ac110017 container env3cont: STEP: delete the pod May 3 11:51:32.755: INFO: Waiting for pod client-envvars-6c712393-8d34-11ea-b78d-0242ac110017 to disappear May 3 11:51:32.886: INFO: Pod client-envvars-6c712393-8d34-11ea-b78d-0242ac110017 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:51:32.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-rblq8" for this suite. 
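The "environment variables for services" case depends on the kubelet injecting Service discovery variables into containers that start after the Service exists. With a Service roughly like the illustrative one below, such containers would see FOOSERVICE_SERVICE_HOST and FOOSERVICE_SERVICE_PORT (plus the related *_PORT_* variables) in their environment, which is the kind of content the env3cont container's log is checked for; the name, selector and ports here are assumptions, not taken from the run:

apiVersion: v1
kind: Service
metadata:
  name: fooservice               # the name is upper-cased, with dashes turned into underscores, to form the env var prefix
spec:
  selector:
    name: server                 # illustrative; must match the backing pods' labels
  ports:
  - port: 8765
    targetPort: 8080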
May 3 11:52:22.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:52:22.934: INFO: namespace: e2e-tests-pods-rblq8, resource: bindings, ignored listing per whitelist May 3 11:52:22.992: INFO: namespace e2e-tests-pods-rblq8 deletion completed in 50.101310803s • [SLOW TEST:58.692 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:52:22.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components May 3 11:52:23.146: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend May 3 11:52:23.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ffrz5' May 3 11:52:23.454: INFO: stderr: "" May 3 11:52:23.454: INFO: stdout: "service/redis-slave created\n" May 3 11:52:23.454: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend May 3 11:52:23.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ffrz5' May 3 11:52:23.815: INFO: stderr: "" May 3 11:52:23.815: INFO: stdout: "service/redis-master created\n" May 3 11:52:23.815: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 3 11:52:23.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ffrz5' May 3 11:52:24.164: INFO: stderr: "" May 3 11:52:24.164: INFO: stdout: "service/frontend created\n" May 3 11:52:24.164: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 May 3 11:52:24.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ffrz5' May 3 11:52:24.454: INFO: stderr: "" May 3 11:52:24.454: INFO: stdout: "deployment.extensions/frontend created\n" May 3 11:52:24.455: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 3 11:52:24.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ffrz5' May 3 11:52:24.798: INFO: stderr: "" May 3 11:52:24.798: INFO: stdout: "deployment.extensions/redis-master created\n" May 3 11:52:24.798: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 May 3 11:52:24.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ffrz5' May 3 11:52:25.102: INFO: stderr: "" May 3 11:52:25.102: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app May 3 11:52:25.102: INFO: Waiting for all frontend pods to be Running. May 3 11:52:35.152: INFO: Waiting for frontend to serve content. May 3 11:52:35.169: INFO: Trying to add a new entry to the guestbook. May 3 11:52:35.186: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 3 11:52:35.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ffrz5' May 3 11:52:35.375: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 3 11:52:35.375: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources May 3 11:52:35.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ffrz5' May 3 11:52:35.554: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 3 11:52:35.554: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 3 11:52:35.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ffrz5' May 3 11:52:35.699: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 3 11:52:35.699: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 3 11:52:35.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ffrz5' May 3 11:52:35.810: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 3 11:52:35.810: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources May 3 11:52:35.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ffrz5' May 3 11:52:35.945: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 3 11:52:35.945: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 3 11:52:35.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ffrz5' May 3 11:52:36.256: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 3 11:52:36.256: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:52:36.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ffrz5" for this suite. 
May 3 11:53:16.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:53:16.638: INFO: namespace: e2e-tests-kubectl-ffrz5, resource: bindings, ignored listing per whitelist May 3 11:53:16.655: INFO: namespace e2e-tests-kubectl-ffrz5 deletion completed in 40.235687016s • [SLOW TEST:53.662 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:53:16.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-acf03b9a-8d34-11ea-b78d-0242ac110017 STEP: Creating a pod to test consume configMaps May 3 11:53:16.780: INFO: Waiting up to 5m0s for pod "pod-configmaps-acf35613-8d34-11ea-b78d-0242ac110017" in namespace "e2e-tests-configmap-85d2q" to be "success or failure" May 3 11:53:16.792: INFO: Pod "pod-configmaps-acf35613-8d34-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 12.088729ms May 3 11:53:18.796: INFO: Pod "pod-configmaps-acf35613-8d34-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016478677s May 3 11:53:20.801: INFO: Pod "pod-configmaps-acf35613-8d34-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021066731s STEP: Saw pod success May 3 11:53:20.801: INFO: Pod "pod-configmaps-acf35613-8d34-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:53:20.804: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-acf35613-8d34-11ea-b78d-0242ac110017 container configmap-volume-test: STEP: delete the pod May 3 11:53:20.828: INFO: Waiting for pod pod-configmaps-acf35613-8d34-11ea-b78d-0242ac110017 to disappear May 3 11:53:20.868: INFO: Pod pod-configmaps-acf35613-8d34-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:53:20.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-85d2q" for this suite. 
May 3 11:53:26.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:53:26.960: INFO: namespace: e2e-tests-configmap-85d2q, resource: bindings, ignored listing per whitelist May 3 11:53:26.965: INFO: namespace e2e-tests-configmap-85d2q deletion completed in 6.075332977s • [SLOW TEST:10.309 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:53:26.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 3 11:53:27.119: INFO: Pod name pod-release: Found 0 pods out of 1 May 3 11:53:32.124: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:53:33.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-tp6df" for this suite. 
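The ReplicationController case above checks the "release" path: when an owned pod's labels are changed so they no longer match the controller's selector, the controller orphans that pod and creates a replacement. A minimal RC along these lines; names and image are illustrative:

apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release            # pods carrying this label are owned by the RC
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: app
        image: nginx             # illustrative

Relabelling the pod with something like kubectl label pod <pod-name> name=released --overwrite takes it out of the selector; the RC releases it and spins up a new matching pod, which is the behaviour verified above.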
May 3 11:53:41.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:53:41.298: INFO: namespace: e2e-tests-replication-controller-tp6df, resource: bindings, ignored listing per whitelist May 3 11:53:41.332: INFO: namespace e2e-tests-replication-controller-tp6df deletion completed in 8.094054028s • [SLOW TEST:14.367 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:53:41.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-sk97p May 3 11:53:47.460: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-sk97p STEP: checking the pod's current state and verifying that restartCount is present May 3 11:53:47.463: INFO: Initial restart count of pod liveness-exec is 0 May 3 11:54:41.571: INFO: Restart count of pod e2e-tests-container-probe-sk97p/liveness-exec is now 1 (54.107944286s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:54:41.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-sk97p" for this suite. 
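The exec liveness case above is the classic pattern of a probe that runs a command inside the container; once /tmp/health disappears the probe fails and the kubelet restarts the container, which is the restart-count bump the test waits for. A minimal sketch; the image, timings and the touch/sleep/rm sequence are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox                           # illustrative
    command: ["sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]      # succeeds only while the file exists
      initialDelaySeconds: 5
      periodSeconds: 5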
May 3 11:54:47.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:54:47.761: INFO: namespace: e2e-tests-container-probe-sk97p, resource: bindings, ignored listing per whitelist May 3 11:54:47.763: INFO: namespace e2e-tests-container-probe-sk97p deletion completed in 6.113204362s • [SLOW TEST:66.431 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:54:47.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-r8pvr STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-r8pvr to expose endpoints map[] May 3 11:54:47.935: INFO: Get endpoints failed (38.614527ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 3 11:54:48.939: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-r8pvr exposes endpoints map[] (1.042193268s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-r8pvr STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-r8pvr to expose endpoints map[pod1:[80]] May 3 11:54:52.981: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-r8pvr exposes endpoints map[pod1:[80]] (4.036135191s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-r8pvr STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-r8pvr to expose endpoints map[pod1:[80] pod2:[80]] May 3 11:54:56.117: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-r8pvr exposes endpoints map[pod1:[80] pod2:[80]] (3.131626628s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-r8pvr STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-r8pvr to expose endpoints map[pod2:[80]] May 3 11:54:56.132: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-r8pvr exposes endpoints map[pod2:[80]] (5.930764ms elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-r8pvr STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-r8pvr to expose endpoints map[] May 3 11:54:57.221: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-r8pvr exposes endpoints map[] (1.085078131s elapsed) [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:54:57.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-r8pvr" for this suite. May 3 11:55:03.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:55:03.308: INFO: namespace: e2e-tests-services-r8pvr, resource: bindings, ignored listing per whitelist May 3 11:55:03.368: INFO: namespace e2e-tests-services-r8pvr deletion completed in 6.094517502s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:15.605 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:55:03.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-ec89750b-8d34-11ea-b78d-0242ac110017 STEP: Creating secret with name s-test-opt-upd-ec8975b7-8d34-11ea-b78d-0242ac110017 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-ec89750b-8d34-11ea-b78d-0242ac110017 STEP: Updating secret s-test-opt-upd-ec8975b7-8d34-11ea-b78d-0242ac110017 STEP: Creating secret with name s-test-opt-create-ec8975ec-8d34-11ea-b78d-0242ac110017 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:55:11.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vrd7s" for this suite. 
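The projected-secret case above exercises optional secret sources: one referenced Secret is deleted, another is updated, and a third is created only after the pod starts, and the projected volume is expected to converge without the pod failing. A minimal sketch with shortened, illustrative Secret names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  containers:
  - name: projected-secret-volume-test
    image: busybox                   # illustrative
    command: ["sh", "-c", "while true; do ls /etc/projected; sleep 5; done"]
    volumeMounts:
    - name: secrets
      mountPath: /etc/projected
  volumes:
  - name: secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-del       # may be deleted later; optional keeps the pod running
          optional: true
      - secret:
          name: s-test-opt-create    # may not exist yet; picked up once it is created
          optional: true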
May 3 11:55:33.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:55:33.643: INFO: namespace: e2e-tests-projected-vrd7s, resource: bindings, ignored listing per whitelist May 3 11:55:33.707: INFO: namespace e2e-tests-projected-vrd7s deletion completed in 22.101800175s • [SLOW TEST:30.339 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:55:33.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-bsxbb in namespace e2e-tests-proxy-msphf I0503 11:55:33.944915 6 runners.go:184] Created replication controller with name: proxy-service-bsxbb, namespace: e2e-tests-proxy-msphf, replica count: 1 I0503 11:55:34.995316 6 runners.go:184] proxy-service-bsxbb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0503 11:55:35.995554 6 runners.go:184] proxy-service-bsxbb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0503 11:55:36.995765 6 runners.go:184] proxy-service-bsxbb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0503 11:55:37.995991 6 runners.go:184] proxy-service-bsxbb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0503 11:55:38.996173 6 runners.go:184] proxy-service-bsxbb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0503 11:55:39.996409 6 runners.go:184] proxy-service-bsxbb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0503 11:55:40.996681 6 runners.go:184] proxy-service-bsxbb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0503 11:55:41.996943 6 runners.go:184] proxy-service-bsxbb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0503 11:55:42.997429 6 runners.go:184] proxy-service-bsxbb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0503 11:55:43.997698 6 runners.go:184] proxy-service-bsxbb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0503 11:55:44.997954 6 runners.go:184] 
proxy-service-bsxbb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0503 11:55:45.998214 6 runners.go:184] proxy-service-bsxbb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0503 11:55:46.998427 6 runners.go:184] proxy-service-bsxbb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0503 11:55:47.998648 6 runners.go:184] proxy-service-bsxbb Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 3 11:55:48.002: INFO: setup took 14.163425138s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 3 11:55:48.009: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-msphf/pods/http:proxy-service-bsxbb-7dxpm:160/proxy/: foo (200; 7.230918ms) May 3 11:55:48.009: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-msphf/pods/proxy-service-bsxbb-7dxpm:160/proxy/: foo (200; 7.287281ms) May 3 11:55:48.009: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-msphf/pods/http:proxy-service-bsxbb-7dxpm:1080/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 3 11:56:07.803: INFO: Waiting up to 5m0s for pod "downwardapi-volume-12e32f3c-8d35-11ea-b78d-0242ac110017" in namespace "e2e-tests-downward-api-cjh25" to be "success or failure" May 3 11:56:07.826: INFO: Pod "downwardapi-volume-12e32f3c-8d35-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 22.332056ms May 3 11:56:09.829: INFO: Pod "downwardapi-volume-12e32f3c-8d35-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025794392s May 3 11:56:11.832: INFO: Pod "downwardapi-volume-12e32f3c-8d35-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028879757s STEP: Saw pod success May 3 11:56:11.832: INFO: Pod "downwardapi-volume-12e32f3c-8d35-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:56:11.835: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-12e32f3c-8d35-11ea-b78d-0242ac110017 container client-container: STEP: delete the pod May 3 11:56:11.862: INFO: Waiting for pod downwardapi-volume-12e32f3c-8d35-11ea-b78d-0242ac110017 to disappear May 3 11:56:11.930: INFO: Pod downwardapi-volume-12e32f3c-8d35-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:56:11.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-cjh25" for this suite. 
May 3 11:56:17.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:56:17.980: INFO: namespace: e2e-tests-downward-api-cjh25, resource: bindings, ignored listing per whitelist May 3 11:56:18.020: INFO: namespace e2e-tests-downward-api-cjh25 deletion completed in 6.086600761s • [SLOW TEST:10.516 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:56:18.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 3 11:56:26.175: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 3 11:56:26.185: INFO: Pod pod-with-poststart-http-hook still exists May 3 11:56:28.185: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 3 11:56:28.190: INFO: Pod pod-with-poststart-http-hook still exists May 3 11:56:30.185: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 3 11:56:30.189: INFO: Pod pod-with-poststart-http-hook still exists May 3 11:56:32.185: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 3 11:56:32.188: INFO: Pod pod-with-poststart-http-hook still exists May 3 11:56:34.185: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 3 11:56:34.189: INFO: Pod pod-with-poststart-http-hook still exists May 3 11:56:36.185: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 3 11:56:36.384: INFO: Pod pod-with-poststart-http-hook still exists May 3 11:56:38.185: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 3 11:56:38.190: INFO: Pod pod-with-poststart-http-hook still exists May 3 11:56:40.185: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 3 11:56:40.304: INFO: Pod pod-with-poststart-http-hook still exists May 3 11:56:42.185: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 3 11:56:42.944: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:56:42.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-6kkql" for this suite. May 3 11:57:09.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:57:09.289: INFO: namespace: e2e-tests-container-lifecycle-hook-6kkql, resource: bindings, ignored listing per whitelist May 3 11:57:09.341: INFO: namespace e2e-tests-container-lifecycle-hook-6kkql deletion completed in 26.393434608s • [SLOW TEST:51.321 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:57:09.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-37a51fa5-8d35-11ea-b78d-0242ac110017 STEP: Creating a pod to test consume configMaps May 3 11:57:09.635: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-37b48de1-8d35-11ea-b78d-0242ac110017" in namespace "e2e-tests-projected-l4sgz" to be "success or failure" May 3 11:57:09.696: INFO: Pod "pod-projected-configmaps-37b48de1-8d35-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 61.246278ms May 3 11:57:12.057: INFO: Pod "pod-projected-configmaps-37b48de1-8d35-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.422508338s May 3 11:57:14.060: INFO: Pod "pod-projected-configmaps-37b48de1-8d35-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.425863969s May 3 11:57:16.065: INFO: Pod "pod-projected-configmaps-37b48de1-8d35-11ea-b78d-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 6.430449364s May 3 11:57:18.088: INFO: Pod "pod-projected-configmaps-37b48de1-8d35-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.453022244s STEP: Saw pod success May 3 11:57:18.088: INFO: Pod "pod-projected-configmaps-37b48de1-8d35-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:57:18.091: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-37b48de1-8d35-11ea-b78d-0242ac110017 container projected-configmap-volume-test: STEP: delete the pod May 3 11:57:18.174: INFO: Waiting for pod pod-projected-configmaps-37b48de1-8d35-11ea-b78d-0242ac110017 to disappear May 3 11:57:18.219: INFO: Pod pod-projected-configmaps-37b48de1-8d35-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:57:18.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-l4sgz" for this suite. May 3 11:57:26.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:57:26.344: INFO: namespace: e2e-tests-projected-l4sgz, resource: bindings, ignored listing per whitelist May 3 11:57:26.364: INFO: namespace e2e-tests-projected-l4sgz deletion completed in 8.141645131s • [SLOW TEST:17.022 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:57:26.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0503 11:57:28.109105 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 3 11:57:28.109: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:57:28.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-79tg9" for this suite. May 3 11:57:34.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:57:34.215: INFO: namespace: e2e-tests-gc-79tg9, resource: bindings, ignored listing per whitelist May 3 11:57:34.283: INFO: namespace e2e-tests-gc-79tg9 deletion completed in 6.171051725s • [SLOW TEST:7.919 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:57:34.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-grgl STEP: Creating a pod to test atomic-volume-subpath May 3 11:57:34.416: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-grgl" in namespace "e2e-tests-subpath-9ng55" to be "success or failure" May 3 11:57:34.463: INFO: Pod "pod-subpath-test-secret-grgl": Phase="Pending", Reason="", readiness=false. Elapsed: 46.892084ms May 3 11:57:36.467: INFO: Pod "pod-subpath-test-secret-grgl": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.050760546s May 3 11:57:38.519: INFO: Pod "pod-subpath-test-secret-grgl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10271392s May 3 11:57:40.523: INFO: Pod "pod-subpath-test-secret-grgl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106961579s May 3 11:57:42.528: INFO: Pod "pod-subpath-test-secret-grgl": Phase="Running", Reason="", readiness=false. Elapsed: 8.111452429s May 3 11:57:44.532: INFO: Pod "pod-subpath-test-secret-grgl": Phase="Running", Reason="", readiness=false. Elapsed: 10.115910763s May 3 11:57:46.537: INFO: Pod "pod-subpath-test-secret-grgl": Phase="Running", Reason="", readiness=false. Elapsed: 12.120610236s May 3 11:57:48.542: INFO: Pod "pod-subpath-test-secret-grgl": Phase="Running", Reason="", readiness=false. Elapsed: 14.125209312s May 3 11:57:50.546: INFO: Pod "pod-subpath-test-secret-grgl": Phase="Running", Reason="", readiness=false. Elapsed: 16.129717698s May 3 11:57:52.551: INFO: Pod "pod-subpath-test-secret-grgl": Phase="Running", Reason="", readiness=false. Elapsed: 18.134590443s May 3 11:57:54.555: INFO: Pod "pod-subpath-test-secret-grgl": Phase="Running", Reason="", readiness=false. Elapsed: 20.13820558s May 3 11:57:56.559: INFO: Pod "pod-subpath-test-secret-grgl": Phase="Running", Reason="", readiness=false. Elapsed: 22.14284168s May 3 11:57:58.563: INFO: Pod "pod-subpath-test-secret-grgl": Phase="Running", Reason="", readiness=false. Elapsed: 24.146228752s May 3 11:58:00.567: INFO: Pod "pod-subpath-test-secret-grgl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.15018856s STEP: Saw pod success May 3 11:58:00.567: INFO: Pod "pod-subpath-test-secret-grgl" satisfied condition "success or failure" May 3 11:58:00.569: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-secret-grgl container test-container-subpath-secret-grgl: STEP: delete the pod May 3 11:58:00.707: INFO: Waiting for pod pod-subpath-test-secret-grgl to disappear May 3 11:58:00.763: INFO: Pod pod-subpath-test-secret-grgl no longer exists STEP: Deleting pod pod-subpath-test-secret-grgl May 3 11:58:00.763: INFO: Deleting pod "pod-subpath-test-secret-grgl" in namespace "e2e-tests-subpath-9ng55" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:58:00.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-9ng55" for this suite. 
May 3 11:58:06.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:58:06.847: INFO: namespace: e2e-tests-subpath-9ng55, resource: bindings, ignored listing per whitelist May 3 11:58:06.867: INFO: namespace e2e-tests-subpath-9ng55 deletion completed in 6.096457746s • [SLOW TEST:32.583 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:58:06.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium May 3 11:58:07.010: INFO: Waiting up to 5m0s for pod "pod-59f212aa-8d35-11ea-b78d-0242ac110017" in namespace "e2e-tests-emptydir-dgkz6" to be "success or failure" May 3 11:58:07.022: INFO: Pod "pod-59f212aa-8d35-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 11.988361ms May 3 11:58:09.106: INFO: Pod "pod-59f212aa-8d35-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096167117s May 3 11:58:11.110: INFO: Pod "pod-59f212aa-8d35-11ea-b78d-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.100364252s May 3 11:58:13.114: INFO: Pod "pod-59f212aa-8d35-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.10433775s STEP: Saw pod success May 3 11:58:13.114: INFO: Pod "pod-59f212aa-8d35-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:58:13.117: INFO: Trying to get logs from node hunter-worker2 pod pod-59f212aa-8d35-11ea-b78d-0242ac110017 container test-container: STEP: delete the pod May 3 11:58:13.179: INFO: Waiting for pod pod-59f212aa-8d35-11ea-b78d-0242ac110017 to disappear May 3 11:58:13.188: INFO: Pod pod-59f212aa-8d35-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:58:13.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-dgkz6" for this suite. 
May 3 11:58:19.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:58:19.265: INFO: namespace: e2e-tests-emptydir-dgkz6, resource: bindings, ignored listing per whitelist May 3 11:58:19.279: INFO: namespace e2e-tests-emptydir-dgkz6 deletion completed in 6.08700816s • [SLOW TEST:12.412 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:58:19.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium May 3 11:58:19.435: INFO: Waiting up to 5m0s for pod "pod-61588b64-8d35-11ea-b78d-0242ac110017" in namespace "e2e-tests-emptydir-t8rqg" to be "success or failure" May 3 11:58:19.454: INFO: Pod "pod-61588b64-8d35-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 18.786103ms May 3 11:58:21.458: INFO: Pod "pod-61588b64-8d35-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022535354s May 3 11:58:23.462: INFO: Pod "pod-61588b64-8d35-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026258669s STEP: Saw pod success May 3 11:58:23.462: INFO: Pod "pod-61588b64-8d35-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:58:23.464: INFO: Trying to get logs from node hunter-worker2 pod pod-61588b64-8d35-11ea-b78d-0242ac110017 container test-container: STEP: delete the pod May 3 11:58:23.497: INFO: Waiting for pod pod-61588b64-8d35-11ea-b78d-0242ac110017 to disappear May 3 11:58:23.511: INFO: Pod pod-61588b64-8d35-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:58:23.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-t8rqg" for this suite. 
May 3 11:58:29.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:58:29.592: INFO: namespace: e2e-tests-emptydir-t8rqg, resource: bindings, ignored listing per whitelist May 3 11:58:29.627: INFO: namespace e2e-tests-emptydir-t8rqg deletion completed in 6.111609729s • [SLOW TEST:10.348 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:58:29.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:58:35.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-4vrxv" for this suite. May 3 11:58:41.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:58:42.035: INFO: namespace: e2e-tests-namespaces-4vrxv, resource: bindings, ignored listing per whitelist May 3 11:58:42.055: INFO: namespace e2e-tests-namespaces-4vrxv deletion completed in 6.109512305s STEP: Destroying namespace "e2e-tests-nsdeletetest-77cqr" for this suite. May 3 11:58:42.058: INFO: Namespace e2e-tests-nsdeletetest-77cqr was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-6r7sq" for this suite. 
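The namespace/service check exercised above can be reproduced by hand with kubectl; this is only an illustrative sketch, and the namespace and service names below (nsdelete-demo, demo-svc) are hypothetical rather than taken from this run:

kubectl create namespace nsdelete-demo
kubectl create service clusterip demo-svc --tcp=80:80 -n nsdelete-demo
kubectl delete namespace nsdelete-demo     # removes the namespace and the service inside it
# wait until the namespace has fully disappeared before recreating it
kubectl create namespace nsdelete-demo
kubectl get services -n nsdelete-demo      # the recreated namespace should report no services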
May 3 11:58:48.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:58:48.126: INFO: namespace: e2e-tests-nsdeletetest-6r7sq, resource: bindings, ignored listing per whitelist May 3 11:58:48.156: INFO: namespace e2e-tests-nsdeletetest-6r7sq deletion completed in 6.098725861s • [SLOW TEST:18.529 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:58:48.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-728692f2-8d35-11ea-b78d-0242ac110017 STEP: Creating a pod to test consume secrets May 3 11:58:48.263: INFO: Waiting up to 5m0s for pod "pod-secrets-728877e9-8d35-11ea-b78d-0242ac110017" in namespace "e2e-tests-secrets-4q459" to be "success or failure" May 3 11:58:48.304: INFO: Pod "pod-secrets-728877e9-8d35-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 40.690608ms May 3 11:58:50.307: INFO: Pod "pod-secrets-728877e9-8d35-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043840816s May 3 11:58:52.312: INFO: Pod "pod-secrets-728877e9-8d35-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048443692s May 3 11:58:54.344: INFO: Pod "pod-secrets-728877e9-8d35-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.081041225s STEP: Saw pod success May 3 11:58:54.344: INFO: Pod "pod-secrets-728877e9-8d35-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:58:54.429: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-728877e9-8d35-11ea-b78d-0242ac110017 container secret-env-test: STEP: delete the pod May 3 11:58:54.510: INFO: Waiting for pod pod-secrets-728877e9-8d35-11ea-b78d-0242ac110017 to disappear May 3 11:58:54.519: INFO: Pod pod-secrets-728877e9-8d35-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:58:54.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-4q459" for this suite. 
May 3 11:59:00.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:59:00.556: INFO: namespace: e2e-tests-secrets-4q459, resource: bindings, ignored listing per whitelist May 3 11:59:00.613: INFO: namespace e2e-tests-secrets-4q459 deletion completed in 6.089989477s • [SLOW TEST:12.456 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:59:00.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 3 11:59:01.062: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7a0153c7-8d35-11ea-b78d-0242ac110017" in namespace "e2e-tests-downward-api-x6nx8" to be "success or failure" May 3 11:59:01.148: INFO: Pod "downwardapi-volume-7a0153c7-8d35-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 86.480542ms May 3 11:59:03.269: INFO: Pod "downwardapi-volume-7a0153c7-8d35-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206677116s May 3 11:59:05.272: INFO: Pod "downwardapi-volume-7a0153c7-8d35-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.210072273s May 3 11:59:07.275: INFO: Pod "downwardapi-volume-7a0153c7-8d35-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.213595384s STEP: Saw pod success May 3 11:59:07.275: INFO: Pod "downwardapi-volume-7a0153c7-8d35-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:59:07.278: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-7a0153c7-8d35-11ea-b78d-0242ac110017 container client-container: STEP: delete the pod May 3 11:59:07.311: INFO: Waiting for pod downwardapi-volume-7a0153c7-8d35-11ea-b78d-0242ac110017 to disappear May 3 11:59:07.334: INFO: Pod downwardapi-volume-7a0153c7-8d35-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:59:07.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-x6nx8" for this suite. 
May 3 11:59:15.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:59:15.377: INFO: namespace: e2e-tests-downward-api-x6nx8, resource: bindings, ignored listing per whitelist May 3 11:59:15.412: INFO: namespace e2e-tests-downward-api-x6nx8 deletion completed in 8.074505358s • [SLOW TEST:14.798 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:59:15.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod May 3 11:59:15.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-t7pqp' May 3 11:59:19.420: INFO: stderr: "" May 3 11:59:19.420: INFO: stdout: "pod/pause created\n" May 3 11:59:19.420: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 3 11:59:19.420: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-t7pqp" to be "running and ready" May 3 11:59:19.443: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 22.52734ms May 3 11:59:21.635: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214735006s May 3 11:59:23.639: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.218562942s May 3 11:59:23.639: INFO: Pod "pause" satisfied condition "running and ready" May 3 11:59:23.639: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod May 3 11:59:23.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-t7pqp' May 3 11:59:23.755: INFO: stderr: "" May 3 11:59:23.755: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 3 11:59:23.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-t7pqp' May 3 11:59:23.853: INFO: stderr: "" May 3 11:59:23.853: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 3 11:59:23.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-t7pqp' May 3 11:59:23.958: INFO: stderr: "" May 3 11:59:23.958: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 3 11:59:23.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-t7pqp' May 3 11:59:24.054: INFO: stderr: "" May 3 11:59:24.054: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources May 3 11:59:24.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-t7pqp' May 3 11:59:24.161: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 3 11:59:24.161: INFO: stdout: "pod \"pause\" force deleted\n" May 3 11:59:24.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-t7pqp' May 3 11:59:24.261: INFO: stderr: "No resources found.\n" May 3 11:59:24.261: INFO: stdout: "" May 3 11:59:24.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-t7pqp -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 3 11:59:24.351: INFO: stderr: "" May 3 11:59:24.351: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:59:24.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-t7pqp" for this suite. 
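For reference, the label round-trip exercised here can be repeated by hand; the commands below are the same ones the test runs above (pod pause and namespace e2e-tests-kubectl-t7pqp are taken from this run, and the --kubeconfig flag is omitted):

kubectl label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-t7pqp   # add the label
kubectl get pod pause -L testing-label --namespace=e2e-tests-kubectl-t7pqp                        # TESTING-LABEL column shows testing-label-value
kubectl label pods pause testing-label- --namespace=e2e-tests-kubectl-t7pqp                       # trailing '-' removes the label
kubectl get pod pause -L testing-label --namespace=e2e-tests-kubectl-t7pqp                        # TESTING-LABEL column is now empty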
May 3 11:59:32.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:59:32.826: INFO: namespace: e2e-tests-kubectl-t7pqp, resource: bindings, ignored listing per whitelist May 3 11:59:32.862: INFO: namespace e2e-tests-kubectl-t7pqp deletion completed in 8.506266181s • [SLOW TEST:17.451 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:59:32.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-8d8bee2a-8d35-11ea-b78d-0242ac110017 STEP: Creating a pod to test consume configMaps May 3 11:59:33.601: INFO: Waiting up to 5m0s for pod "pod-configmaps-8d8c6a0b-8d35-11ea-b78d-0242ac110017" in namespace "e2e-tests-configmap-5rjps" to be "success or failure" May 3 11:59:33.644: INFO: Pod "pod-configmaps-8d8c6a0b-8d35-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 43.819856ms May 3 11:59:35.649: INFO: Pod "pod-configmaps-8d8c6a0b-8d35-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0485788s May 3 11:59:37.653: INFO: Pod "pod-configmaps-8d8c6a0b-8d35-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052534137s May 3 11:59:39.657: INFO: Pod "pod-configmaps-8d8c6a0b-8d35-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.056510819s STEP: Saw pod success May 3 11:59:39.657: INFO: Pod "pod-configmaps-8d8c6a0b-8d35-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:59:39.659: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-8d8c6a0b-8d35-11ea-b78d-0242ac110017 container configmap-volume-test: STEP: delete the pod May 3 11:59:39.708: INFO: Waiting for pod pod-configmaps-8d8c6a0b-8d35-11ea-b78d-0242ac110017 to disappear May 3 11:59:39.718: INFO: Pod pod-configmaps-8d8c6a0b-8d35-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:59:39.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-5rjps" for this suite. 
May 3 11:59:45.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:59:45.780: INFO: namespace: e2e-tests-configmap-5rjps, resource: bindings, ignored listing per whitelist May 3 11:59:45.800: INFO: namespace e2e-tests-configmap-5rjps deletion completed in 6.079954912s • [SLOW TEST:12.937 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:59:45.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium May 3 11:59:46.507: INFO: Waiting up to 5m0s for pod "pod-9520f7c2-8d35-11ea-b78d-0242ac110017" in namespace "e2e-tests-emptydir-z87k7" to be "success or failure" May 3 11:59:46.535: INFO: Pod "pod-9520f7c2-8d35-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 28.377888ms May 3 11:59:48.538: INFO: Pod "pod-9520f7c2-8d35-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031909227s May 3 11:59:50.562: INFO: Pod "pod-9520f7c2-8d35-11ea-b78d-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.055294033s May 3 11:59:52.566: INFO: Pod "pod-9520f7c2-8d35-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.059110257s STEP: Saw pod success May 3 11:59:52.566: INFO: Pod "pod-9520f7c2-8d35-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 11:59:52.568: INFO: Trying to get logs from node hunter-worker pod pod-9520f7c2-8d35-11ea-b78d-0242ac110017 container test-container: STEP: delete the pod May 3 11:59:52.600: INFO: Waiting for pod pod-9520f7c2-8d35-11ea-b78d-0242ac110017 to disappear May 3 11:59:52.689: INFO: Pod pod-9520f7c2-8d35-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:59:52.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-z87k7" for this suite. 
May 3 11:59:58.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 11:59:58.757: INFO: namespace: e2e-tests-emptydir-z87k7, resource: bindings, ignored listing per whitelist May 3 11:59:58.836: INFO: namespace e2e-tests-emptydir-z87k7 deletion completed in 6.142420182s • [SLOW TEST:13.036 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 11:59:58.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info May 3 11:59:59.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 3 11:59:59.174: INFO: stderr: "" May 3 11:59:59.174: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 11:59:59.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-b9985" for this suite. 
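The same cluster-info check can be run by hand against any cluster; both commands are standard kubectl, and the second is the one suggested in the command output above:

kubectl cluster-info        # prints the Kubernetes master and KubeDNS endpoints
kubectl cluster-info dump   # dumps fuller cluster state for debugging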
May 3 12:00:05.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:00:05.293: INFO: namespace: e2e-tests-kubectl-b9985, resource: bindings, ignored listing per whitelist May 3 12:00:05.354: INFO: namespace e2e-tests-kubectl-b9985 deletion completed in 6.174803286s • [SLOW TEST:6.518 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:00:05.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 3 12:00:05.654: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:00:14.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-74f5c" for this suite. 
May 3 12:00:20.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:00:20.998: INFO: namespace: e2e-tests-init-container-74f5c, resource: bindings, ignored listing per whitelist May 3 12:00:21.007: INFO: namespace e2e-tests-init-container-74f5c deletion completed in 6.207886454s • [SLOW TEST:15.653 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:00:21.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 3 12:00:21.340: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 3 12:00:21.352: INFO: Number of nodes with available pods: 0 May 3 12:00:21.352: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
May 3 12:00:21.393: INFO: Number of nodes with available pods: 0 May 3 12:00:21.393: INFO: Node hunter-worker is running more than one daemon pod May 3 12:00:22.426: INFO: Number of nodes with available pods: 0 May 3 12:00:22.426: INFO: Node hunter-worker is running more than one daemon pod May 3 12:00:23.398: INFO: Number of nodes with available pods: 0 May 3 12:00:23.398: INFO: Node hunter-worker is running more than one daemon pod May 3 12:00:24.398: INFO: Number of nodes with available pods: 0 May 3 12:00:24.398: INFO: Node hunter-worker is running more than one daemon pod May 3 12:00:25.398: INFO: Number of nodes with available pods: 1 May 3 12:00:25.398: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 3 12:00:25.841: INFO: Number of nodes with available pods: 1 May 3 12:00:25.841: INFO: Number of running nodes: 0, number of available pods: 1 May 3 12:00:26.846: INFO: Number of nodes with available pods: 0 May 3 12:00:26.846: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 3 12:00:26.881: INFO: Number of nodes with available pods: 0 May 3 12:00:26.881: INFO: Node hunter-worker is running more than one daemon pod May 3 12:00:27.917: INFO: Number of nodes with available pods: 0 May 3 12:00:27.917: INFO: Node hunter-worker is running more than one daemon pod May 3 12:00:28.887: INFO: Number of nodes with available pods: 0 May 3 12:00:28.887: INFO: Node hunter-worker is running more than one daemon pod May 3 12:00:30.686: INFO: Number of nodes with available pods: 0 May 3 12:00:30.686: INFO: Node hunter-worker is running more than one daemon pod May 3 12:00:30.995: INFO: Number of nodes with available pods: 0 May 3 12:00:30.996: INFO: Node hunter-worker is running more than one daemon pod May 3 12:00:32.067: INFO: Number of nodes with available pods: 0 May 3 12:00:32.067: INFO: Node hunter-worker is running more than one daemon pod May 3 12:00:33.246: INFO: Number of nodes with available pods: 0 May 3 12:00:33.246: INFO: Node hunter-worker is running more than one daemon pod May 3 12:00:33.900: INFO: Number of nodes with available pods: 0 May 3 12:00:33.900: INFO: Node hunter-worker is running more than one daemon pod May 3 12:00:34.886: INFO: Number of nodes with available pods: 0 May 3 12:00:34.886: INFO: Node hunter-worker is running more than one daemon pod May 3 12:00:35.885: INFO: Number of nodes with available pods: 0 May 3 12:00:35.885: INFO: Node hunter-worker is running more than one daemon pod May 3 12:00:36.885: INFO: Number of nodes with available pods: 1 May 3 12:00:36.885: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-8phh7, will wait for the garbage collector to delete the pods May 3 12:00:37.034: INFO: Deleting DaemonSet.extensions daemon-set took: 24.241533ms May 3 12:00:37.134: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.274409ms May 3 12:00:51.354: INFO: Number of nodes with available pods: 0 May 3 12:00:51.354: INFO: Number of running nodes: 0, number of available pods: 0 May 3 12:00:51.357: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-8phh7/daemonsets","resourceVersion":"8530918"},"items":null} May 3 12:00:51.359: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-8phh7/pods","resourceVersion":"8530918"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:00:51.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-8phh7" for this suite. May 3 12:00:57.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:00:57.465: INFO: namespace: e2e-tests-daemonsets-8phh7, resource: bindings, ignored listing per whitelist May 3 12:00:57.495: INFO: namespace e2e-tests-daemonsets-8phh7 deletion completed in 6.091657298s • [SLOW TEST:36.488 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:00:57.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 3 12:00:57.708: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 May 3 12:00:57.712: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-2hhgw/daemonsets","resourceVersion":"8530954"},"items":null} May 3 12:00:57.714: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-2hhgw/pods","resourceVersion":"8530954"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:00:57.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-2hhgw" for this suite. 
May 3 12:01:03.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:01:03.765: INFO: namespace: e2e-tests-daemonsets-2hhgw, resource: bindings, ignored listing per whitelist May 3 12:01:03.809: INFO: namespace e2e-tests-daemonsets-2hhgw deletion completed in 6.086415605s S [SKIPPING] [6.314 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 3 12:00:57.708: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:01:03.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token May 3 12:01:04.430: INFO: created pod pod-service-account-defaultsa May 3 12:01:04.430: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 3 12:01:04.504: INFO: created pod pod-service-account-mountsa May 3 12:01:04.504: INFO: pod pod-service-account-mountsa service account token volume mount: true May 3 12:01:04.530: INFO: created pod pod-service-account-nomountsa May 3 12:01:04.530: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 3 12:01:04.584: INFO: created pod pod-service-account-defaultsa-mountspec May 3 12:01:04.584: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 3 12:01:04.601: INFO: created pod pod-service-account-mountsa-mountspec May 3 12:01:04.601: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 3 12:01:04.692: INFO: created pod pod-service-account-nomountsa-mountspec May 3 12:01:04.692: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 3 12:01:04.746: INFO: created pod pod-service-account-defaultsa-nomountspec May 3 12:01:04.746: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 3 12:01:04.800: INFO: created pod pod-service-account-mountsa-nomountspec May 3 12:01:04.800: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 3 12:01:04.848: INFO: created pod pod-service-account-nomountsa-nomountspec May 3 12:01:04.848: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:01:04.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-lc2fp" for this suite. 
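The ServiceAccounts test above creates a matrix of pods (default/mount/nomount service accounts crossed with an explicit pod-level mount spec) and records whether each ends up with a token volume mount. The rule being exercised is that pod.spec.automountServiceAccountToken, when set, overrides the ServiceAccount's automountServiceAccountToken; that is why pod-service-account-nomountsa-mountspec is logged with "token volume mount: true". A sketch of the two objects involved (the ServiceAccount name and image are illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// noMountServiceAccount opts out of token automount at the ServiceAccount level.
func noMountServiceAccount() *corev1.ServiceAccount {
	optOut := false
	return &corev1.ServiceAccount{
		ObjectMeta:                   metav1.ObjectMeta{Name: "nomount-sa"}, // assumed name
		AutomountServiceAccountToken: &optOut,
	}
}

// mountSpecPod opts back in at the pod level, which takes precedence over the
// ServiceAccount setting, matching the "nomountsa-mountspec" case in the log.
func mountSpecPod() *corev1.Pod {
	optIn := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-nomountsa-mountspec"},
		Spec: corev1.PodSpec{
			ServiceAccountName:           "nomount-sa",
			AutomountServiceAccountToken: &optIn, // pod-level field wins
			Containers: []corev1.Container{{
				Name:  "token-test",
				Image: "docker.io/library/nginx:1.14-alpine", // illustrative
			}},
		},
	}
}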
May 3 12:01:37.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:01:37.055: INFO: namespace: e2e-tests-svcaccounts-lc2fp, resource: bindings, ignored listing per whitelist May 3 12:01:37.136: INFO: namespace e2e-tests-svcaccounts-lc2fp deletion completed in 32.205902357s • [SLOW TEST:33.326 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:01:37.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-d75c96cf-8d35-11ea-b78d-0242ac110017 STEP: Creating a pod to test consume configMaps May 3 12:01:37.488: INFO: Waiting up to 5m0s for pod "pod-configmaps-d75cfef8-8d35-11ea-b78d-0242ac110017" in namespace "e2e-tests-configmap-bn6vd" to be "success or failure" May 3 12:01:37.636: INFO: Pod "pod-configmaps-d75cfef8-8d35-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 147.651552ms May 3 12:01:39.640: INFO: Pod "pod-configmaps-d75cfef8-8d35-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151648308s May 3 12:01:41.744: INFO: Pod "pod-configmaps-d75cfef8-8d35-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.255666933s May 3 12:01:43.748: INFO: Pod "pod-configmaps-d75cfef8-8d35-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.259909765s STEP: Saw pod success May 3 12:01:43.748: INFO: Pod "pod-configmaps-d75cfef8-8d35-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 12:01:43.751: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-d75cfef8-8d35-11ea-b78d-0242ac110017 container configmap-volume-test: STEP: delete the pod May 3 12:01:43.782: INFO: Waiting for pod pod-configmaps-d75cfef8-8d35-11ea-b78d-0242ac110017 to disappear May 3 12:01:43.792: INFO: Pod pod-configmaps-d75cfef8-8d35-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:01:43.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-bn6vd" for this suite. 
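The ConfigMap test above ("consumable from pods in volume with mappings") mounts a ConfigMap as a volume and remaps a key to a nested path inside the mount, then reads the file back from the test container. A sketch of that volume wiring; the key and path names, image, and command are assumptions (the log only shows the generated ConfigMap and pod names), while the container name comes from the log:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// configMapMappingPod mounts ConfigMap key "data-1" at
// /etc/configmap-volume/path/to/data-2 instead of its default file name.
func configMapMappingPod(cmName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
						// Items remaps individual keys to chosen paths inside the mount.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test", // container name as reported in the log
				Image:   "busybox",               // illustrative; the suite uses its own test image
				Command: []string{"cat", "/etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
}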
May 3 12:01:49.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:01:49.872: INFO: namespace: e2e-tests-configmap-bn6vd, resource: bindings, ignored listing per whitelist May 3 12:01:49.876: INFO: namespace e2e-tests-configmap-bn6vd deletion completed in 6.080088466s • [SLOW TEST:12.740 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:01:49.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs May 3 12:01:50.010: INFO: Waiting up to 5m0s for pod "pod-dedbdb4c-8d35-11ea-b78d-0242ac110017" in namespace "e2e-tests-emptydir-9vrjh" to be "success or failure" May 3 12:01:50.013: INFO: Pod "pod-dedbdb4c-8d35-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.641346ms May 3 12:01:52.018: INFO: Pod "pod-dedbdb4c-8d35-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007873067s May 3 12:01:54.021: INFO: Pod "pod-dedbdb4c-8d35-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0115305s STEP: Saw pod success May 3 12:01:54.021: INFO: Pod "pod-dedbdb4c-8d35-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 12:01:54.024: INFO: Trying to get logs from node hunter-worker2 pod pod-dedbdb4c-8d35-11ea-b78d-0242ac110017 container test-container: STEP: delete the pod May 3 12:01:54.183: INFO: Waiting for pod pod-dedbdb4c-8d35-11ea-b78d-0242ac110017 to disappear May 3 12:01:54.225: INFO: Pod pod-dedbdb4c-8d35-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:01:54.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-9vrjh" for this suite. 
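The EmptyDir tests in this stretch of the log ((root,0666,tmpfs) above, plus (non-root,0666,tmpfs) and (root,0777,default) further down) differ only in which user writes, which file mode is checked, and which volume medium is used. The medium is the part carried by the pod spec: "tmpfs" means a memory-backed emptyDir, "default" means the node's default storage. A sketch of the memory-backed variant; image and command are illustrative, the container name comes from the log:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// tmpfsEmptyDirPod writes a file into a memory-backed emptyDir and exits,
// so the pod can be waited on for "success or failure" as the test does.
func tmpfsEmptyDirPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{
						// StorageMediumMemory is the "tmpfs" case; leave Medium empty
						// for the node-default ("default") case.
						Medium: corev1.StorageMediumMemory,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container", // container name as reported in the log
				Image:   "busybox",        // illustrative
				Command: []string{"sh", "-c", "touch /mnt/volume/file && chmod 0666 /mnt/volume/file && ls -l /mnt/volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/volume"}},
			}},
		},
	}
}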
May 3 12:02:00.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:02:00.328: INFO: namespace: e2e-tests-emptydir-9vrjh, resource: bindings, ignored listing per whitelist May 3 12:02:00.332: INFO: namespace e2e-tests-emptydir-9vrjh deletion completed in 6.103152697s • [SLOW TEST:10.456 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:02:00.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0503 12:02:30.962499 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 3 12:02:30.962: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:02:30.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-gtdrz" for this suite. 
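The garbage-collector test above deletes a Deployment with deleteOptions.propagationPolicy=Orphan and then waits 30 seconds to confirm the ReplicaSet it created is left behind. A sketch of that delete call against the client-go generation that matches this v1.13 run (newer client-go releases add a context argument and take the options by value); the clientset, namespace and name are assumed to be supplied by the caller:

package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// orphanDeleteDeployment removes the Deployment object itself but leaves its
// ReplicaSet (and therefore its pods) in place, which is the behavior the test
// asserts for 30 seconds after the delete.
func orphanDeleteDeployment(client kubernetes.Interface, namespace, name string) error {
	orphan := metav1.DeletePropagationOrphan
	return client.AppsV1().Deployments(namespace).Delete(
		name,
		&metav1.DeleteOptions{PropagationPolicy: &orphan},
	)
}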
May 3 12:02:36.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:02:36.996: INFO: namespace: e2e-tests-gc-gtdrz, resource: bindings, ignored listing per whitelist May 3 12:02:37.055: INFO: namespace e2e-tests-gc-gtdrz deletion completed in 6.088645s • [SLOW TEST:36.722 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:02:37.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs May 3 12:02:37.343: INFO: Waiting up to 5m0s for pod "pod-fb1234b1-8d35-11ea-b78d-0242ac110017" in namespace "e2e-tests-emptydir-p59pb" to be "success or failure" May 3 12:02:37.348: INFO: Pod "pod-fb1234b1-8d35-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 5.512264ms May 3 12:02:39.352: INFO: Pod "pod-fb1234b1-8d35-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009617781s May 3 12:02:41.357: INFO: Pod "pod-fb1234b1-8d35-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014029002s STEP: Saw pod success May 3 12:02:41.357: INFO: Pod "pod-fb1234b1-8d35-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 12:02:41.361: INFO: Trying to get logs from node hunter-worker2 pod pod-fb1234b1-8d35-11ea-b78d-0242ac110017 container test-container: STEP: delete the pod May 3 12:02:41.383: INFO: Waiting for pod pod-fb1234b1-8d35-11ea-b78d-0242ac110017 to disappear May 3 12:02:41.451: INFO: Pod pod-fb1234b1-8d35-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:02:41.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-p59pb" for this suite. 
May 3 12:02:47.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:02:47.489: INFO: namespace: e2e-tests-emptydir-p59pb, resource: bindings, ignored listing per whitelist May 3 12:02:47.551: INFO: namespace e2e-tests-emptydir-p59pb deletion completed in 6.092280191s • [SLOW TEST:10.497 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:02:47.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 3 12:02:55.787: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 3 12:02:55.806: INFO: Pod pod-with-poststart-exec-hook still exists May 3 12:02:57.806: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 3 12:02:57.811: INFO: Pod pod-with-poststart-exec-hook still exists May 3 12:02:59.806: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 3 12:02:59.812: INFO: Pod pod-with-poststart-exec-hook still exists May 3 12:03:01.806: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 3 12:03:01.811: INFO: Pod pod-with-poststart-exec-hook still exists May 3 12:03:03.806: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 3 12:03:03.811: INFO: Pod pod-with-poststart-exec-hook still exists May 3 12:03:05.806: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 3 12:03:05.810: INFO: Pod pod-with-poststart-exec-hook still exists May 3 12:03:07.806: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 3 12:03:07.811: INFO: Pod pod-with-poststart-exec-hook still exists May 3 12:03:09.807: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 3 12:03:09.811: INFO: Pod pod-with-poststart-exec-hook still exists May 3 12:03:11.806: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 3 12:03:11.810: INFO: Pod pod-with-poststart-exec-hook still exists May 3 12:03:13.806: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 3 12:03:13.811: INFO: Pod pod-with-poststart-exec-hook still exists May 3 12:03:15.806: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 3 12:03:15.811: INFO: Pod 
pod-with-poststart-exec-hook still exists May 3 12:03:17.806: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 3 12:03:17.811: INFO: Pod pod-with-poststart-exec-hook still exists May 3 12:03:19.806: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 3 12:03:19.811: INFO: Pod pod-with-poststart-exec-hook still exists May 3 12:03:21.806: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 3 12:03:21.811: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:03:21.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-jr6vc" for this suite. May 3 12:03:43.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:03:43.887: INFO: namespace: e2e-tests-container-lifecycle-hook-jr6vc, resource: bindings, ignored listing per whitelist May 3 12:03:43.920: INFO: namespace e2e-tests-container-lifecycle-hook-jr6vc deletion completed in 22.105545045s • [SLOW TEST:56.368 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:03:43.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium May 3 12:03:44.020: INFO: Waiting up to 5m0s for pod "pod-22d0e4fd-8d36-11ea-b78d-0242ac110017" in namespace "e2e-tests-emptydir-8v8xn" to be "success or failure" May 3 12:03:44.024: INFO: Pod "pod-22d0e4fd-8d36-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060448ms May 3 12:03:46.028: INFO: Pod "pod-22d0e4fd-8d36-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008071457s May 3 12:03:48.032: INFO: Pod "pod-22d0e4fd-8d36-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011750572s STEP: Saw pod success May 3 12:03:48.032: INFO: Pod "pod-22d0e4fd-8d36-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 12:03:48.035: INFO: Trying to get logs from node hunter-worker pod pod-22d0e4fd-8d36-11ea-b78d-0242ac110017 container test-container: STEP: delete the pod May 3 12:03:48.112: INFO: Waiting for pod pod-22d0e4fd-8d36-11ea-b78d-0242ac110017 to disappear May 3 12:03:48.126: INFO: Pod pod-22d0e4fd-8d36-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:03:48.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-8v8xn" for this suite. May 3 12:03:54.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:03:54.255: INFO: namespace: e2e-tests-emptydir-8v8xn, resource: bindings, ignored listing per whitelist May 3 12:03:54.260: INFO: namespace e2e-tests-emptydir-8v8xn deletion completed in 6.130148173s • [SLOW TEST:10.339 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:03:54.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-gjg2f STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-gjg2f to expose endpoints map[] May 3 12:03:54.438: INFO: Get endpoints failed (25.286826ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 3 12:03:55.443: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-gjg2f exposes endpoints map[] (1.029415754s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-gjg2f STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-gjg2f to expose endpoints map[pod1:[100]] May 3 12:03:58.520: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-gjg2f exposes endpoints map[pod1:[100]] (3.070175006s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-gjg2f STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-gjg2f to expose endpoints map[pod1:[100] pod2:[101]] May 3 12:04:01.592: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-gjg2f exposes endpoints 
map[pod1:[100] pod2:[101]] (3.068208199s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-gjg2f STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-gjg2f to expose endpoints map[pod2:[101]] May 3 12:04:02.662: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-gjg2f exposes endpoints map[pod2:[101]] (1.066468744s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-gjg2f STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-gjg2f to expose endpoints map[] May 3 12:04:02.704: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-gjg2f exposes endpoints map[] (36.960529ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:04:02.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-gjg2f" for this suite. May 3 12:04:08.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:04:08.852: INFO: namespace: e2e-tests-services-gjg2f, resource: bindings, ignored listing per whitelist May 3 12:04:08.906: INFO: namespace e2e-tests-services-gjg2f deletion completed in 6.119638069s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:14.646 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:04:08.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
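The multiport Services test above creates the Service multi-endpoint-test and watches its endpoints map change as pod1 and pod2 are created and deleted; the [100] and [101] values in the logged endpoint maps are the container target ports. A sketch of such a Service; only the target ports 100 and 101 come from the log, the front-end ports, port names and label selector are assumptions:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// multiEndpointService exposes two ports that forward to different container
// ports; endpoints are tracked per pod and per port, which is what the test polls.
func multiEndpointService(namespace string) *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test", Namespace: namespace},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "multi-endpoint-test"}, // assumed label
			Ports: []corev1.ServicePort{
				{Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
				{Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
			},
		},
	}
}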
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 3 12:04:17.102: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 3 12:04:17.159: INFO: Pod pod-with-prestop-http-hook still exists May 3 12:04:19.160: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 3 12:04:19.164: INFO: Pod pod-with-prestop-http-hook still exists May 3 12:04:21.160: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 3 12:04:21.164: INFO: Pod pod-with-prestop-http-hook still exists May 3 12:04:23.160: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 3 12:04:23.164: INFO: Pod pod-with-prestop-http-hook still exists May 3 12:04:25.160: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 3 12:04:25.163: INFO: Pod pod-with-prestop-http-hook still exists May 3 12:04:27.160: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 3 12:04:27.164: INFO: Pod pod-with-prestop-http-hook still exists May 3 12:04:29.160: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 3 12:04:29.164: INFO: Pod pod-with-prestop-http-hook still exists May 3 12:04:31.160: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 3 12:04:31.164: INFO: Pod pod-with-prestop-http-hook still exists May 3 12:04:33.160: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 3 12:04:33.164: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:04:33.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-6wlqx" for this suite. 
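The two Container Lifecycle Hook tests in this part of the log exercise a postStart exec hook (earlier) and a preStop httpGet hook (just above); in both, a helper pod serves the HTTP side, and the test then deletes the hooked pod and polls until it disappears. A sketch of a container carrying both hook styles, using the Handler type name from the API libraries contemporary with this run (later releases rename it LifecycleHandler); the commands, port and helper address are illustrative:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// lifecycleHookPod runs a postStart exec hook right after the container starts
// and a preStop httpGet hook just before the container is stopped.
func lifecycleHookPod(handlerPodIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-lifecycle-hooks"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-hooks",
				Image: "docker.io/library/nginx:1.14-alpine", // illustrative
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "echo poststart > /tmp/poststart"},
						},
					},
					PreStop: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Host: handlerPodIP, // the helper pod that records the hook request
							Path: "/echo?msg=prestop",
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
}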
May 3 12:04:55.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:04:55.255: INFO: namespace: e2e-tests-container-lifecycle-hook-6wlqx, resource: bindings, ignored listing per whitelist May 3 12:04:55.269: INFO: namespace e2e-tests-container-lifecycle-hook-6wlqx deletion completed in 22.093949364s • [SLOW TEST:46.362 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:04:55.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 3 12:04:55.415: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:04:59.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-wc5rj" for this suite. 
May 3 12:05:39.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:05:39.681: INFO: namespace: e2e-tests-pods-wc5rj, resource: bindings, ignored listing per whitelist May 3 12:05:39.735: INFO: namespace e2e-tests-pods-wc5rj deletion completed in 40.140448865s • [SLOW TEST:44.467 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:05:39.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 3 12:05:39.823: INFO: Creating deployment "test-recreate-deployment" May 3 12:05:39.839: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 3 12:05:39.847: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created May 3 12:05:41.856: INFO: Waiting deployment "test-recreate-deployment" to complete May 3 12:05:41.859: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724104339, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724104339, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724104339, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724104339, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 3 12:05:43.863: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 3 12:05:43.871: INFO: Updating deployment test-recreate-deployment May 3 12:05:43.871: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 3 12:05:44.335: INFO: Deployment "test-recreate-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-85jn8,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-85jn8/deployments/test-recreate-deployment,UID:67d8f6bd-8d36-11ea-99e8-0242ac110002,ResourceVersion:8531969,Generation:2,CreationTimestamp:2020-05-03 12:05:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-03 12:05:44 +0000 UTC 2020-05-03 12:05:44 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-03 12:05:44 +0000 UTC 2020-05-03 12:05:39 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 3 12:05:44.377: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-85jn8,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-85jn8/replicasets/test-recreate-deployment-589c4bfd,UID:6a542672-8d36-11ea-99e8-0242ac110002,ResourceVersion:8531966,Generation:1,CreationTimestamp:2020-05-03 12:05:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 67d8f6bd-8d36-11ea-99e8-0242ac110002 0xc0021ccf8f 0xc0021ccfa0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 3 12:05:44.377: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 3 12:05:44.378: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-85jn8,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-85jn8/replicasets/test-recreate-deployment-5bf7f65dc,UID:67dc9ac3-8d36-11ea-99e8-0242ac110002,ResourceVersion:8531958,Generation:2,CreationTimestamp:2020-05-03 12:05:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 67d8f6bd-8d36-11ea-99e8-0242ac110002 0xc0021cd300 0xc0021cd301}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 3 12:05:44.388: INFO: Pod "test-recreate-deployment-589c4bfd-qmdcm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-qmdcm,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-85jn8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-85jn8/pods/test-recreate-deployment-589c4bfd-qmdcm,UID:6a571087-8d36-11ea-99e8-0242ac110002,ResourceVersion:8531970,Generation:0,CreationTimestamp:2020-05-03 12:05:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 6a542672-8d36-11ea-99e8-0242ac110002 0xc001d1829f 0xc001d182b0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5vh8x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vh8x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5vh8x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d18320} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d18340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 12:05:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 12:05:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-03 12:05:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 12:05:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-03 12:05:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:05:44.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-85jn8" for this suite. 
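The RecreateDeployment test above creates test-recreate-deployment with a redis pod template, retriggers a rollout with an nginx template, and verifies that new pods do not run alongside old ones; the object dumps show Strategy Type "Recreate" and the two pod-template hashes. The field doing the work is spec.strategy. A sketch of such a Deployment, using the name, label, replica count and nginx image visible in the dumps above:

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// recreateDeployment scales the old ReplicaSet to zero before creating pods
// from the new template, instead of rolling them over gradually.
func recreateDeployment(namespace string) *appsv1.Deployment {
	replicas := int32(1)
	labels := map[string]string{"name": "sample-pod-3"} // label from the dumps above
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment", Namespace: namespace},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
}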
May 3 12:05:50.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:05:50.590: INFO: namespace: e2e-tests-deployment-85jn8, resource: bindings, ignored listing per whitelist May 3 12:05:50.615: INFO: namespace e2e-tests-deployment-85jn8 deletion completed in 6.223146766s • [SLOW TEST:10.880 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:05:50.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 3 12:05:50.863: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"6e63d1a6-8d36-11ea-99e8-0242ac110002", Controller:(*bool)(0xc0023ef712), BlockOwnerDeletion:(*bool)(0xc0023ef713)}} May 3 12:05:50.940: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"6e5a4b9c-8d36-11ea-99e8-0242ac110002", Controller:(*bool)(0xc0021c1ad2), BlockOwnerDeletion:(*bool)(0xc0021c1ad3)}} May 3 12:05:50.943: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"6e5e9aa5-8d36-11ea-99e8-0242ac110002", Controller:(*bool)(0xc0023efc42), BlockOwnerDeletion:(*bool)(0xc0023efc43)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:05:55.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-dlbjk" for this suite. 
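The dependency-circle test above creates pod1, pod2 and pod3 and then sets ownerReferences so that pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2 (the UIDs printed in the log), asserting that the garbage collector does not deadlock on the cycle. A sketch of building one such reference from an already-created pod object:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// addPodOwner makes `owned` a dependent of `owner`, mirroring the
// pod1<-pod3, pod2<-pod1, pod3<-pod2 cycle printed in the log. The UID must be
// the server-assigned UID from the created owner object.
func addPodOwner(owned, owner *corev1.Pod) {
	controller := true
	blockOwnerDeletion := true
	owned.OwnerReferences = append(owned.OwnerReferences, metav1.OwnerReference{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               owner.Name,
		UID:                owner.UID,
		Controller:         &controller,
		BlockOwnerDeletion: &blockOwnerDeletion,
	})
}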
May 3 12:06:01.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:06:02.065: INFO: namespace: e2e-tests-gc-dlbjk, resource: bindings, ignored listing per whitelist May 3 12:06:02.084: INFO: namespace e2e-tests-gc-dlbjk deletion completed in 6.108212857s • [SLOW TEST:11.468 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:06:02.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 3 12:06:02.805: INFO: Waiting up to 5m0s for pod "downwardapi-volume-755914f0-8d36-11ea-b78d-0242ac110017" in namespace "e2e-tests-downward-api-l28st" to be "success or failure" May 3 12:06:02.976: INFO: Pod "downwardapi-volume-755914f0-8d36-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 170.474626ms May 3 12:06:04.979: INFO: Pod "downwardapi-volume-755914f0-8d36-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173375204s May 3 12:06:06.983: INFO: Pod "downwardapi-volume-755914f0-8d36-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.17743047s STEP: Saw pod success May 3 12:06:06.983: INFO: Pod "downwardapi-volume-755914f0-8d36-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 12:06:06.986: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-755914f0-8d36-11ea-b78d-0242ac110017 container client-container: STEP: delete the pod May 3 12:06:07.085: INFO: Waiting for pod downwardapi-volume-755914f0-8d36-11ea-b78d-0242ac110017 to disappear May 3 12:06:07.114: INFO: Pod downwardapi-volume-755914f0-8d36-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:06:07.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-l28st" for this suite. 
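The Downward API volume tests in this run (the memory limit just above, and "podname only" a little further down) all mount a downwardAPI volume and read one projected file; the difference is whether the file comes from a resourceFieldRef or a fieldRef. A sketch showing both item kinds; the file paths, image, command and limit value are assumptions, while the container name comes from the log:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIVolumePod projects the container's memory limit and the pod's own
// name into files under /etc/podinfo.
func downwardAPIVolumePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{
							{
								Path: "memory_limit",
								ResourceFieldRef: &corev1.ResourceFieldSelector{
									ContainerName: "client-container",
									Resource:      "limits.memory",
								},
							},
							{
								Path:     "podname",
								FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
							},
						},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container", // container name as reported in the log
				Image:   "busybox",          // illustrative
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit /etc/podinfo/podname"},
				Resources: corev1.ResourceRequirements{
					// A limit must be set so limits.memory reflects the container
					// rather than falling back to the node allocatable.
					Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
}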
May 3 12:06:13.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:06:13.326: INFO: namespace: e2e-tests-downward-api-l28st, resource: bindings, ignored listing per whitelist May 3 12:06:13.464: INFO: namespace e2e-tests-downward-api-l28st deletion completed in 6.346200001s • [SLOW TEST:11.380 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:06:13.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-7bf8f13b-8d36-11ea-b78d-0242ac110017 STEP: Creating a pod to test consume configMaps May 3 12:06:13.618: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7bfc5946-8d36-11ea-b78d-0242ac110017" in namespace "e2e-tests-projected-l8bbr" to be "success or failure" May 3 12:06:13.637: INFO: Pod "pod-projected-configmaps-7bfc5946-8d36-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 18.670148ms May 3 12:06:15.639: INFO: Pod "pod-projected-configmaps-7bfc5946-8d36-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021321848s May 3 12:06:17.643: INFO: Pod "pod-projected-configmaps-7bfc5946-8d36-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024643708s STEP: Saw pod success May 3 12:06:17.643: INFO: Pod "pod-projected-configmaps-7bfc5946-8d36-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 12:06:17.645: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-7bfc5946-8d36-11ea-b78d-0242ac110017 container projected-configmap-volume-test: STEP: delete the pod May 3 12:06:17.700: INFO: Waiting for pod pod-projected-configmaps-7bfc5946-8d36-11ea-b78d-0242ac110017 to disappear May 3 12:06:17.708: INFO: Pod pod-projected-configmaps-7bfc5946-8d36-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:06:17.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-l8bbr" for this suite. 
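The Projected configMap test above is the projected-volume counterpart of the earlier ConfigMap "volume with mappings" test: the same key-to-path remapping, but delivered through a projected volume so it can be combined with other sources (secrets, downward API, service account tokens) under one mount point. Sketch with assumed key and path names; the container name comes from the log:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedConfigMapPod mounts a ConfigMap through a projected volume and
// remaps one key, as the "volume with mappings" variant of the test does.
func projectedConfigMapPod(cmName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
								Items:                []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test", // container name as reported in the log
				Image:   "busybox",                         // illustrative
				Command: []string{"cat", "/etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
}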
May 3 12:06:23.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:06:23.861: INFO: namespace: e2e-tests-projected-l8bbr, resource: bindings, ignored listing per whitelist May 3 12:06:23.872: INFO: namespace e2e-tests-projected-l8bbr deletion completed in 6.161385241s • [SLOW TEST:10.408 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:06:23.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 3 12:06:24.398: INFO: Waiting up to 5m0s for pod "downwardapi-volume-82533bcb-8d36-11ea-b78d-0242ac110017" in namespace "e2e-tests-downward-api-lqmrm" to be "success or failure" May 3 12:06:24.452: INFO: Pod "downwardapi-volume-82533bcb-8d36-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 53.512451ms May 3 12:06:26.456: INFO: Pod "downwardapi-volume-82533bcb-8d36-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057355723s May 3 12:06:28.460: INFO: Pod "downwardapi-volume-82533bcb-8d36-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061445364s STEP: Saw pod success May 3 12:06:28.460: INFO: Pod "downwardapi-volume-82533bcb-8d36-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 12:06:28.462: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-82533bcb-8d36-11ea-b78d-0242ac110017 container client-container: STEP: delete the pod May 3 12:06:28.841: INFO: Waiting for pod downwardapi-volume-82533bcb-8d36-11ea-b78d-0242ac110017 to disappear May 3 12:06:28.889: INFO: Pod downwardapi-volume-82533bcb-8d36-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:06:28.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-lqmrm" for this suite. 
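[editor's note] The "podname only" variant above differs from the resource-limit case only in the volume item: it projects metadata.name through an ObjectFieldSelector instead of a container resource through a ResourceFieldSelector. A sketch of that item, with an illustrative file name:

package fixtures

import corev1 "k8s.io/api/core/v1"

// podNameDownwardVolume exposes the pod's own name as a file in the volume.
func podNameDownwardVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "podname",
					FieldRef: &corev1.ObjectFieldSelector{
						// APIVersion defaults to "v1" when left empty.
						FieldPath: "metadata.name",
					},
				}},
			},
		},
	}
}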
May 3 12:06:35.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:06:35.062: INFO: namespace: e2e-tests-downward-api-lqmrm, resource: bindings, ignored listing per whitelist May 3 12:06:35.098: INFO: namespace e2e-tests-downward-api-lqmrm deletion completed in 6.203648348s • [SLOW TEST:11.226 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:06:35.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 3 12:06:35.262: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 5.755456ms) May 3 12:06:35.265: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.440425ms) May 3 12:06:35.268: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.895877ms) May 3 12:06:35.271: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.778823ms) May 3 12:06:35.274: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.145786ms) May 3 12:06:35.278: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.818661ms) May 3 12:06:35.281: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.314272ms) May 3 12:06:35.285: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.049059ms) May 3 12:06:35.288: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.574848ms) May 3 12:06:35.291: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.083528ms) May 3 12:06:35.294: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.941838ms) May 3 12:06:35.297: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.629189ms) May 3 12:06:35.299: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.523715ms) May 3 12:06:35.302: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.722953ms) May 3 12:06:35.305: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.258889ms) May 3 12:06:35.307: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.863525ms) May 3 12:06:35.342: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 34.562195ms) May 3 12:06:35.345: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.116727ms) May 3 12:06:35.348: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.354174ms) May 3 12:06:35.350: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.877527ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:06:35.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-hgz98" for this suite. May 3 12:06:41.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:06:41.405: INFO: namespace: e2e-tests-proxy-hgz98, resource: bindings, ignored listing per whitelist May 3 12:06:41.462: INFO: namespace e2e-tests-proxy-hgz98 deletion completed in 6.108456633s • [SLOW TEST:6.364 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:06:41.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 3 12:06:47.647: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-8cac5e08-8d36-11ea-b78d-0242ac110017,GenerateName:,Namespace:e2e-tests-events-nw6w8,SelfLink:/api/v1/namespaces/e2e-tests-events-nw6w8/pods/send-events-8cac5e08-8d36-11ea-b78d-0242ac110017,UID:8cace807-8d36-11ea-99e8-0242ac110002,ResourceVersion:8532257,Generation:0,CreationTimestamp:2020-05-03 12:06:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 607845043,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-d5btf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-d5btf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-d5btf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001392020} {node.kubernetes.io/unreachable Exists NoExecute 0xc001392040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 12:06:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 12:06:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 12:06:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-03 12:06:41 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.112,StartTime:2020-05-03 12:06:41 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-03 12:06:45 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://f62b111a4158e5eb5ef661de3fdf3e4c2da3d5060766ed2602553b9b49137e31}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod May 3 12:06:49.968: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 3 12:06:51.972: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:06:51.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-nw6w8" for this suite. 
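[editor's note] The events spec above creates a pod and then asserts that both the scheduler and the kubelet emitted events about it. A hedged client-go sketch of that check, assuming a recent, context-aware client-go; the selector keys are event fields the apiserver indexes, and the function name is mine, not the suite's:

package events

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
)

// sawSchedulerAndKubeletEvents lists the pod's events once per source and
// reports an error if either source has produced nothing yet.
func sawSchedulerAndKubeletEvents(ctx context.Context, c kubernetes.Interface, ns, pod string) error {
	for _, src := range []fields.Set{
		{"involvedObject.name": pod, "involvedObject.namespace": ns, "source": "default-scheduler"},
		{"involvedObject.name": pod, "involvedObject.namespace": ns, "source": "kubelet"},
	} {
		sel := src.AsSelector().String()
		evts, err := c.CoreV1().Events(ns).List(ctx, metav1.ListOptions{FieldSelector: sel})
		if err != nil {
			return err
		}
		if len(evts.Items) == 0 {
			return fmt.Errorf("no events yet for selector %q", sel)
		}
	}
	return nil
}

The suite polls a check like this until both selectors return something, which is why "Saw scheduler event" and "Saw kubelet event" appear a couple of seconds apart in the log above.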
May 3 12:07:30.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:07:30.289: INFO: namespace: e2e-tests-events-nw6w8, resource: bindings, ignored listing per whitelist May 3 12:07:30.340: INFO: namespace e2e-tests-events-nw6w8 deletion completed in 38.249029486s • [SLOW TEST:48.878 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:07:30.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-a9cb182d-8d36-11ea-b78d-0242ac110017 STEP: Creating a pod to test consume secrets May 3 12:07:30.483: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a9ccfe0f-8d36-11ea-b78d-0242ac110017" in namespace "e2e-tests-projected-6ml5q" to be "success or failure" May 3 12:07:30.487: INFO: Pod "pod-projected-secrets-a9ccfe0f-8d36-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061266ms May 3 12:07:32.491: INFO: Pod "pod-projected-secrets-a9ccfe0f-8d36-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008222691s May 3 12:07:34.496: INFO: Pod "pod-projected-secrets-a9ccfe0f-8d36-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012912021s STEP: Saw pod success May 3 12:07:34.496: INFO: Pod "pod-projected-secrets-a9ccfe0f-8d36-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 12:07:34.500: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-a9ccfe0f-8d36-11ea-b78d-0242ac110017 container projected-secret-volume-test: STEP: delete the pod May 3 12:07:34.576: INFO: Waiting for pod pod-projected-secrets-a9ccfe0f-8d36-11ea-b78d-0242ac110017 to disappear May 3 12:07:34.584: INFO: Pod pod-projected-secrets-a9ccfe0f-8d36-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:07:34.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-6ml5q" for this suite. 
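[editor's note] The projected secret spec above adds one twist over the configMap variant: each item can pin its own file mode, which is what "Item Mode set" refers to. A sketch of such a volume with an explicit per-item mode; the key, path and 0400 mode are illustrative:

package fixtures

import corev1 "k8s.io/api/core/v1"

// projectedSecretVolume remaps one secret key and fixes the file mode for it.
func projectedSecretVolume(secretName string) corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "new-path-data-1",
							Mode: &mode, // per-item mode, the "Item Mode set" part of the test name
						}},
					},
				}},
			},
		},
	}
}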
May 3 12:07:40.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:07:40.636: INFO: namespace: e2e-tests-projected-6ml5q, resource: bindings, ignored listing per whitelist May 3 12:07:40.673: INFO: namespace e2e-tests-projected-6ml5q deletion completed in 6.086427504s • [SLOW TEST:10.333 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:07:40.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-j244 STEP: Creating a pod to test atomic-volume-subpath May 3 12:07:40.875: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-j244" in namespace "e2e-tests-subpath-lz2jb" to be "success or failure" May 3 12:07:40.896: INFO: Pod "pod-subpath-test-downwardapi-j244": Phase="Pending", Reason="", readiness=false. Elapsed: 21.820501ms May 3 12:07:43.038: INFO: Pod "pod-subpath-test-downwardapi-j244": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163675161s May 3 12:07:45.042: INFO: Pod "pod-subpath-test-downwardapi-j244": Phase="Pending", Reason="", readiness=false. Elapsed: 4.167893469s May 3 12:07:47.047: INFO: Pod "pod-subpath-test-downwardapi-j244": Phase="Pending", Reason="", readiness=false. Elapsed: 6.172166055s May 3 12:07:49.051: INFO: Pod "pod-subpath-test-downwardapi-j244": Phase="Pending", Reason="", readiness=false. Elapsed: 8.176289888s May 3 12:07:51.055: INFO: Pod "pod-subpath-test-downwardapi-j244": Phase="Pending", Reason="", readiness=false. Elapsed: 10.180635831s May 3 12:07:53.059: INFO: Pod "pod-subpath-test-downwardapi-j244": Phase="Pending", Reason="", readiness=false. Elapsed: 12.18424642s May 3 12:07:55.063: INFO: Pod "pod-subpath-test-downwardapi-j244": Phase="Running", Reason="", readiness=false. Elapsed: 14.188607225s May 3 12:07:57.067: INFO: Pod "pod-subpath-test-downwardapi-j244": Phase="Running", Reason="", readiness=false. Elapsed: 16.192441929s May 3 12:07:59.070: INFO: Pod "pod-subpath-test-downwardapi-j244": Phase="Running", Reason="", readiness=false. Elapsed: 18.195354239s May 3 12:08:01.073: INFO: Pod "pod-subpath-test-downwardapi-j244": Phase="Running", Reason="", readiness=false. Elapsed: 20.198610487s May 3 12:08:03.078: INFO: Pod "pod-subpath-test-downwardapi-j244": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.203366215s May 3 12:08:05.082: INFO: Pod "pod-subpath-test-downwardapi-j244": Phase="Running", Reason="", readiness=false. Elapsed: 24.207839709s May 3 12:08:07.086: INFO: Pod "pod-subpath-test-downwardapi-j244": Phase="Running", Reason="", readiness=false. Elapsed: 26.211562799s May 3 12:08:09.091: INFO: Pod "pod-subpath-test-downwardapi-j244": Phase="Running", Reason="", readiness=false. Elapsed: 28.216081346s May 3 12:08:11.095: INFO: Pod "pod-subpath-test-downwardapi-j244": Phase="Running", Reason="", readiness=false. Elapsed: 30.219969085s May 3 12:08:13.098: INFO: Pod "pod-subpath-test-downwardapi-j244": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.223781928s STEP: Saw pod success May 3 12:08:13.098: INFO: Pod "pod-subpath-test-downwardapi-j244" satisfied condition "success or failure" May 3 12:08:13.100: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-j244 container test-container-subpath-downwardapi-j244: STEP: delete the pod May 3 12:08:13.150: INFO: Waiting for pod pod-subpath-test-downwardapi-j244 to disappear May 3 12:08:13.265: INFO: Pod pod-subpath-test-downwardapi-j244 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-j244 May 3 12:08:13.265: INFO: Deleting pod "pod-subpath-test-downwardapi-j244" in namespace "e2e-tests-subpath-lz2jb" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:08:13.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-lz2jb" for this suite. May 3 12:08:19.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:08:19.428: INFO: namespace: e2e-tests-subpath-lz2jb, resource: bindings, ignored listing per whitelist May 3 12:08:19.438: INFO: namespace e2e-tests-subpath-lz2jb deletion completed in 6.166714077s • [SLOW TEST:38.764 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:08:19.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-7jhvs I0503 12:08:19.570830 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-7jhvs, replica count: 1 I0503 12:08:20.621451 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady I0503 12:08:21.621616 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0503 12:08:22.621840 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0503 12:08:23.622080 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 3 12:08:23.757: INFO: Created: latency-svc-nczzn May 3 12:08:23.771: INFO: Got endpoints: latency-svc-nczzn [48.704696ms] May 3 12:08:23.888: INFO: Created: latency-svc-28grs May 3 12:08:23.925: INFO: Got endpoints: latency-svc-28grs [153.851699ms] May 3 12:08:23.954: INFO: Created: latency-svc-n2j8k May 3 12:08:23.984: INFO: Got endpoints: latency-svc-n2j8k [213.426259ms] May 3 12:08:24.043: INFO: Created: latency-svc-v8xq7 May 3 12:08:24.055: INFO: Got endpoints: latency-svc-v8xq7 [284.208968ms] May 3 12:08:24.079: INFO: Created: latency-svc-9cjp6 May 3 12:08:24.115: INFO: Got endpoints: latency-svc-9cjp6 [344.08736ms] May 3 12:08:24.243: INFO: Created: latency-svc-vw8fx May 3 12:08:24.247: INFO: Got endpoints: latency-svc-vw8fx [476.202554ms] May 3 12:08:24.304: INFO: Created: latency-svc-zsr2m May 3 12:08:24.516: INFO: Got endpoints: latency-svc-zsr2m [745.55123ms] May 3 12:08:24.531: INFO: Created: latency-svc-jwd7c May 3 12:08:24.548: INFO: Got endpoints: latency-svc-jwd7c [776.742752ms] May 3 12:08:24.602: INFO: Created: latency-svc-vf89f May 3 12:08:24.678: INFO: Got endpoints: latency-svc-vf89f [907.419439ms] May 3 12:08:24.717: INFO: Created: latency-svc-b72gl May 3 12:08:24.733: INFO: Got endpoints: latency-svc-b72gl [961.98845ms] May 3 12:08:24.804: INFO: Created: latency-svc-nbhd4 May 3 12:08:24.811: INFO: Got endpoints: latency-svc-nbhd4 [1.039819687s] May 3 12:08:24.848: INFO: Created: latency-svc-pktsf May 3 12:08:24.871: INFO: Got endpoints: latency-svc-pktsf [1.100219965s] May 3 12:08:24.954: INFO: Created: latency-svc-6gqjm May 3 12:08:24.961: INFO: Got endpoints: latency-svc-6gqjm [1.190241494s] May 3 12:08:25.004: INFO: Created: latency-svc-gp5dn May 3 12:08:25.050: INFO: Got endpoints: latency-svc-gp5dn [1.27871504s] May 3 12:08:25.127: INFO: Created: latency-svc-cmw8f May 3 12:08:25.144: INFO: Got endpoints: latency-svc-cmw8f [182.492524ms] May 3 12:08:25.223: INFO: Created: latency-svc-7m9pf May 3 12:08:25.265: INFO: Got endpoints: latency-svc-7m9pf [1.493942422s] May 3 12:08:25.294: INFO: Created: latency-svc-v8fc4 May 3 12:08:25.304: INFO: Got endpoints: latency-svc-v8fc4 [1.533236797s] May 3 12:08:25.340: INFO: Created: latency-svc-dqx7s May 3 12:08:25.359: INFO: Got endpoints: latency-svc-dqx7s [1.434336324s] May 3 12:08:25.415: INFO: Created: latency-svc-9j82r May 3 12:08:25.431: INFO: Got endpoints: latency-svc-9j82r [1.447162978s] May 3 12:08:25.463: INFO: Created: latency-svc-6rvq4 May 3 12:08:25.474: INFO: Got endpoints: latency-svc-6rvq4 [1.418593868s] May 3 12:08:25.496: INFO: Created: latency-svc-qnwqh May 3 12:08:25.514: INFO: Got endpoints: latency-svc-qnwqh [1.399043233s] May 3 12:08:25.613: INFO: Created: latency-svc-bsmgp May 3 12:08:25.617: INFO: Got endpoints: latency-svc-bsmgp [1.370158093s] May 3 12:08:25.837: INFO: Created: latency-svc-7xgjr May 3 12:08:25.873: INFO: Got endpoints: latency-svc-7xgjr [1.356707018s] May 3 12:08:25.996: INFO: Created: latency-svc-sblkr May 3 12:08:25.998: INFO: Got endpoints: 
latency-svc-sblkr [1.450224479s] May 3 12:08:26.205: INFO: Created: latency-svc-rwgm6 May 3 12:08:26.215: INFO: Got endpoints: latency-svc-rwgm6 [1.536946015s] May 3 12:08:26.272: INFO: Created: latency-svc-js2mw May 3 12:08:26.287: INFO: Got endpoints: latency-svc-js2mw [1.553897736s] May 3 12:08:26.399: INFO: Created: latency-svc-cklzg May 3 12:08:26.455: INFO: Got endpoints: latency-svc-cklzg [1.644634197s] May 3 12:08:27.233: INFO: Created: latency-svc-755vh May 3 12:08:27.276: INFO: Got endpoints: latency-svc-755vh [2.404827916s] May 3 12:08:27.473: INFO: Created: latency-svc-prptp May 3 12:08:27.756: INFO: Got endpoints: latency-svc-prptp [2.706055142s] May 3 12:08:27.826: INFO: Created: latency-svc-tdxnn May 3 12:08:28.037: INFO: Got endpoints: latency-svc-tdxnn [2.893530266s] May 3 12:08:28.188: INFO: Created: latency-svc-dn68v May 3 12:08:28.192: INFO: Got endpoints: latency-svc-dn68v [2.927015408s] May 3 12:08:28.247: INFO: Created: latency-svc-k2hk8 May 3 12:08:28.272: INFO: Got endpoints: latency-svc-k2hk8 [2.967922304s] May 3 12:08:28.410: INFO: Created: latency-svc-lm27l May 3 12:08:28.579: INFO: Got endpoints: latency-svc-lm27l [3.219697484s] May 3 12:08:28.586: INFO: Created: latency-svc-lpg5r May 3 12:08:28.626: INFO: Got endpoints: latency-svc-lpg5r [3.194416344s] May 3 12:08:28.727: INFO: Created: latency-svc-29psj May 3 12:08:28.731: INFO: Got endpoints: latency-svc-29psj [3.257490228s] May 3 12:08:28.943: INFO: Created: latency-svc-29r2l May 3 12:08:29.238: INFO: Created: latency-svc-j5bnw May 3 12:08:29.238: INFO: Got endpoints: latency-svc-29r2l [3.723858311s] May 3 12:08:29.349: INFO: Got endpoints: latency-svc-j5bnw [3.731867249s] May 3 12:08:29.582: INFO: Created: latency-svc-f8x8x May 3 12:08:29.624: INFO: Got endpoints: latency-svc-f8x8x [3.750377704s] May 3 12:08:29.831: INFO: Created: latency-svc-4hgf9 May 3 12:08:29.875: INFO: Got endpoints: latency-svc-4hgf9 [3.876842911s] May 3 12:08:29.971: INFO: Created: latency-svc-7b46p May 3 12:08:29.976: INFO: Got endpoints: latency-svc-7b46p [3.760254585s] May 3 12:08:30.030: INFO: Created: latency-svc-x7hkz May 3 12:08:30.048: INFO: Got endpoints: latency-svc-x7hkz [3.760820107s] May 3 12:08:30.157: INFO: Created: latency-svc-xbr88 May 3 12:08:30.192: INFO: Got endpoints: latency-svc-xbr88 [3.736187093s] May 3 12:08:30.254: INFO: Created: latency-svc-jb2hm May 3 12:08:30.301: INFO: Got endpoints: latency-svc-jb2hm [3.024548791s] May 3 12:08:30.333: INFO: Created: latency-svc-b6vpg May 3 12:08:30.342: INFO: Got endpoints: latency-svc-b6vpg [2.586202303s] May 3 12:08:30.373: INFO: Created: latency-svc-pn7xf May 3 12:08:30.380: INFO: Got endpoints: latency-svc-pn7xf [2.342629643s] May 3 12:08:30.452: INFO: Created: latency-svc-5nlxl May 3 12:08:30.464: INFO: Got endpoints: latency-svc-5nlxl [2.272399499s] May 3 12:08:30.498: INFO: Created: latency-svc-t66bk May 3 12:08:30.507: INFO: Got endpoints: latency-svc-t66bk [2.234313768s] May 3 12:08:30.534: INFO: Created: latency-svc-flqzk May 3 12:08:30.543: INFO: Got endpoints: latency-svc-flqzk [1.964495738s] May 3 12:08:30.600: INFO: Created: latency-svc-csqp9 May 3 12:08:30.604: INFO: Got endpoints: latency-svc-csqp9 [1.977834116s] May 3 12:08:30.638: INFO: Created: latency-svc-f75k2 May 3 12:08:30.668: INFO: Got endpoints: latency-svc-f75k2 [1.936868245s] May 3 12:08:30.750: INFO: Created: latency-svc-pmwct May 3 12:08:30.753: INFO: Got endpoints: latency-svc-pmwct [1.515280629s] May 3 12:08:30.811: INFO: Created: latency-svc-x4chg May 3 12:08:30.821: INFO: Got 
endpoints: latency-svc-x4chg [1.471514055s] May 3 12:08:30.846: INFO: Created: latency-svc-c449t May 3 12:08:30.888: INFO: Got endpoints: latency-svc-c449t [1.263946766s] May 3 12:08:30.895: INFO: Created: latency-svc-4jch7 May 3 12:08:30.911: INFO: Got endpoints: latency-svc-4jch7 [1.036212988s] May 3 12:08:30.943: INFO: Created: latency-svc-xjmcr May 3 12:08:30.960: INFO: Got endpoints: latency-svc-xjmcr [984.043802ms] May 3 12:08:30.979: INFO: Created: latency-svc-zqxvz May 3 12:08:31.025: INFO: Got endpoints: latency-svc-zqxvz [977.393738ms] May 3 12:08:31.038: INFO: Created: latency-svc-cqjvv May 3 12:08:31.057: INFO: Got endpoints: latency-svc-cqjvv [865.211886ms] May 3 12:08:31.098: INFO: Created: latency-svc-ds8dn May 3 12:08:31.175: INFO: Got endpoints: latency-svc-ds8dn [873.935292ms] May 3 12:08:31.183: INFO: Created: latency-svc-glp87 May 3 12:08:31.202: INFO: Got endpoints: latency-svc-glp87 [859.436348ms] May 3 12:08:31.243: INFO: Created: latency-svc-wg47h May 3 12:08:31.262: INFO: Got endpoints: latency-svc-wg47h [881.630067ms] May 3 12:08:31.319: INFO: Created: latency-svc-hwhll May 3 12:08:31.323: INFO: Got endpoints: latency-svc-hwhll [858.861889ms] May 3 12:08:31.356: INFO: Created: latency-svc-vnwq4 May 3 12:08:31.364: INFO: Got endpoints: latency-svc-vnwq4 [857.774339ms] May 3 12:08:31.392: INFO: Created: latency-svc-p2v48 May 3 12:08:31.401: INFO: Got endpoints: latency-svc-p2v48 [857.057238ms] May 3 12:08:31.457: INFO: Created: latency-svc-rrgd4 May 3 12:08:31.470: INFO: Got endpoints: latency-svc-rrgd4 [865.851187ms] May 3 12:08:31.535: INFO: Created: latency-svc-wnnsj May 3 12:08:31.551: INFO: Got endpoints: latency-svc-wnnsj [883.282492ms] May 3 12:08:31.618: INFO: Created: latency-svc-shz7b May 3 12:08:31.622: INFO: Got endpoints: latency-svc-shz7b [868.597082ms] May 3 12:08:31.651: INFO: Created: latency-svc-8wrc8 May 3 12:08:31.684: INFO: Got endpoints: latency-svc-8wrc8 [863.414627ms] May 3 12:08:31.711: INFO: Created: latency-svc-jnj8j May 3 12:08:31.762: INFO: Got endpoints: latency-svc-jnj8j [874.304238ms] May 3 12:08:31.765: INFO: Created: latency-svc-9psq2 May 3 12:08:31.782: INFO: Got endpoints: latency-svc-9psq2 [871.029408ms] May 3 12:08:31.811: INFO: Created: latency-svc-wfg69 May 3 12:08:31.825: INFO: Got endpoints: latency-svc-wfg69 [865.12206ms] May 3 12:08:31.964: INFO: Created: latency-svc-zcn7s May 3 12:08:31.968: INFO: Got endpoints: latency-svc-zcn7s [942.777003ms] May 3 12:08:32.035: INFO: Created: latency-svc-dtr7h May 3 12:08:32.053: INFO: Got endpoints: latency-svc-dtr7h [996.297871ms] May 3 12:08:32.141: INFO: Created: latency-svc-hf44v May 3 12:08:32.155: INFO: Got endpoints: latency-svc-hf44v [980.403314ms] May 3 12:08:32.191: INFO: Created: latency-svc-6pbq4 May 3 12:08:32.246: INFO: Got endpoints: latency-svc-6pbq4 [1.04483084s] May 3 12:08:32.292: INFO: Created: latency-svc-hd8cq May 3 12:08:32.318: INFO: Got endpoints: latency-svc-hd8cq [1.056254602s] May 3 12:08:32.406: INFO: Created: latency-svc-8j4v2 May 3 12:08:32.414: INFO: Got endpoints: latency-svc-8j4v2 [1.090654552s] May 3 12:08:32.466: INFO: Created: latency-svc-2sd99 May 3 12:08:32.481: INFO: Got endpoints: latency-svc-2sd99 [1.11606555s] May 3 12:08:32.551: INFO: Created: latency-svc-2nhfh May 3 12:08:32.560: INFO: Got endpoints: latency-svc-2nhfh [1.15878142s] May 3 12:08:32.616: INFO: Created: latency-svc-vwz4m May 3 12:08:32.678: INFO: Got endpoints: latency-svc-vwz4m [1.208185278s] May 3 12:08:32.706: INFO: Created: latency-svc-vbdfd May 3 12:08:32.743: INFO: Got 
endpoints: latency-svc-vbdfd [1.191734972s] May 3 12:08:32.828: INFO: Created: latency-svc-cvxq2 May 3 12:08:32.857: INFO: Got endpoints: latency-svc-cvxq2 [1.235429934s] May 3 12:08:32.894: INFO: Created: latency-svc-565sw May 3 12:08:32.923: INFO: Got endpoints: latency-svc-565sw [1.238994865s] May 3 12:08:32.984: INFO: Created: latency-svc-8ttqm May 3 12:08:32.987: INFO: Got endpoints: latency-svc-8ttqm [1.224648861s] May 3 12:08:33.051: INFO: Created: latency-svc-2c5gz May 3 12:08:33.065: INFO: Got endpoints: latency-svc-2c5gz [1.282510348s] May 3 12:08:33.134: INFO: Created: latency-svc-p9btd May 3 12:08:33.143: INFO: Got endpoints: latency-svc-p9btd [1.318089637s] May 3 12:08:33.180: INFO: Created: latency-svc-nlf8l May 3 12:08:33.198: INFO: Got endpoints: latency-svc-nlf8l [1.229619689s] May 3 12:08:33.224: INFO: Created: latency-svc-gnnfh May 3 12:08:33.283: INFO: Got endpoints: latency-svc-gnnfh [1.229405814s] May 3 12:08:33.314: INFO: Created: latency-svc-rp6s7 May 3 12:08:33.372: INFO: Got endpoints: latency-svc-rp6s7 [1.216844339s] May 3 12:08:33.463: INFO: Created: latency-svc-48zd2 May 3 12:08:33.465: INFO: Got endpoints: latency-svc-48zd2 [1.218599165s] May 3 12:08:33.498: INFO: Created: latency-svc-8qn5t May 3 12:08:33.535: INFO: Got endpoints: latency-svc-8qn5t [1.216650517s] May 3 12:08:33.613: INFO: Created: latency-svc-76klm May 3 12:08:33.663: INFO: Got endpoints: latency-svc-76klm [1.248695331s] May 3 12:08:33.663: INFO: Created: latency-svc-bmqk4 May 3 12:08:33.679: INFO: Got endpoints: latency-svc-bmqk4 [1.198520722s] May 3 12:08:33.812: INFO: Created: latency-svc-nqxpf May 3 12:08:34.347: INFO: Created: latency-svc-bl95b May 3 12:08:34.629: INFO: Got endpoints: latency-svc-nqxpf [2.069228122s] May 3 12:08:34.630: INFO: Created: latency-svc-mld2j May 3 12:08:34.751: INFO: Got endpoints: latency-svc-mld2j [2.007462978s] May 3 12:08:34.753: INFO: Got endpoints: latency-svc-bl95b [2.075183141s] May 3 12:08:34.942: INFO: Created: latency-svc-kt6z9 May 3 12:08:34.951: INFO: Got endpoints: latency-svc-kt6z9 [2.093816684s] May 3 12:08:35.011: INFO: Created: latency-svc-x9cbd May 3 12:08:35.030: INFO: Got endpoints: latency-svc-x9cbd [2.106258225s] May 3 12:08:35.164: INFO: Created: latency-svc-xjsg7 May 3 12:08:35.185: INFO: Got endpoints: latency-svc-xjsg7 [2.198593485s] May 3 12:08:35.224: INFO: Created: latency-svc-jhgdm May 3 12:08:35.246: INFO: Got endpoints: latency-svc-jhgdm [2.181197511s] May 3 12:08:35.367: INFO: Created: latency-svc-p9dfj May 3 12:08:35.420: INFO: Got endpoints: latency-svc-p9dfj [2.276698799s] May 3 12:08:35.606: INFO: Created: latency-svc-lfxpb May 3 12:08:35.631: INFO: Got endpoints: latency-svc-lfxpb [2.433190171s] May 3 12:08:35.967: INFO: Created: latency-svc-vwx6m May 3 12:08:35.971: INFO: Got endpoints: latency-svc-vwx6m [2.68835267s] May 3 12:08:36.158: INFO: Created: latency-svc-xtqbz May 3 12:08:36.206: INFO: Got endpoints: latency-svc-xtqbz [2.833979428s] May 3 12:08:36.343: INFO: Created: latency-svc-qs7hw May 3 12:08:36.367: INFO: Got endpoints: latency-svc-qs7hw [2.901982439s] May 3 12:08:36.409: INFO: Created: latency-svc-8jcsk May 3 12:08:36.428: INFO: Got endpoints: latency-svc-8jcsk [2.893427731s] May 3 12:08:36.480: INFO: Created: latency-svc-grjll May 3 12:08:36.495: INFO: Got endpoints: latency-svc-grjll [2.83182334s] May 3 12:08:36.549: INFO: Created: latency-svc-znkxt May 3 12:08:36.567: INFO: Got endpoints: latency-svc-znkxt [2.887653346s] May 3 12:08:36.638: INFO: Created: latency-svc-rkpfb May 3 12:08:36.670: INFO: Got 
endpoints: latency-svc-rkpfb [2.040573215s] May 3 12:08:36.710: INFO: Created: latency-svc-5tjdg May 3 12:08:36.987: INFO: Got endpoints: latency-svc-5tjdg [2.23661262s] May 3 12:08:37.157: INFO: Created: latency-svc-drp4t May 3 12:08:37.163: INFO: Got endpoints: latency-svc-drp4t [2.409696172s] May 3 12:08:37.203: INFO: Created: latency-svc-8l8q8 May 3 12:08:37.216: INFO: Got endpoints: latency-svc-8l8q8 [2.264696121s] May 3 12:08:37.275: INFO: Created: latency-svc-tjpjv May 3 12:08:37.291: INFO: Got endpoints: latency-svc-tjpjv [2.2616564s] May 3 12:08:37.493: INFO: Created: latency-svc-ksm9c May 3 12:08:37.522: INFO: Got endpoints: latency-svc-ksm9c [2.336830484s] May 3 12:08:37.636: INFO: Created: latency-svc-zs92l May 3 12:08:37.639: INFO: Got endpoints: latency-svc-zs92l [2.392551555s] May 3 12:08:37.843: INFO: Created: latency-svc-kpnsg May 3 12:08:37.924: INFO: Got endpoints: latency-svc-kpnsg [2.504459747s] May 3 12:08:38.118: INFO: Created: latency-svc-cmv86 May 3 12:08:38.241: INFO: Got endpoints: latency-svc-cmv86 [2.610482192s] May 3 12:08:38.267: INFO: Created: latency-svc-m8qx2 May 3 12:08:38.296: INFO: Got endpoints: latency-svc-m8qx2 [2.325138977s] May 3 12:08:38.483: INFO: Created: latency-svc-7qs5j May 3 12:08:38.936: INFO: Got endpoints: latency-svc-7qs5j [2.729474818s] May 3 12:08:38.980: INFO: Created: latency-svc-phvwx May 3 12:08:39.242: INFO: Got endpoints: latency-svc-phvwx [2.874671522s] May 3 12:08:39.523: INFO: Created: latency-svc-mdqhx May 3 12:08:39.585: INFO: Got endpoints: latency-svc-mdqhx [3.157050737s] May 3 12:08:39.840: INFO: Created: latency-svc-tr6rw May 3 12:08:39.915: INFO: Got endpoints: latency-svc-tr6rw [3.420252526s] May 3 12:08:40.116: INFO: Created: latency-svc-5kdh4 May 3 12:08:40.289: INFO: Got endpoints: latency-svc-5kdh4 [3.722350787s] May 3 12:08:40.325: INFO: Created: latency-svc-w72nw May 3 12:08:40.335: INFO: Got endpoints: latency-svc-w72nw [3.665422797s] May 3 12:08:40.385: INFO: Created: latency-svc-dr758 May 3 12:08:40.462: INFO: Got endpoints: latency-svc-dr758 [3.474768939s] May 3 12:08:40.487: INFO: Created: latency-svc-w7tld May 3 12:08:40.504: INFO: Got endpoints: latency-svc-w7tld [3.340771707s] May 3 12:08:40.548: INFO: Created: latency-svc-56dvp May 3 12:08:40.600: INFO: Got endpoints: latency-svc-56dvp [3.383905801s] May 3 12:08:40.607: INFO: Created: latency-svc-zgznd May 3 12:08:40.624: INFO: Got endpoints: latency-svc-zgznd [3.332735385s] May 3 12:08:40.650: INFO: Created: latency-svc-fqscf May 3 12:08:40.667: INFO: Got endpoints: latency-svc-fqscf [3.144666568s] May 3 12:08:40.692: INFO: Created: latency-svc-tzkgj May 3 12:08:40.738: INFO: Got endpoints: latency-svc-tzkgj [3.099173731s] May 3 12:08:40.750: INFO: Created: latency-svc-w6g6w May 3 12:08:40.775: INFO: Got endpoints: latency-svc-w6g6w [2.850950356s] May 3 12:08:40.805: INFO: Created: latency-svc-cv2qc May 3 12:08:40.818: INFO: Got endpoints: latency-svc-cv2qc [2.576251314s] May 3 12:08:40.888: INFO: Created: latency-svc-4mlz8 May 3 12:08:40.891: INFO: Got endpoints: latency-svc-4mlz8 [2.59419884s] May 3 12:08:40.985: INFO: Created: latency-svc-9tlmx May 3 12:08:41.043: INFO: Got endpoints: latency-svc-9tlmx [2.107551777s] May 3 12:08:41.059: INFO: Created: latency-svc-26dxd May 3 12:08:41.064: INFO: Got endpoints: latency-svc-26dxd [1.822337774s] May 3 12:08:41.100: INFO: Created: latency-svc-7wjm8 May 3 12:08:41.119: INFO: Got endpoints: latency-svc-7wjm8 [1.533854284s] May 3 12:08:41.183: INFO: Created: latency-svc-cs7x5 May 3 12:08:41.204: INFO: Got 
endpoints: latency-svc-cs7x5 [1.288588124s] May 3 12:08:41.236: INFO: Created: latency-svc-r84dm May 3 12:08:41.258: INFO: Got endpoints: latency-svc-r84dm [968.381185ms] May 3 12:08:41.325: INFO: Created: latency-svc-wps6g May 3 12:08:41.330: INFO: Got endpoints: latency-svc-wps6g [994.551125ms] May 3 12:08:41.389: INFO: Created: latency-svc-css2v May 3 12:08:41.414: INFO: Got endpoints: latency-svc-css2v [952.050173ms] May 3 12:08:41.469: INFO: Created: latency-svc-4qkwd May 3 12:08:41.471: INFO: Got endpoints: latency-svc-4qkwd [966.996764ms] May 3 12:08:41.500: INFO: Created: latency-svc-gtldh May 3 12:08:41.518: INFO: Got endpoints: latency-svc-gtldh [917.907539ms] May 3 12:08:41.562: INFO: Created: latency-svc-l5rv4 May 3 12:08:41.600: INFO: Got endpoints: latency-svc-l5rv4 [975.440499ms] May 3 12:08:41.628: INFO: Created: latency-svc-m48b6 May 3 12:08:41.686: INFO: Got endpoints: latency-svc-m48b6 [1.018864713s] May 3 12:08:41.791: INFO: Created: latency-svc-tm2nl May 3 12:08:41.819: INFO: Got endpoints: latency-svc-tm2nl [1.081510458s] May 3 12:08:41.857: INFO: Created: latency-svc-s7v7m May 3 12:08:41.872: INFO: Got endpoints: latency-svc-s7v7m [1.096459695s] May 3 12:08:41.935: INFO: Created: latency-svc-9ksg7 May 3 12:08:41.939: INFO: Got endpoints: latency-svc-9ksg7 [1.120758856s] May 3 12:08:41.988: INFO: Created: latency-svc-c4dw9 May 3 12:08:42.005: INFO: Got endpoints: latency-svc-c4dw9 [1.114692968s] May 3 12:08:42.036: INFO: Created: latency-svc-7c95x May 3 12:08:42.085: INFO: Got endpoints: latency-svc-7c95x [1.041606408s] May 3 12:08:42.108: INFO: Created: latency-svc-4wtgd May 3 12:08:42.132: INFO: Got endpoints: latency-svc-4wtgd [1.067250867s] May 3 12:08:42.166: INFO: Created: latency-svc-w4mhs May 3 12:08:42.180: INFO: Got endpoints: latency-svc-w4mhs [1.060539062s] May 3 12:08:42.260: INFO: Created: latency-svc-vv5qr May 3 12:08:42.262: INFO: Got endpoints: latency-svc-vv5qr [1.058681811s] May 3 12:08:42.287: INFO: Created: latency-svc-4mwch May 3 12:08:42.301: INFO: Got endpoints: latency-svc-4mwch [1.042806608s] May 3 12:08:42.409: INFO: Created: latency-svc-2pjhv May 3 12:08:42.411: INFO: Got endpoints: latency-svc-2pjhv [1.081679287s] May 3 12:08:42.448: INFO: Created: latency-svc-h68xh May 3 12:08:42.482: INFO: Got endpoints: latency-svc-h68xh [1.067643011s] May 3 12:08:42.547: INFO: Created: latency-svc-4p8gn May 3 12:08:42.549: INFO: Got endpoints: latency-svc-4p8gn [1.078183808s] May 3 12:08:42.582: INFO: Created: latency-svc-ljk4g May 3 12:08:42.608: INFO: Got endpoints: latency-svc-ljk4g [1.090441806s] May 3 12:08:42.634: INFO: Created: latency-svc-6twsf May 3 12:08:42.684: INFO: Got endpoints: latency-svc-6twsf [1.084063761s] May 3 12:08:42.700: INFO: Created: latency-svc-rqs8n May 3 12:08:42.710: INFO: Got endpoints: latency-svc-rqs8n [1.024649752s] May 3 12:08:42.736: INFO: Created: latency-svc-pl8sd May 3 12:08:42.762: INFO: Got endpoints: latency-svc-pl8sd [942.91933ms] May 3 12:08:42.847: INFO: Created: latency-svc-v7x7w May 3 12:08:42.859: INFO: Got endpoints: latency-svc-v7x7w [987.23578ms] May 3 12:08:42.881: INFO: Created: latency-svc-bvzlv May 3 12:08:42.898: INFO: Got endpoints: latency-svc-bvzlv [959.607944ms] May 3 12:08:42.922: INFO: Created: latency-svc-zrx4h May 3 12:08:42.940: INFO: Got endpoints: latency-svc-zrx4h [934.71814ms] May 3 12:08:43.019: INFO: Created: latency-svc-8rrhs May 3 12:08:43.042: INFO: Got endpoints: latency-svc-8rrhs [957.307992ms] May 3 12:08:43.068: INFO: Created: latency-svc-lv9zb May 3 12:08:43.085: INFO: Got 
endpoints: latency-svc-lv9zb [953.786081ms] May 3 12:08:43.157: INFO: Created: latency-svc-v8lk4 May 3 12:08:43.159: INFO: Got endpoints: latency-svc-v8lk4 [978.990165ms] May 3 12:08:43.216: INFO: Created: latency-svc-nxdlq May 3 12:08:43.235: INFO: Got endpoints: latency-svc-nxdlq [972.82399ms] May 3 12:08:43.295: INFO: Created: latency-svc-dm5hz May 3 12:08:43.308: INFO: Got endpoints: latency-svc-dm5hz [1.007043284s] May 3 12:08:43.355: INFO: Created: latency-svc-dzwkw May 3 12:08:43.375: INFO: Got endpoints: latency-svc-dzwkw [963.704113ms] May 3 12:08:43.427: INFO: Created: latency-svc-q4zdq May 3 12:08:43.430: INFO: Got endpoints: latency-svc-q4zdq [947.77544ms] May 3 12:08:43.577: INFO: Created: latency-svc-xs468 May 3 12:08:43.580: INFO: Got endpoints: latency-svc-xs468 [1.031172663s] May 3 12:08:43.613: INFO: Created: latency-svc-gxs9l May 3 12:08:43.655: INFO: Got endpoints: latency-svc-gxs9l [1.046389108s] May 3 12:08:43.751: INFO: Created: latency-svc-84m8s May 3 12:08:43.846: INFO: Got endpoints: latency-svc-84m8s [1.161662412s] May 3 12:08:43.858: INFO: Created: latency-svc-42lpb May 3 12:08:43.900: INFO: Got endpoints: latency-svc-42lpb [1.189967413s] May 3 12:08:43.990: INFO: Created: latency-svc-4rbht May 3 12:08:44.007: INFO: Got endpoints: latency-svc-4rbht [1.244686823s] May 3 12:08:44.044: INFO: Created: latency-svc-d552p May 3 12:08:44.054: INFO: Got endpoints: latency-svc-d552p [1.194738295s] May 3 12:08:44.087: INFO: Created: latency-svc-6tsqx May 3 12:08:44.157: INFO: Got endpoints: latency-svc-6tsqx [1.258627048s] May 3 12:08:44.200: INFO: Created: latency-svc-vwjnt May 3 12:08:44.217: INFO: Got endpoints: latency-svc-vwjnt [1.27633849s] May 3 12:08:44.339: INFO: Created: latency-svc-7dplv May 3 12:08:44.349: INFO: Got endpoints: latency-svc-7dplv [1.306971647s] May 3 12:08:44.410: INFO: Created: latency-svc-vg2jv May 3 12:08:44.540: INFO: Got endpoints: latency-svc-vg2jv [1.454659668s] May 3 12:08:44.632: INFO: Created: latency-svc-bg8gc May 3 12:08:44.996: INFO: Got endpoints: latency-svc-bg8gc [1.837219627s] May 3 12:08:45.035: INFO: Created: latency-svc-rl2r6 May 3 12:08:45.169: INFO: Got endpoints: latency-svc-rl2r6 [1.933520118s] May 3 12:08:45.169: INFO: Created: latency-svc-rzd77 May 3 12:08:45.219: INFO: Got endpoints: latency-svc-rzd77 [1.911463235s] May 3 12:08:45.252: INFO: Created: latency-svc-n8ds7 May 3 12:08:45.348: INFO: Got endpoints: latency-svc-n8ds7 [1.973183458s] May 3 12:08:45.394: INFO: Created: latency-svc-2kzcn May 3 12:08:45.738: INFO: Got endpoints: latency-svc-2kzcn [2.30838344s] May 3 12:08:45.791: INFO: Created: latency-svc-tnl4d May 3 12:08:45.914: INFO: Got endpoints: latency-svc-tnl4d [2.333897915s] May 3 12:08:46.068: INFO: Created: latency-svc-zrdbj May 3 12:08:46.089: INFO: Got endpoints: latency-svc-zrdbj [2.434484783s] May 3 12:08:46.123: INFO: Created: latency-svc-wwfh2 May 3 12:08:46.131: INFO: Got endpoints: latency-svc-wwfh2 [2.285789487s] May 3 12:08:46.206: INFO: Created: latency-svc-lltd4 May 3 12:08:46.210: INFO: Got endpoints: latency-svc-lltd4 [2.309307947s] May 3 12:08:46.247: INFO: Created: latency-svc-4k5wc May 3 12:08:46.271: INFO: Got endpoints: latency-svc-4k5wc [2.263625106s] May 3 12:08:46.337: INFO: Created: latency-svc-np66b May 3 12:08:46.339: INFO: Got endpoints: latency-svc-np66b [2.285214739s] May 3 12:08:46.368: INFO: Created: latency-svc-l5562 May 3 12:08:46.385: INFO: Got endpoints: latency-svc-l5562 [2.228142942s] May 3 12:08:46.420: INFO: Created: latency-svc-tnfml May 3 12:08:46.427: INFO: Got 
endpoints: latency-svc-tnfml [2.210245384s] May 3 12:08:46.493: INFO: Created: latency-svc-qhnfq May 3 12:08:46.495: INFO: Got endpoints: latency-svc-qhnfq [2.145388038s] May 3 12:08:46.524: INFO: Created: latency-svc-nrqg9 May 3 12:08:46.530: INFO: Got endpoints: latency-svc-nrqg9 [1.989791641s] May 3 12:08:46.554: INFO: Created: latency-svc-2j9xg May 3 12:08:46.572: INFO: Got endpoints: latency-svc-2j9xg [1.576147065s] May 3 12:08:46.643: INFO: Created: latency-svc-zlz5k May 3 12:08:46.646: INFO: Got endpoints: latency-svc-zlz5k [1.476583661s] May 3 12:08:46.702: INFO: Created: latency-svc-6clfw May 3 12:08:46.711: INFO: Got endpoints: latency-svc-6clfw [1.491419455s] May 3 12:08:46.739: INFO: Created: latency-svc-hzf89 May 3 12:08:46.786: INFO: Got endpoints: latency-svc-hzf89 [1.437604626s] May 3 12:08:46.794: INFO: Created: latency-svc-mgdlg May 3 12:08:46.824: INFO: Got endpoints: latency-svc-mgdlg [1.085373847s] May 3 12:08:46.860: INFO: Created: latency-svc-rdmlt May 3 12:08:46.941: INFO: Got endpoints: latency-svc-rdmlt [1.027143397s] May 3 12:08:46.954: INFO: Created: latency-svc-vcnfh May 3 12:08:46.972: INFO: Got endpoints: latency-svc-vcnfh [882.40146ms] May 3 12:08:46.972: INFO: Latencies: [153.851699ms 182.492524ms 213.426259ms 284.208968ms 344.08736ms 476.202554ms 745.55123ms 776.742752ms 857.057238ms 857.774339ms 858.861889ms 859.436348ms 863.414627ms 865.12206ms 865.211886ms 865.851187ms 868.597082ms 871.029408ms 873.935292ms 874.304238ms 881.630067ms 882.40146ms 883.282492ms 907.419439ms 917.907539ms 934.71814ms 942.777003ms 942.91933ms 947.77544ms 952.050173ms 953.786081ms 957.307992ms 959.607944ms 961.98845ms 963.704113ms 966.996764ms 968.381185ms 972.82399ms 975.440499ms 977.393738ms 978.990165ms 980.403314ms 984.043802ms 987.23578ms 994.551125ms 996.297871ms 1.007043284s 1.018864713s 1.024649752s 1.027143397s 1.031172663s 1.036212988s 1.039819687s 1.041606408s 1.042806608s 1.04483084s 1.046389108s 1.056254602s 1.058681811s 1.060539062s 1.067250867s 1.067643011s 1.078183808s 1.081510458s 1.081679287s 1.084063761s 1.085373847s 1.090441806s 1.090654552s 1.096459695s 1.100219965s 1.114692968s 1.11606555s 1.120758856s 1.15878142s 1.161662412s 1.189967413s 1.190241494s 1.191734972s 1.194738295s 1.198520722s 1.208185278s 1.216650517s 1.216844339s 1.218599165s 1.224648861s 1.229405814s 1.229619689s 1.235429934s 1.238994865s 1.244686823s 1.248695331s 1.258627048s 1.263946766s 1.27633849s 1.27871504s 1.282510348s 1.288588124s 1.306971647s 1.318089637s 1.356707018s 1.370158093s 1.399043233s 1.418593868s 1.434336324s 1.437604626s 1.447162978s 1.450224479s 1.454659668s 1.471514055s 1.476583661s 1.491419455s 1.493942422s 1.515280629s 1.533236797s 1.533854284s 1.536946015s 1.553897736s 1.576147065s 1.644634197s 1.822337774s 1.837219627s 1.911463235s 1.933520118s 1.936868245s 1.964495738s 1.973183458s 1.977834116s 1.989791641s 2.007462978s 2.040573215s 2.069228122s 2.075183141s 2.093816684s 2.106258225s 2.107551777s 2.145388038s 2.181197511s 2.198593485s 2.210245384s 2.228142942s 2.234313768s 2.23661262s 2.2616564s 2.263625106s 2.264696121s 2.272399499s 2.276698799s 2.285214739s 2.285789487s 2.30838344s 2.309307947s 2.325138977s 2.333897915s 2.336830484s 2.342629643s 2.392551555s 2.404827916s 2.409696172s 2.433190171s 2.434484783s 2.504459747s 2.576251314s 2.586202303s 2.59419884s 2.610482192s 2.68835267s 2.706055142s 2.729474818s 2.83182334s 2.833979428s 2.850950356s 2.874671522s 2.887653346s 2.893427731s 2.893530266s 2.901982439s 2.927015408s 2.967922304s 3.024548791s 3.099173731s 
3.144666568s 3.157050737s 3.194416344s 3.219697484s 3.257490228s 3.332735385s 3.340771707s 3.383905801s 3.420252526s 3.474768939s 3.665422797s 3.722350787s 3.723858311s 3.731867249s 3.736187093s 3.750377704s 3.760254585s 3.760820107s 3.876842911s] May 3 12:08:46.972: INFO: 50 %ile: 1.356707018s May 3 12:08:46.972: INFO: 90 %ile: 3.099173731s May 3 12:08:46.972: INFO: 99 %ile: 3.760820107s May 3 12:08:46.972: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:08:46.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-7jhvs" for this suite. May 3 12:09:24.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:09:25.021: INFO: namespace: e2e-tests-svc-latency-7jhvs, resource: bindings, ignored listing per whitelist May 3 12:09:25.070: INFO: namespace e2e-tests-svc-latency-7jhvs deletion completed in 38.087179274s • [SLOW TEST:65.632 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:09:25.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-mj4g STEP: Creating a pod to test atomic-volume-subpath May 3 12:09:25.608: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-mj4g" in namespace "e2e-tests-subpath-5hmmm" to be "success or failure" May 3 12:09:25.776: INFO: Pod "pod-subpath-test-projected-mj4g": Phase="Pending", Reason="", readiness=false. Elapsed: 168.161455ms May 3 12:09:27.781: INFO: Pod "pod-subpath-test-projected-mj4g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172866432s May 3 12:09:29.832: INFO: Pod "pod-subpath-test-projected-mj4g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.22416671s May 3 12:09:31.865: INFO: Pod "pod-subpath-test-projected-mj4g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.256537668s May 3 12:09:33.883: INFO: Pod "pod-subpath-test-projected-mj4g": Phase="Running", Reason="", readiness=false. Elapsed: 8.274495934s May 3 12:09:35.887: INFO: Pod "pod-subpath-test-projected-mj4g": Phase="Running", Reason="", readiness=false. Elapsed: 10.278485792s May 3 12:09:37.891: INFO: Pod "pod-subpath-test-projected-mj4g": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.282857818s May 3 12:09:39.955: INFO: Pod "pod-subpath-test-projected-mj4g": Phase="Running", Reason="", readiness=false. Elapsed: 14.346767498s May 3 12:09:41.962: INFO: Pod "pod-subpath-test-projected-mj4g": Phase="Running", Reason="", readiness=false. Elapsed: 16.353400501s May 3 12:09:43.965: INFO: Pod "pod-subpath-test-projected-mj4g": Phase="Running", Reason="", readiness=false. Elapsed: 18.357118168s May 3 12:09:45.970: INFO: Pod "pod-subpath-test-projected-mj4g": Phase="Running", Reason="", readiness=false. Elapsed: 20.361381667s May 3 12:09:47.974: INFO: Pod "pod-subpath-test-projected-mj4g": Phase="Running", Reason="", readiness=false. Elapsed: 22.366020256s May 3 12:09:49.979: INFO: Pod "pod-subpath-test-projected-mj4g": Phase="Running", Reason="", readiness=false. Elapsed: 24.370412521s May 3 12:09:51.982: INFO: Pod "pod-subpath-test-projected-mj4g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.374144519s STEP: Saw pod success May 3 12:09:51.982: INFO: Pod "pod-subpath-test-projected-mj4g" satisfied condition "success or failure" May 3 12:09:52.010: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-projected-mj4g container test-container-subpath-projected-mj4g: STEP: delete the pod May 3 12:09:52.028: INFO: Waiting for pod pod-subpath-test-projected-mj4g to disappear May 3 12:09:52.033: INFO: Pod pod-subpath-test-projected-mj4g no longer exists STEP: Deleting pod pod-subpath-test-projected-mj4g May 3 12:09:52.033: INFO: Deleting pod "pod-subpath-test-projected-mj4g" in namespace "e2e-tests-subpath-5hmmm" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:09:52.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-5hmmm" for this suite. 
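[editor's note] Both subpath specs above (downwardapi earlier, projected here) mount an atomic-writer volume and then re-mount a single entry of it through subPath; the pod stays Running for roughly half a minute in the log while the test confirms the subPath-mounted file is readable and correct. A sketch of the mount shape only, with illustrative names; the real fixture wires this into the pod-subpath-test-* pods:

package fixtures

import corev1 "k8s.io/api/core/v1"

// subpathMounts mounts the same volume twice: the whole volume, and one
// entry of it via SubPath at a separate path.
func subpathMounts(volName string) []corev1.VolumeMount {
	return []corev1.VolumeMount{
		{
			Name:      volName,
			MountPath: "/test-volume", // whole volume
		},
		{
			Name:      volName,
			MountPath: "/probe-volume/podname", // single file out of the same volume
			SubPath:   "podname",
		},
	}
}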
May 3 12:09:58.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:09:58.137: INFO: namespace: e2e-tests-subpath-5hmmm, resource: bindings, ignored listing per whitelist May 3 12:09:58.137: INFO: namespace e2e-tests-subpath-5hmmm deletion completed in 6.09979341s • [SLOW TEST:33.067 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:09:58.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 3 12:09:58.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-2lfm7' May 3 12:10:00.690: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 3 12:10:00.690: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 May 3 12:10:04.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-2lfm7' May 3 12:10:04.904: INFO: stderr: "" May 3 12:10:04.904: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:10:04.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-2lfm7" for this suite. 
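[editor's note] The kubectl spec above leans on the deprecated --generator=deployment/v1beta1 path, which the stderr already warns about, and the stdout shows the object landing in the extensions group. What that generator effectively produces is a Deployment whose selector and pod template share a run=<name> label. A rough modern equivalent using apps/v1 (extensions/v1beta1 Deployments are long gone); the replica count and label scheme follow the generator's defaults as best I can tell, so treat this as an approximation rather than the generator's exact output:

package fixtures

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// runDeployment builds a single-replica Deployment labelled run=<name>,
// roughly what `kubectl run <name> --image=<image>` used to generate.
func runDeployment(name, image string) *appsv1.Deployment {
	labels := map[string]string{"run": name}
	replicas := int32(1)
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: name, Labels: labels},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: name, Image: image}},
				},
			},
		},
	}
}

For example, runDeployment("e2e-test-nginx-deployment", "docker.io/library/nginx:1.14-alpine") mirrors the object the test verifies before deleting it.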
May 3 12:10:27.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:10:27.088: INFO: namespace: e2e-tests-kubectl-2lfm7, resource: bindings, ignored listing per whitelist May 3 12:10:27.138: INFO: namespace e2e-tests-kubectl-2lfm7 deletion completed in 22.230073494s • [SLOW TEST:29.000 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:10:27.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-132c8e80-8d37-11ea-b78d-0242ac110017 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-132c8e80-8d37-11ea-b78d-0242ac110017 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:11:35.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-lfq9d" for this suite. 
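The ConfigMap-update test above only logs the high-level steps; a hands-on way to observe the same behaviour is sketched below. The names, image, and polling command are assumptions; the point is that the kubelet re-projects the volume after the ConfigMap changes, so the mounted file updates in place without restarting the pod (typically within the kubelet sync period).

kubectl create configmap configmap-test-upd-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo
spec:
  containers:
  - name: view
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/config/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    configMap:
      name: configmap-test-upd-demo
EOF
# Change the data and watch the mounted file follow it (Ctrl-C to stop following).
kubectl patch configmap configmap-test-upd-demo -p '{"data":{"data-1":"value-2"}}'
kubectl logs -f configmap-volume-demo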
May 3 12:11:57.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:11:57.778: INFO: namespace: e2e-tests-configmap-lfq9d, resource: bindings, ignored listing per whitelist May 3 12:11:57.831: INFO: namespace e2e-tests-configmap-lfq9d deletion completed in 22.203985585s • [SLOW TEST:90.693 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:11:57.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-4wjw STEP: Creating a pod to test atomic-volume-subpath May 3 12:11:57.954: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4wjw" in namespace "e2e-tests-subpath-kvglv" to be "success or failure" May 3 12:11:57.957: INFO: Pod "pod-subpath-test-configmap-4wjw": Phase="Pending", Reason="", readiness=false. Elapsed: 3.530025ms May 3 12:11:59.962: INFO: Pod "pod-subpath-test-configmap-4wjw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007823395s May 3 12:12:01.965: INFO: Pod "pod-subpath-test-configmap-4wjw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01157465s May 3 12:12:03.969: INFO: Pod "pod-subpath-test-configmap-4wjw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015719566s May 3 12:12:05.973: INFO: Pod "pod-subpath-test-configmap-4wjw": Phase="Running", Reason="", readiness=false. Elapsed: 8.019605408s May 3 12:12:07.978: INFO: Pod "pod-subpath-test-configmap-4wjw": Phase="Running", Reason="", readiness=false. Elapsed: 10.024125685s May 3 12:12:09.983: INFO: Pod "pod-subpath-test-configmap-4wjw": Phase="Running", Reason="", readiness=false. Elapsed: 12.028864902s May 3 12:12:11.987: INFO: Pod "pod-subpath-test-configmap-4wjw": Phase="Running", Reason="", readiness=false. Elapsed: 14.033377906s May 3 12:12:13.992: INFO: Pod "pod-subpath-test-configmap-4wjw": Phase="Running", Reason="", readiness=false. Elapsed: 16.037786375s May 3 12:12:15.997: INFO: Pod "pod-subpath-test-configmap-4wjw": Phase="Running", Reason="", readiness=false. Elapsed: 18.043134109s May 3 12:12:18.001: INFO: Pod "pod-subpath-test-configmap-4wjw": Phase="Running", Reason="", readiness=false. Elapsed: 20.047474723s May 3 12:12:20.010: INFO: Pod "pod-subpath-test-configmap-4wjw": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.056541771s May 3 12:12:22.039: INFO: Pod "pod-subpath-test-configmap-4wjw": Phase="Running", Reason="", readiness=false. Elapsed: 24.085022895s May 3 12:12:24.043: INFO: Pod "pod-subpath-test-configmap-4wjw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.089444793s STEP: Saw pod success May 3 12:12:24.043: INFO: Pod "pod-subpath-test-configmap-4wjw" satisfied condition "success or failure" May 3 12:12:24.047: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-4wjw container test-container-subpath-configmap-4wjw: STEP: delete the pod May 3 12:12:24.331: INFO: Waiting for pod pod-subpath-test-configmap-4wjw to disappear May 3 12:12:24.426: INFO: Pod pod-subpath-test-configmap-4wjw no longer exists STEP: Deleting pod pod-subpath-test-configmap-4wjw May 3 12:12:24.426: INFO: Deleting pod "pod-subpath-test-configmap-4wjw" in namespace "e2e-tests-subpath-kvglv" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:12:24.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-kvglv" for this suite. May 3 12:12:30.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:12:30.570: INFO: namespace: e2e-tests-subpath-kvglv, resource: bindings, ignored listing per whitelist May 3 12:12:30.603: INFO: namespace e2e-tests-subpath-kvglv deletion completed in 6.114410627s • [SLOW TEST:32.772 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:12:30.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-5cbf4a4f-8d37-11ea-b78d-0242ac110017 STEP: Creating a pod to test consume secrets May 3 12:12:30.780: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5cbff9b8-8d37-11ea-b78d-0242ac110017" in namespace "e2e-tests-projected-dw4sr" to be "success or failure" May 3 12:12:30.792: INFO: Pod "pod-projected-secrets-5cbff9b8-8d37-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 12.136769ms May 3 12:12:32.807: INFO: Pod "pod-projected-secrets-5cbff9b8-8d37-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.027280116s May 3 12:12:34.811: INFO: Pod "pod-projected-secrets-5cbff9b8-8d37-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031254355s STEP: Saw pod success May 3 12:12:34.811: INFO: Pod "pod-projected-secrets-5cbff9b8-8d37-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 12:12:34.814: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-5cbff9b8-8d37-11ea-b78d-0242ac110017 container projected-secret-volume-test: STEP: delete the pod May 3 12:12:34.827: INFO: Waiting for pod pod-projected-secrets-5cbff9b8-8d37-11ea-b78d-0242ac110017 to disappear May 3 12:12:34.838: INFO: Pod pod-projected-secrets-5cbff9b8-8d37-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:12:34.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-dw4sr" for this suite. May 3 12:12:40.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:12:40.899: INFO: namespace: e2e-tests-projected-dw4sr, resource: bindings, ignored listing per whitelist May 3 12:12:40.950: INFO: namespace e2e-tests-projected-dw4sr deletion completed in 6.109240689s • [SLOW TEST:10.347 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:12:40.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium May 3 12:12:41.062: INFO: Waiting up to 5m0s for pod "pod-62e95601-8d37-11ea-b78d-0242ac110017" in namespace "e2e-tests-emptydir-5wwcg" to be "success or failure" May 3 12:12:41.083: INFO: Pod "pod-62e95601-8d37-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 20.108486ms May 3 12:12:43.086: INFO: Pod "pod-62e95601-8d37-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023643939s May 3 12:12:45.090: INFO: Pod "pod-62e95601-8d37-11ea-b78d-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.027152259s May 3 12:12:47.094: INFO: Pod "pod-62e95601-8d37-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.031052973s STEP: Saw pod success May 3 12:12:47.094: INFO: Pod "pod-62e95601-8d37-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 12:12:47.096: INFO: Trying to get logs from node hunter-worker pod pod-62e95601-8d37-11ea-b78d-0242ac110017 container test-container: STEP: delete the pod May 3 12:12:47.140: INFO: Waiting for pod pod-62e95601-8d37-11ea-b78d-0242ac110017 to disappear May 3 12:12:47.199: INFO: Pod pod-62e95601-8d37-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:12:47.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-5wwcg" for this suite. May 3 12:12:53.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:12:53.298: INFO: namespace: e2e-tests-emptydir-5wwcg, resource: bindings, ignored listing per whitelist May 3 12:12:53.317: INFO: namespace e2e-tests-emptydir-5wwcg deletion completed in 6.114527132s • [SLOW TEST:12.367 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:12:53.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command May 3 12:12:53.441: INFO: Waiting up to 5m0s for pod "var-expansion-6a4a1f7c-8d37-11ea-b78d-0242ac110017" in namespace "e2e-tests-var-expansion-z56sb" to be "success or failure" May 3 12:12:53.445: INFO: Pod "var-expansion-6a4a1f7c-8d37-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.823896ms May 3 12:12:55.450: INFO: Pod "var-expansion-6a4a1f7c-8d37-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008111829s May 3 12:12:57.454: INFO: Pod "var-expansion-6a4a1f7c-8d37-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012742039s STEP: Saw pod success May 3 12:12:57.454: INFO: Pod "var-expansion-6a4a1f7c-8d37-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 12:12:57.457: INFO: Trying to get logs from node hunter-worker pod var-expansion-6a4a1f7c-8d37-11ea-b78d-0242ac110017 container dapi-container: STEP: delete the pod May 3 12:12:57.482: INFO: Waiting for pod var-expansion-6a4a1f7c-8d37-11ea-b78d-0242ac110017 to disappear May 3 12:12:57.518: INFO: Pod var-expansion-6a4a1f7c-8d37-11ea-b78d-0242ac110017 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:12:57.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-z56sb" for this suite. May 3 12:13:03.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:13:03.687: INFO: namespace: e2e-tests-var-expansion-z56sb, resource: bindings, ignored listing per whitelist May 3 12:13:03.696: INFO: namespace e2e-tests-var-expansion-z56sb deletion completed in 6.175002463s • [SLOW TEST:10.378 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:13:03.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 3 12:13:11.086: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:13:12.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-b6l2l" for this suite. 
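The adoption/release flow in the ReplicaSet test above hinges entirely on labels and ownerReferences. A minimal sketch of the same sequence follows; the manifests and names are illustrative, not the ones the suite generates.

# 1. A bare pod that already carries the label the ReplicaSet will select on.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-release
  labels:
    name: pod-adoption-release
spec:
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine
EOF
# 2. A ReplicaSet with a matching selector adopts the orphan instead of creating a new pod.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences[0].name}'
# 3. Changing the matched label releases the pod; the ReplicaSet spins up a replacement.
kubectl label pod pod-adoption-release name=pod-adoption-release-released --overwrite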
May 3 12:13:36.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:13:36.396: INFO: namespace: e2e-tests-replicaset-b6l2l, resource: bindings, ignored listing per whitelist May 3 12:13:36.440: INFO: namespace e2e-tests-replicaset-b6l2l deletion completed in 24.117583004s • [SLOW TEST:32.743 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:13:36.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 3 12:13:36.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-9w5zw' May 3 12:13:36.657: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 3 12:13:36.657: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 May 3 12:13:36.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-9w5zw' May 3 12:13:36.758: INFO: stderr: "" May 3 12:13:36.758: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:13:36.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-9w5zw" for this suite. 
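As with the deployment case earlier, the job/v1 run generator used here is deprecated and has since been removed from kubectl. The first command is the one from the log; the second is the closest current equivalent (note that kubectl create job typically emits a pod template with restartPolicy Never, so adjust if OnFailure semantics matter).

# As executed by the test (deprecated generator):
kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure \
  --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine \
  --namespace=e2e-tests-kubectl-9w5zw

# Closest equivalent on current kubectl:
kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine \
  --namespace=e2e-tests-kubectl-9w5zw
kubectl delete job e2e-test-nginx-job --namespace=e2e-tests-kubectl-9w5zw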
May 3 12:13:58.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:13:58.823: INFO: namespace: e2e-tests-kubectl-9w5zw, resource: bindings, ignored listing per whitelist May 3 12:13:58.873: INFO: namespace e2e-tests-kubectl-9w5zw deletion completed in 22.097983293s • [SLOW TEST:22.433 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:13:58.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 3 12:13:58.967: INFO: Creating ReplicaSet my-hostname-basic-915c3420-8d37-11ea-b78d-0242ac110017 May 3 12:13:58.986: INFO: Pod name my-hostname-basic-915c3420-8d37-11ea-b78d-0242ac110017: Found 0 pods out of 1 May 3 12:14:03.990: INFO: Pod name my-hostname-basic-915c3420-8d37-11ea-b78d-0242ac110017: Found 1 pods out of 1 May 3 12:14:03.990: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-915c3420-8d37-11ea-b78d-0242ac110017" is running May 3 12:14:03.992: INFO: Pod "my-hostname-basic-915c3420-8d37-11ea-b78d-0242ac110017-62nw4" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-03 12:13:59 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-03 12:14:01 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-03 12:14:01 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-03 12:13:58 +0000 UTC Reason: Message:}]) May 3 12:14:03.992: INFO: Trying to dial the pod May 3 12:14:09.007: INFO: Controller my-hostname-basic-915c3420-8d37-11ea-b78d-0242ac110017: Got expected result from replica 1 [my-hostname-basic-915c3420-8d37-11ea-b78d-0242ac110017-62nw4]: "my-hostname-basic-915c3420-8d37-11ea-b78d-0242ac110017-62nw4", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:14:09.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-kpljt" for this suite. 
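The "serve a basic image on each replica" check above boils down to: create a ReplicaSet, wait for its pod, and confirm each replica answers with its own hostname. A sketch of that flow is below; the agnhost image, tag, and port are assumptions (any image that serves its hostname over HTTP will do), since the suite's own image is not shown in this log.

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-hostname-basic-demo
  template:
    metadata:
      labels:
        app: my-hostname-basic-demo
    spec:
      containers:
      - name: serve-hostname
        image: registry.k8s.io/e2e-test-images/agnhost:2.39   # assumed image/tag
        args: ["serve-hostname"]
        ports:
        - containerPort: 9376   # assumed default port of the hostname server
EOF
kubectl wait --for=condition=Ready pod -l app=my-hostname-basic-demo
POD_IP=$(kubectl get pod -l app=my-hostname-basic-demo -o jsonpath='{.items[0].status.podIP}')
# Each replica should reply with its own pod name.
kubectl run curl-check --rm -it --restart=Never --image=busybox -- wget -qO- "http://${POD_IP}:9376/"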
May 3 12:14:15.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:14:15.042: INFO: namespace: e2e-tests-replicaset-kpljt, resource: bindings, ignored listing per whitelist May 3 12:14:15.108: INFO: namespace e2e-tests-replicaset-kpljt deletion completed in 6.093622872s • [SLOW TEST:16.235 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:14:15.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-wr4ff [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet May 3 12:14:15.301: INFO: Found 0 stateful pods, waiting for 3 May 3 12:14:25.306: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 3 12:14:25.306: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 3 12:14:25.306: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 3 12:14:35.307: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 3 12:14:35.307: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 3 12:14:35.307: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 3 12:14:35.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wr4ff ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 3 12:14:35.571: INFO: stderr: "I0503 12:14:35.456028 1987 log.go:172] (0xc00016c6e0) (0xc0006cc640) Create stream\nI0503 12:14:35.456097 1987 log.go:172] (0xc00016c6e0) (0xc0006cc640) Stream added, broadcasting: 1\nI0503 12:14:35.458976 1987 log.go:172] (0xc00016c6e0) Reply frame received for 1\nI0503 12:14:35.459023 1987 log.go:172] (0xc00016c6e0) (0xc0007dad20) Create stream\nI0503 12:14:35.459037 1987 log.go:172] (0xc00016c6e0) (0xc0007dad20) Stream added, broadcasting: 3\nI0503 12:14:35.459811 1987 log.go:172] (0xc00016c6e0) Reply frame received for 3\nI0503 12:14:35.459850 1987 log.go:172] (0xc00016c6e0) (0xc000784000) Create stream\nI0503 12:14:35.459868 1987 log.go:172] 
(0xc00016c6e0) (0xc000784000) Stream added, broadcasting: 5\nI0503 12:14:35.460498 1987 log.go:172] (0xc00016c6e0) Reply frame received for 5\nI0503 12:14:35.564434 1987 log.go:172] (0xc00016c6e0) Data frame received for 3\nI0503 12:14:35.564482 1987 log.go:172] (0xc0007dad20) (3) Data frame handling\nI0503 12:14:35.564531 1987 log.go:172] (0xc0007dad20) (3) Data frame sent\nI0503 12:14:35.564556 1987 log.go:172] (0xc00016c6e0) Data frame received for 3\nI0503 12:14:35.564580 1987 log.go:172] (0xc0007dad20) (3) Data frame handling\nI0503 12:14:35.564607 1987 log.go:172] (0xc00016c6e0) Data frame received for 5\nI0503 12:14:35.564623 1987 log.go:172] (0xc000784000) (5) Data frame handling\nI0503 12:14:35.567058 1987 log.go:172] (0xc00016c6e0) Data frame received for 1\nI0503 12:14:35.567094 1987 log.go:172] (0xc0006cc640) (1) Data frame handling\nI0503 12:14:35.567113 1987 log.go:172] (0xc0006cc640) (1) Data frame sent\nI0503 12:14:35.567130 1987 log.go:172] (0xc00016c6e0) (0xc0006cc640) Stream removed, broadcasting: 1\nI0503 12:14:35.567164 1987 log.go:172] (0xc00016c6e0) Go away received\nI0503 12:14:35.567474 1987 log.go:172] (0xc00016c6e0) (0xc0006cc640) Stream removed, broadcasting: 1\nI0503 12:14:35.567504 1987 log.go:172] (0xc00016c6e0) (0xc0007dad20) Stream removed, broadcasting: 3\nI0503 12:14:35.567517 1987 log.go:172] (0xc00016c6e0) (0xc000784000) Stream removed, broadcasting: 5\n" May 3 12:14:35.571: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 3 12:14:35.571: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 3 12:14:45.617: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 3 12:14:55.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wr4ff ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 3 12:14:55.858: INFO: stderr: "I0503 12:14:55.803667 2011 log.go:172] (0xc00014c840) (0xc000768640) Create stream\nI0503 12:14:55.803729 2011 log.go:172] (0xc00014c840) (0xc000768640) Stream added, broadcasting: 1\nI0503 12:14:55.806355 2011 log.go:172] (0xc00014c840) Reply frame received for 1\nI0503 12:14:55.806400 2011 log.go:172] (0xc00014c840) (0xc0007686e0) Create stream\nI0503 12:14:55.806414 2011 log.go:172] (0xc00014c840) (0xc0007686e0) Stream added, broadcasting: 3\nI0503 12:14:55.807450 2011 log.go:172] (0xc00014c840) Reply frame received for 3\nI0503 12:14:55.807473 2011 log.go:172] (0xc00014c840) (0xc000768780) Create stream\nI0503 12:14:55.807482 2011 log.go:172] (0xc00014c840) (0xc000768780) Stream added, broadcasting: 5\nI0503 12:14:55.808340 2011 log.go:172] (0xc00014c840) Reply frame received for 5\nI0503 12:14:55.853351 2011 log.go:172] (0xc00014c840) Data frame received for 3\nI0503 12:14:55.853391 2011 log.go:172] (0xc0007686e0) (3) Data frame handling\nI0503 12:14:55.853401 2011 log.go:172] (0xc0007686e0) (3) Data frame sent\nI0503 12:14:55.853410 2011 log.go:172] (0xc00014c840) Data frame received for 3\nI0503 12:14:55.853416 2011 log.go:172] (0xc0007686e0) (3) Data frame handling\nI0503 12:14:55.853427 2011 log.go:172] (0xc00014c840) Data frame received for 5\nI0503 12:14:55.853434 2011 log.go:172] (0xc000768780) (5) Data frame handling\nI0503 12:14:55.854794 2011 
log.go:172] (0xc00014c840) Data frame received for 1\nI0503 12:14:55.854818 2011 log.go:172] (0xc000768640) (1) Data frame handling\nI0503 12:14:55.854833 2011 log.go:172] (0xc000768640) (1) Data frame sent\nI0503 12:14:55.854844 2011 log.go:172] (0xc00014c840) (0xc000768640) Stream removed, broadcasting: 1\nI0503 12:14:55.854867 2011 log.go:172] (0xc00014c840) Go away received\nI0503 12:14:55.855061 2011 log.go:172] (0xc00014c840) (0xc000768640) Stream removed, broadcasting: 1\nI0503 12:14:55.855079 2011 log.go:172] (0xc00014c840) (0xc0007686e0) Stream removed, broadcasting: 3\nI0503 12:14:55.855086 2011 log.go:172] (0xc00014c840) (0xc000768780) Stream removed, broadcasting: 5\n" May 3 12:14:55.858: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 3 12:14:55.858: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 3 12:15:06.080: INFO: Waiting for StatefulSet e2e-tests-statefulset-wr4ff/ss2 to complete update May 3 12:15:06.080: INFO: Waiting for Pod e2e-tests-statefulset-wr4ff/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 3 12:15:06.080: INFO: Waiting for Pod e2e-tests-statefulset-wr4ff/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 3 12:15:16.088: INFO: Waiting for StatefulSet e2e-tests-statefulset-wr4ff/ss2 to complete update May 3 12:15:16.089: INFO: Waiting for Pod e2e-tests-statefulset-wr4ff/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision May 3 12:15:26.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wr4ff ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 3 12:15:26.369: INFO: stderr: "I0503 12:15:26.223083 2033 log.go:172] (0xc000138840) (0xc0005b1400) Create stream\nI0503 12:15:26.223153 2033 log.go:172] (0xc000138840) (0xc0005b1400) Stream added, broadcasting: 1\nI0503 12:15:26.232009 2033 log.go:172] (0xc000138840) Reply frame received for 1\nI0503 12:15:26.232082 2033 log.go:172] (0xc000138840) (0xc0002f8000) Create stream\nI0503 12:15:26.232095 2033 log.go:172] (0xc000138840) (0xc0002f8000) Stream added, broadcasting: 3\nI0503 12:15:26.233603 2033 log.go:172] (0xc000138840) Reply frame received for 3\nI0503 12:15:26.233669 2033 log.go:172] (0xc000138840) (0xc0002f80a0) Create stream\nI0503 12:15:26.233708 2033 log.go:172] (0xc000138840) (0xc0002f80a0) Stream added, broadcasting: 5\nI0503 12:15:26.236346 2033 log.go:172] (0xc000138840) Reply frame received for 5\nI0503 12:15:26.362825 2033 log.go:172] (0xc000138840) Data frame received for 3\nI0503 12:15:26.362863 2033 log.go:172] (0xc0002f8000) (3) Data frame handling\nI0503 12:15:26.362888 2033 log.go:172] (0xc0002f8000) (3) Data frame sent\nI0503 12:15:26.362900 2033 log.go:172] (0xc000138840) Data frame received for 3\nI0503 12:15:26.362909 2033 log.go:172] (0xc0002f8000) (3) Data frame handling\nI0503 12:15:26.363113 2033 log.go:172] (0xc000138840) Data frame received for 5\nI0503 12:15:26.363149 2033 log.go:172] (0xc0002f80a0) (5) Data frame handling\nI0503 12:15:26.365657 2033 log.go:172] (0xc000138840) Data frame received for 1\nI0503 12:15:26.365693 2033 log.go:172] (0xc0005b1400) (1) Data frame handling\nI0503 12:15:26.365732 2033 log.go:172] (0xc0005b1400) (1) Data frame sent\nI0503 12:15:26.365774 2033 log.go:172] (0xc000138840) (0xc0005b1400) Stream removed, broadcasting: 1\nI0503 
12:15:26.365823 2033 log.go:172] (0xc000138840) Go away received\nI0503 12:15:26.366014 2033 log.go:172] (0xc000138840) (0xc0005b1400) Stream removed, broadcasting: 1\nI0503 12:15:26.366038 2033 log.go:172] (0xc000138840) (0xc0002f8000) Stream removed, broadcasting: 3\nI0503 12:15:26.366052 2033 log.go:172] (0xc000138840) (0xc0002f80a0) Stream removed, broadcasting: 5\n" May 3 12:15:26.369: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 3 12:15:26.369: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 3 12:15:36.455: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 3 12:15:46.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wr4ff ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 3 12:15:46.698: INFO: stderr: "I0503 12:15:46.601072 2055 log.go:172] (0xc0003444d0) (0xc0005f5360) Create stream\nI0503 12:15:46.601259 2055 log.go:172] (0xc0003444d0) (0xc0005f5360) Stream added, broadcasting: 1\nI0503 12:15:46.603212 2055 log.go:172] (0xc0003444d0) Reply frame received for 1\nI0503 12:15:46.603246 2055 log.go:172] (0xc0003444d0) (0xc0005f5400) Create stream\nI0503 12:15:46.603260 2055 log.go:172] (0xc0003444d0) (0xc0005f5400) Stream added, broadcasting: 3\nI0503 12:15:46.604209 2055 log.go:172] (0xc0003444d0) Reply frame received for 3\nI0503 12:15:46.604242 2055 log.go:172] (0xc0003444d0) (0xc0006ca000) Create stream\nI0503 12:15:46.604251 2055 log.go:172] (0xc0003444d0) (0xc0006ca000) Stream added, broadcasting: 5\nI0503 12:15:46.605001 2055 log.go:172] (0xc0003444d0) Reply frame received for 5\nI0503 12:15:46.693355 2055 log.go:172] (0xc0003444d0) Data frame received for 5\nI0503 12:15:46.693406 2055 log.go:172] (0xc0006ca000) (5) Data frame handling\nI0503 12:15:46.693428 2055 log.go:172] (0xc0003444d0) Data frame received for 3\nI0503 12:15:46.693435 2055 log.go:172] (0xc0005f5400) (3) Data frame handling\nI0503 12:15:46.693446 2055 log.go:172] (0xc0005f5400) (3) Data frame sent\nI0503 12:15:46.693452 2055 log.go:172] (0xc0003444d0) Data frame received for 3\nI0503 12:15:46.693459 2055 log.go:172] (0xc0005f5400) (3) Data frame handling\nI0503 12:15:46.694700 2055 log.go:172] (0xc0003444d0) Data frame received for 1\nI0503 12:15:46.694732 2055 log.go:172] (0xc0005f5360) (1) Data frame handling\nI0503 12:15:46.694741 2055 log.go:172] (0xc0005f5360) (1) Data frame sent\nI0503 12:15:46.694752 2055 log.go:172] (0xc0003444d0) (0xc0005f5360) Stream removed, broadcasting: 1\nI0503 12:15:46.694810 2055 log.go:172] (0xc0003444d0) Go away received\nI0503 12:15:46.694906 2055 log.go:172] (0xc0003444d0) (0xc0005f5360) Stream removed, broadcasting: 1\nI0503 12:15:46.694919 2055 log.go:172] (0xc0003444d0) (0xc0005f5400) Stream removed, broadcasting: 3\nI0503 12:15:46.694926 2055 log.go:172] (0xc0003444d0) (0xc0006ca000) Stream removed, broadcasting: 5\n" May 3 12:15:46.698: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 3 12:15:46.698: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 3 12:16:06.720: INFO: Waiting for StatefulSet e2e-tests-statefulset-wr4ff/ss2 to complete update May 3 12:16:06.720: INFO: Waiting for Pod e2e-tests-statefulset-wr4ff/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] 
Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 3 12:16:16.729: INFO: Deleting all statefulset in ns e2e-tests-statefulset-wr4ff May 3 12:16:16.732: INFO: Scaling statefulset ss2 to 0 May 3 12:16:36.763: INFO: Waiting for statefulset status.replicas updated to 0 May 3 12:16:36.766: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:16:36.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-wr4ff" for this suite. May 3 12:16:42.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:16:42.852: INFO: namespace: e2e-tests-statefulset-wr4ff, resource: bindings, ignored listing per whitelist May 3 12:16:42.902: INFO: namespace e2e-tests-statefulset-wr4ff deletion completed in 6.121182239s • [SLOW TEST:147.794 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:16:42.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-77wxw.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-77wxw.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > 
/results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-77wxw.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-77wxw.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-77wxw.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-77wxw.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 3 12:16:49.095: INFO: DNS probes using e2e-tests-dns-77wxw/dns-test-f31d02d0-8d37-11ea-b78d-0242ac110017 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:16:49.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-77wxw" for this suite. 
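The probe script quoted above is hard to read in its escaped form, so here is one iteration of it unescaped (the doubled $$ in the pod command exists only to stop kubelet variable expansion; the shell inside the pod sees a single $). A quick interactive spot-check with a throwaway pod is also shown; the busybox image choice there is an assumption.

# One pass of the wheezy/jessie loop, per name and transport:
check="$(dig +notcp +noall +answer +search kubernetes.default A)" \
  && test -n "$check" && echo OK > /results/wheezy_udp@kubernetes.default
check="$(dig +tcp +noall +answer +search kubernetes.default A)" \
  && test -n "$check" && echo OK > /results/wheezy_tcp@kubernetes.default
# ...and likewise for kubernetes.default.svc, kubernetes.default.svc.cluster.local,
# the dns-querier-1 hosts entries, and the pod's own A record.

# Interactive spot-check of cluster DNS from a throwaway pod:
kubectl run dns-check --rm -it --restart=Never --image=busybox -- nslookup kubernetes.default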
May 3 12:16:55.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:16:55.194: INFO: namespace: e2e-tests-dns-77wxw, resource: bindings, ignored listing per whitelist May 3 12:16:55.230: INFO: namespace e2e-tests-dns-77wxw deletion completed in 6.088792024s • [SLOW TEST:12.328 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:16:55.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 3 12:16:55.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-jhwnq' May 3 12:16:55.441: INFO: stderr: "" May 3 12:16:55.441: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 May 3 12:16:55.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-jhwnq' May 3 12:17:01.275: INFO: stderr: "" May 3 12:17:01.275: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:17:01.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-jhwnq" for this suite. 
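Unlike the deployment and job cases earlier, the run-pod/v1 generator used here maps directly onto what kubectl run does today, so the only change on current releases is dropping the flag.

# As executed by the test:
kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never \
  --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine \
  --namespace=e2e-tests-kubectl-jhwnq

# Current kubectl (run only creates pods, no generator flag needed):
kubectl run e2e-test-nginx-pod --restart=Never \
  --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-jhwnq
kubectl delete pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-jhwnq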
May 3 12:17:07.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:17:07.379: INFO: namespace: e2e-tests-kubectl-jhwnq, resource: bindings, ignored listing per whitelist May 3 12:17:07.387: INFO: namespace e2e-tests-kubectl-jhwnq deletion completed in 6.084100596s • [SLOW TEST:12.157 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:17:07.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition May 3 12:17:07.486: INFO: Waiting up to 5m0s for pod "var-expansion-01b7a8da-8d38-11ea-b78d-0242ac110017" in namespace "e2e-tests-var-expansion-pp5x2" to be "success or failure" May 3 12:17:07.490: INFO: Pod "var-expansion-01b7a8da-8d38-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.521488ms May 3 12:17:09.512: INFO: Pod "var-expansion-01b7a8da-8d38-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025918721s May 3 12:17:11.517: INFO: Pod "var-expansion-01b7a8da-8d38-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030571616s STEP: Saw pod success May 3 12:17:11.517: INFO: Pod "var-expansion-01b7a8da-8d38-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 12:17:11.520: INFO: Trying to get logs from node hunter-worker pod var-expansion-01b7a8da-8d38-11ea-b78d-0242ac110017 container dapi-container: STEP: delete the pod May 3 12:17:11.555: INFO: Waiting for pod var-expansion-01b7a8da-8d38-11ea-b78d-0242ac110017 to disappear May 3 12:17:11.584: INFO: Pod var-expansion-01b7a8da-8d38-11ea-b78d-0242ac110017 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:17:11.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-pp5x2" for this suite. 
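The env-composition test above relies on the $(VAR) expansion the kubelet performs on env values and container commands. A minimal standalone pod showing the same behaviour is sketched below; the names and values are illustrative.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $(COMPOSED_VAR)"]   # expanded by the kubelet, not by the shell
    env:
    - name: BASE_VAR
      value: "base-value"
    - name: COMPOSED_VAR
      value: "prefix-$(BASE_VAR)-suffix"            # composes a previously defined env var
EOF
kubectl logs var-expansion-demo   # once the pod completes, prints: prefix-base-value-suffix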
May 3 12:17:17.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:17:17.666: INFO: namespace: e2e-tests-var-expansion-pp5x2, resource: bindings, ignored listing per whitelist May 3 12:17:17.700: INFO: namespace e2e-tests-var-expansion-pp5x2 deletion completed in 6.112937645s • [SLOW TEST:10.313 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:17:17.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 3 12:17:17.827: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-hrgh5,SelfLink:/api/v1/namespaces/e2e-tests-watch-hrgh5/configmaps/e2e-watch-test-resource-version,UID:07debffc-8d38-11ea-99e8-0242ac110002,ResourceVersion:8535722,Generation:0,CreationTimestamp:2020-05-03 12:17:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 3 12:17:17.827: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-hrgh5,SelfLink:/api/v1/namespaces/e2e-tests-watch-hrgh5/configmaps/e2e-watch-test-resource-version,UID:07debffc-8d38-11ea-99e8-0242ac110002,ResourceVersion:8535723,Generation:0,CreationTimestamp:2020-05-03 12:17:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:17:17.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-hrgh5" for this suite. 
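What the watch test above exercises is the ability to start a watch at an older resourceVersion and have the server replay the events that happened after it. A rough out-of-suite reproduction using the raw API is sketched below; the namespace, names, and the assumption that the recorded version is still inside the server's watch window are all illustrative.

kubectl create configmap e2e-watch-demo --from-literal=mutation=0
kubectl patch configmap e2e-watch-demo -p '{"data":{"mutation":"1"}}'
RV=$(kubectl get configmap e2e-watch-demo -o jsonpath='{.metadata.resourceVersion}')
kubectl patch configmap e2e-watch-demo -p '{"data":{"mutation":"2"}}'
kubectl delete configmap e2e-watch-demo
# Starting the watch at $RV replays only what happened afterwards: the second
# MODIFIED event and then the DELETED event (this call blocks; Ctrl-C to stop).
kubectl get --raw "/api/v1/namespaces/default/configmaps?watch=true&fieldSelector=metadata.name%3De2e-watch-demo&resourceVersion=${RV}"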
May 3 12:17:23.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:17:23.890: INFO: namespace: e2e-tests-watch-hrgh5, resource: bindings, ignored listing per whitelist May 3 12:17:23.913: INFO: namespace e2e-tests-watch-hrgh5 deletion completed in 6.081905412s • [SLOW TEST:6.212 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:17:23.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 3 12:17:24.023: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0b944f6b-8d38-11ea-b78d-0242ac110017" in namespace "e2e-tests-projected-ch48g" to be "success or failure" May 3 12:17:24.027: INFO: Pod "downwardapi-volume-0b944f6b-8d38-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328056ms May 3 12:17:26.070: INFO: Pod "downwardapi-volume-0b944f6b-8d38-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047311235s May 3 12:17:28.074: INFO: Pod "downwardapi-volume-0b944f6b-8d38-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051453147s STEP: Saw pod success May 3 12:17:28.074: INFO: Pod "downwardapi-volume-0b944f6b-8d38-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 12:17:28.077: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-0b944f6b-8d38-11ea-b78d-0242ac110017 container client-container: STEP: delete the pod May 3 12:17:28.276: INFO: Waiting for pod downwardapi-volume-0b944f6b-8d38-11ea-b78d-0242ac110017 to disappear May 3 12:17:28.285: INFO: Pod downwardapi-volume-0b944f6b-8d38-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:17:28.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-ch48g" for this suite. 
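The projected downwardAPI case above is about the fallback value: when a container sets no CPU limit, the file backed by resourceFieldRef limits.cpu reports the node's allocatable CPU instead. A minimal sketch follows; the image, names, and the 1m divisor are assumptions.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # No resources.limits.cpu here, so the projected value falls back to node allocatable CPU.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m
EOF
kubectl logs projected-downwardapi-demo   # node allocatable CPU, expressed in millicores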
May 3 12:17:34.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:17:34.340: INFO: namespace: e2e-tests-projected-ch48g, resource: bindings, ignored listing per whitelist May 3 12:17:34.377: INFO: namespace e2e-tests-projected-ch48g deletion completed in 6.088749974s • [SLOW TEST:10.464 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:17:34.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-cv4fc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-cv4fc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-cv4fc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-cv4fc;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-cv4fc.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-cv4fc.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-cv4fc.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-cv4fc.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-cv4fc.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-cv4fc.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-cv4fc.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-cv4fc.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-cv4fc.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 177.12.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.12.177_udp@PTR;check="$$(dig +tcp +noall +answer +search 177.12.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.12.177_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-cv4fc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-cv4fc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-cv4fc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-cv4fc;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-cv4fc.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-cv4fc.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-cv4fc.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-cv4fc.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-cv4fc.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-cv4fc.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-cv4fc.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-cv4fc.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-cv4fc.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 177.12.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.12.177_udp@PTR;check="$$(dig +tcp +noall +answer +search 177.12.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.12.177_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 3 12:17:40.641: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:40.667: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:40.671: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:40.705: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:40.708: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:40.711: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-cv4fc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:40.713: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-cv4fc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:40.715: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:40.726: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:40.729: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:40.731: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:40.745: INFO: Lookups using e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-cv4fc.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service 
jessie_udp@dns-test-service.e2e-tests-dns-cv4fc jessie_tcp@dns-test-service.e2e-tests-dns-cv4fc jessie_udp@dns-test-service.e2e-tests-dns-cv4fc.svc jessie_tcp@dns-test-service.e2e-tests-dns-cv4fc.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc] May 3 12:17:45.750: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:45.765: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:45.768: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:45.792: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:45.795: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:45.798: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-cv4fc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:45.801: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-cv4fc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:45.804: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:45.807: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:45.810: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:45.813: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:45.831: INFO: Lookups using e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-cv4fc.svc 
wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-cv4fc jessie_tcp@dns-test-service.e2e-tests-dns-cv4fc jessie_udp@dns-test-service.e2e-tests-dns-cv4fc.svc jessie_tcp@dns-test-service.e2e-tests-dns-cv4fc.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc] May 3 12:17:50.750: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:50.762: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:50.765: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:50.787: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:50.789: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:50.791: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-cv4fc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:50.794: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-cv4fc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:50.796: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:50.799: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:50.802: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:50.804: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:50.821: INFO: Lookups using e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017 failed for: 
[wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-cv4fc.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-cv4fc jessie_tcp@dns-test-service.e2e-tests-dns-cv4fc jessie_udp@dns-test-service.e2e-tests-dns-cv4fc.svc jessie_tcp@dns-test-service.e2e-tests-dns-cv4fc.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc] May 3 12:17:55.750: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:55.768: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:55.772: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:55.796: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:55.798: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:55.801: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-cv4fc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:55.804: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-cv4fc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:55.807: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:55.810: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:55.812: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:55.815: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:17:55.834: INFO: Lookups using 
e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-cv4fc.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-cv4fc jessie_tcp@dns-test-service.e2e-tests-dns-cv4fc jessie_udp@dns-test-service.e2e-tests-dns-cv4fc.svc jessie_tcp@dns-test-service.e2e-tests-dns-cv4fc.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc] May 3 12:18:00.748: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:18:00.777: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:18:00.780: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:18:00.845: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:18:00.848: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:18:00.851: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-cv4fc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:18:00.854: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-cv4fc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:18:00.857: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:18:00.860: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:18:00.863: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:18:00.867: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods 
dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:18:00.886: INFO: Lookups using e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-cv4fc.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-cv4fc jessie_tcp@dns-test-service.e2e-tests-dns-cv4fc jessie_udp@dns-test-service.e2e-tests-dns-cv4fc.svc jessie_tcp@dns-test-service.e2e-tests-dns-cv4fc.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc] May 3 12:18:05.749: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:18:05.765: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:18:05.767: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:18:05.790: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:18:05.793: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:18:05.795: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-cv4fc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:18:05.798: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-cv4fc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:18:05.802: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:18:05.805: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:18:05.808: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc from pod e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:18:05.810: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc from pod 
e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017: the server could not find the requested resource (get pods dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017) May 3 12:18:05.829: INFO: Lookups using e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-cv4fc.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-cv4fc jessie_tcp@dns-test-service.e2e-tests-dns-cv4fc jessie_udp@dns-test-service.e2e-tests-dns-cv4fc.svc jessie_tcp@dns-test-service.e2e-tests-dns-cv4fc.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-cv4fc.svc] May 3 12:18:10.937: INFO: DNS probes using e2e-tests-dns-cv4fc/dns-test-11d6cc84-8d38-11ea-b78d-0242ac110017 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:18:11.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-cv4fc" for this suite. May 3 12:18:17.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:18:17.419: INFO: namespace: e2e-tests-dns-cv4fc, resource: bindings, ignored listing per whitelist May 3 12:18:17.449: INFO: namespace e2e-tests-dns-cv4fc deletion completed in 6.105239141s • [SLOW TEST:43.072 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:18:17.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:18:24.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-f2zb5" for this suite. 
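The ReplicationController spec above first creates a bare pod labelled name=pod-adoption and then a replication controller whose selector matches that label; instead of creating a fresh replica, the rc manager adopts the existing pod by stamping an ownerReference on it. A sketch of the two objects involved; the names mirror the log, while the nginx image (used elsewhere in this run) is just an illustrative choice.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"name": "pod-adoption"}
	one := int32(1)

	// A bare pod carrying the label, created before any controller exists.
	orphan := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name: "pod-adoption", Image: "docker.io/library/nginx:1.14-alpine",
		}}},
	}

	// A replication controller whose selector matches that label. Once created,
	// its manager adopts the existing pod via an ownerReference rather than
	// starting a new replica, which is the "Then the orphan pod is adopted" step.
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &one,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       orphan.Spec,
			},
		},
	}
	fmt.Printf("selector %v adopts pod %q\n", rc.Spec.Selector, orphan.Name)
}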
May 3 12:18:46.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:18:46.620: INFO: namespace: e2e-tests-replication-controller-f2zb5, resource: bindings, ignored listing per whitelist May 3 12:18:46.671: INFO: namespace e2e-tests-replication-controller-f2zb5 deletion completed in 22.095809755s • [SLOW TEST:29.222 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:18:46.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-3ceca3a1-8d38-11ea-b78d-0242ac110017 STEP: Creating configMap with name cm-test-opt-upd-3ceca410-8d38-11ea-b78d-0242ac110017 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-3ceca3a1-8d38-11ea-b78d-0242ac110017 STEP: Updating configmap cm-test-opt-upd-3ceca410-8d38-11ea-b78d-0242ac110017 STEP: Creating configMap with name cm-test-opt-create-3ceca44c-8d38-11ea-b78d-0242ac110017 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:18:54.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gn522" for this suite. 
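The projected configMap spec above mounts optional configMap sources, deletes one, updates another, and only later creates the third, then waits for the kubelet's periodic sync to reflect each change in the mounted files. A sketch of the volume definition, shown here as a single projected volume with three sources and with the generated suffixes dropped from the configmap names; the real test wires the mounts up with its own layout.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true

	// All three sources are optional, so the pod keeps running when one configmap
	// is deleted, picks up changes when another is updated, and starts serving the
	// third once it is created, even though it did not exist at pod creation time.
	vol := corev1.Volume{
		Name: "projected-configmaps",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
						Optional:             &optional,
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-upd"},
						Optional:             &optional,
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create"},
						Optional:             &optional, // may not exist yet when the pod starts
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}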
May 3 12:19:16.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:19:16.968: INFO: namespace: e2e-tests-projected-gn522, resource: bindings, ignored listing per whitelist May 3 12:19:17.009: INFO: namespace e2e-tests-projected-gn522 deletion completed in 22.093431335s • [SLOW TEST:30.338 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:19:17.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 3 12:19:17.175: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-bfgpf,SelfLink:/api/v1/namespaces/e2e-tests-watch-bfgpf/configmaps/e2e-watch-test-label-changed,UID:4efeebd0-8d38-11ea-99e8-0242ac110002,ResourceVersion:8536115,Generation:0,CreationTimestamp:2020-05-03 12:19:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 3 12:19:17.175: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-bfgpf,SelfLink:/api/v1/namespaces/e2e-tests-watch-bfgpf/configmaps/e2e-watch-test-label-changed,UID:4efeebd0-8d38-11ea-99e8-0242ac110002,ResourceVersion:8536116,Generation:0,CreationTimestamp:2020-05-03 12:19:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 3 12:19:17.175: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-bfgpf,SelfLink:/api/v1/namespaces/e2e-tests-watch-bfgpf/configmaps/e2e-watch-test-label-changed,UID:4efeebd0-8d38-11ea-99e8-0242ac110002,ResourceVersion:8536117,Generation:0,CreationTimestamp:2020-05-03 12:19:17 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 3 12:19:27.324: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-bfgpf,SelfLink:/api/v1/namespaces/e2e-tests-watch-bfgpf/configmaps/e2e-watch-test-label-changed,UID:4efeebd0-8d38-11ea-99e8-0242ac110002,ResourceVersion:8536139,Generation:0,CreationTimestamp:2020-05-03 12:19:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 3 12:19:27.325: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-bfgpf,SelfLink:/api/v1/namespaces/e2e-tests-watch-bfgpf/configmaps/e2e-watch-test-label-changed,UID:4efeebd0-8d38-11ea-99e8-0242ac110002,ResourceVersion:8536140,Generation:0,CreationTimestamp:2020-05-03 12:19:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 3 12:19:27.325: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-bfgpf,SelfLink:/api/v1/namespaces/e2e-tests-watch-bfgpf/configmaps/e2e-watch-test-label-changed,UID:4efeebd0-8d38-11ea-99e8-0242ac110002,ResourceVersion:8536141,Generation:0,CreationTimestamp:2020-05-03 12:19:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:19:27.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-bfgpf" for this suite. 
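The second Watchers spec above watches configmaps by label selector; when the label value is changed away from the selector the object leaves the watch as a DELETED notification, and when the label is restored it re-enters as ADDED, which is exactly the event sequence logged. A minimal sketch of opening such a watch (namespace illustrative, recent context-taking client-go assumed).

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Watch only configmaps carrying the label the test flips back and forth.
	// Leaving the selector is reported as DELETED, re-entering it as ADDED,
	// regardless of whether the object itself was ever deleted.
	w, err := cs.CoreV1().ConfigMaps("demo").Watch(context.Background(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s\n", ev.Type)
	}
}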
May 3 12:19:33.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:19:33.397: INFO: namespace: e2e-tests-watch-bfgpf, resource: bindings, ignored listing per whitelist May 3 12:19:33.430: INFO: namespace e2e-tests-watch-bfgpf deletion completed in 6.099365605s • [SLOW TEST:16.421 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:19:33.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs May 3 12:19:33.530: INFO: Waiting up to 5m0s for pod "pod-58c43d72-8d38-11ea-b78d-0242ac110017" in namespace "e2e-tests-emptydir-kchlm" to be "success or failure" May 3 12:19:33.534: INFO: Pod "pod-58c43d72-8d38-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.999089ms May 3 12:19:35.575: INFO: Pod "pod-58c43d72-8d38-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044656776s May 3 12:19:37.592: INFO: Pod "pod-58c43d72-8d38-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062216819s STEP: Saw pod success May 3 12:19:37.592: INFO: Pod "pod-58c43d72-8d38-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 12:19:37.595: INFO: Trying to get logs from node hunter-worker2 pod pod-58c43d72-8d38-11ea-b78d-0242ac110017 container test-container: STEP: delete the pod May 3 12:19:37.627: INFO: Waiting for pod pod-58c43d72-8d38-11ea-b78d-0242ac110017 to disappear May 3 12:19:37.635: INFO: Pod pod-58c43d72-8d38-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:19:37.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-kchlm" for this suite. 
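The EmptyDir spec above mounts an emptyDir with medium Memory, which the kubelet backs with tmpfs rather than node disk, and has the container report the volume's mount and mode bits. A sketch of the pod shape; the busybox image and shell command stand in for the e2e mounttest image.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Print the mount entry and the permission bits of the volume root,
				// roughly what the test's mounttest container reports.
				Command:      []string{"sh", "-c", "mount | grep /test-volume; stat -c '%a' /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Volumes[0].VolumeSource.EmptyDir)
}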
May 3 12:19:43.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:19:43.707: INFO: namespace: e2e-tests-emptydir-kchlm, resource: bindings, ignored listing per whitelist May 3 12:19:43.734: INFO: namespace e2e-tests-emptydir-kchlm deletion completed in 6.095157924s • [SLOW TEST:10.304 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:19:43.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 3 12:19:43.862: INFO: Waiting up to 5m0s for pod "downward-api-5eee5929-8d38-11ea-b78d-0242ac110017" in namespace "e2e-tests-downward-api-mc2kw" to be "success or failure" May 3 12:19:43.875: INFO: Pod "downward-api-5eee5929-8d38-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 12.987526ms May 3 12:19:45.894: INFO: Pod "downward-api-5eee5929-8d38-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032272465s May 3 12:19:47.899: INFO: Pod "downward-api-5eee5929-8d38-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037065774s STEP: Saw pod success May 3 12:19:47.899: INFO: Pod "downward-api-5eee5929-8d38-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 12:19:47.902: INFO: Trying to get logs from node hunter-worker pod downward-api-5eee5929-8d38-11ea-b78d-0242ac110017 container dapi-container: STEP: delete the pod May 3 12:19:47.939: INFO: Waiting for pod downward-api-5eee5929-8d38-11ea-b78d-0242ac110017 to disappear May 3 12:19:47.948: INFO: Pod downward-api-5eee5929-8d38-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:19:47.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-mc2kw" for this suite. 
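The Downward API spec above injects the node IP into the container as an environment variable resolved from fieldRef status.hostIP. A sketch of the relevant container definition; the image and command are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-hostip-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo HOST_IP=$HOST_IP"},
				Env: []corev1.EnvVar{{
					Name: "HOST_IP",
					// The kubelet resolves this fieldRef to the IP of the node the
					// pod landed on and injects it as a plain environment variable.
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
					},
				}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Containers[0].Env)
}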
May 3 12:19:54.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:19:54.058: INFO: namespace: e2e-tests-downward-api-mc2kw, resource: bindings, ignored listing per whitelist May 3 12:19:54.087: INFO: namespace e2e-tests-downward-api-mc2kw deletion completed in 6.135835851s • [SLOW TEST:10.353 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:19:54.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-s4mk4 STEP: creating a selector STEP: Creating the service pods in kubernetes May 3 12:19:54.204: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 3 12:20:20.380: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.141:8080/dial?request=hostName&protocol=udp&host=10.244.1.140&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-s4mk4 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 3 12:20:20.380: INFO: >>> kubeConfig: /root/.kube/config I0503 12:20:20.419715 6 log.go:172] (0xc0000eafd0) (0xc00189e1e0) Create stream I0503 12:20:20.419749 6 log.go:172] (0xc0000eafd0) (0xc00189e1e0) Stream added, broadcasting: 1 I0503 12:20:20.422117 6 log.go:172] (0xc0000eafd0) Reply frame received for 1 I0503 12:20:20.422171 6 log.go:172] (0xc0000eafd0) (0xc001ac1400) Create stream I0503 12:20:20.422192 6 log.go:172] (0xc0000eafd0) (0xc001ac1400) Stream added, broadcasting: 3 I0503 12:20:20.423279 6 log.go:172] (0xc0000eafd0) Reply frame received for 3 I0503 12:20:20.423349 6 log.go:172] (0xc0000eafd0) (0xc001a39400) Create stream I0503 12:20:20.423383 6 log.go:172] (0xc0000eafd0) (0xc001a39400) Stream added, broadcasting: 5 I0503 12:20:20.424267 6 log.go:172] (0xc0000eafd0) Reply frame received for 5 I0503 12:20:20.489624 6 log.go:172] (0xc0000eafd0) Data frame received for 3 I0503 12:20:20.489690 6 log.go:172] (0xc001ac1400) (3) Data frame handling I0503 12:20:20.489717 6 log.go:172] (0xc001ac1400) (3) Data frame sent I0503 12:20:20.489732 6 log.go:172] (0xc0000eafd0) Data frame received for 3 I0503 12:20:20.489747 6 log.go:172] (0xc001ac1400) (3) Data frame handling I0503 12:20:20.489974 6 log.go:172] (0xc0000eafd0) Data frame received for 5 I0503 12:20:20.490013 6 log.go:172] (0xc001a39400) (5) Data frame handling I0503 12:20:20.492236 6 log.go:172] (0xc0000eafd0) Data frame received for 1 
I0503 12:20:20.492284 6 log.go:172] (0xc00189e1e0) (1) Data frame handling I0503 12:20:20.492316 6 log.go:172] (0xc00189e1e0) (1) Data frame sent I0503 12:20:20.492344 6 log.go:172] (0xc0000eafd0) (0xc00189e1e0) Stream removed, broadcasting: 1 I0503 12:20:20.492364 6 log.go:172] (0xc0000eafd0) Go away received I0503 12:20:20.492499 6 log.go:172] (0xc0000eafd0) (0xc00189e1e0) Stream removed, broadcasting: 1 I0503 12:20:20.492530 6 log.go:172] (0xc0000eafd0) (0xc001ac1400) Stream removed, broadcasting: 3 I0503 12:20:20.492549 6 log.go:172] (0xc0000eafd0) (0xc001a39400) Stream removed, broadcasting: 5 May 3 12:20:20.492: INFO: Waiting for endpoints: map[] May 3 12:20:20.496: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.141:8080/dial?request=hostName&protocol=udp&host=10.244.2.127&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-s4mk4 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 3 12:20:20.496: INFO: >>> kubeConfig: /root/.kube/config I0503 12:20:20.533510 6 log.go:172] (0xc000da04d0) (0xc00187fae0) Create stream I0503 12:20:20.533539 6 log.go:172] (0xc000da04d0) (0xc00187fae0) Stream added, broadcasting: 1 I0503 12:20:20.535555 6 log.go:172] (0xc000da04d0) Reply frame received for 1 I0503 12:20:20.535601 6 log.go:172] (0xc000da04d0) (0xc00187fb80) Create stream I0503 12:20:20.535631 6 log.go:172] (0xc000da04d0) (0xc00187fb80) Stream added, broadcasting: 3 I0503 12:20:20.536536 6 log.go:172] (0xc000da04d0) Reply frame received for 3 I0503 12:20:20.536583 6 log.go:172] (0xc000da04d0) (0xc00067ac80) Create stream I0503 12:20:20.536598 6 log.go:172] (0xc000da04d0) (0xc00067ac80) Stream added, broadcasting: 5 I0503 12:20:20.537754 6 log.go:172] (0xc000da04d0) Reply frame received for 5 I0503 12:20:20.610721 6 log.go:172] (0xc000da04d0) Data frame received for 3 I0503 12:20:20.610750 6 log.go:172] (0xc00187fb80) (3) Data frame handling I0503 12:20:20.610776 6 log.go:172] (0xc00187fb80) (3) Data frame sent I0503 12:20:20.611208 6 log.go:172] (0xc000da04d0) Data frame received for 5 I0503 12:20:20.611231 6 log.go:172] (0xc00067ac80) (5) Data frame handling I0503 12:20:20.611264 6 log.go:172] (0xc000da04d0) Data frame received for 3 I0503 12:20:20.611290 6 log.go:172] (0xc00187fb80) (3) Data frame handling I0503 12:20:20.612940 6 log.go:172] (0xc000da04d0) Data frame received for 1 I0503 12:20:20.612991 6 log.go:172] (0xc00187fae0) (1) Data frame handling I0503 12:20:20.613038 6 log.go:172] (0xc00187fae0) (1) Data frame sent I0503 12:20:20.613087 6 log.go:172] (0xc000da04d0) (0xc00187fae0) Stream removed, broadcasting: 1 I0503 12:20:20.613366 6 log.go:172] (0xc000da04d0) Go away received I0503 12:20:20.613434 6 log.go:172] (0xc000da04d0) (0xc00187fae0) Stream removed, broadcasting: 1 I0503 12:20:20.613483 6 log.go:172] (0xc000da04d0) (0xc00187fb80) Stream removed, broadcasting: 3 I0503 12:20:20.613518 6 log.go:172] (0xc000da04d0) (0xc00067ac80) Stream removed, broadcasting: 5 May 3 12:20:20.613: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:20:20.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-s4mk4" for this suite. 
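The networking spec above probes intra-pod UDP reachability by curling the /dial endpoint on the host-test-container pod, which (per the request parameters in the ExecWithOptions commands logged) dials the target test pod over UDP and reports what answered. A rough Go equivalent of that probe request; the pod IPs are placeholders copied from this run, and the description of what the endpoint returns is an assumption based on the logged URL.

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// prober: the host-test-container pod; host: the target test pod (placeholders).
	prober := "10.244.1.141:8080"
	q := url.Values{}
	q.Set("request", "hostName")
	q.Set("protocol", "udp")
	q.Set("host", "10.244.1.140")
	q.Set("port", "8081")
	q.Set("tries", "1")

	// Same request the test issues with curl -g -q -s; the agent on the prober
	// pod performs the UDP dial and the body echoes back what it heard.
	u := fmt.Sprintf("http://%s/dial?%s", prober, q.Encode())
	resp, err := http.Get(u)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}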
May 3 12:20:42.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:20:42.661: INFO: namespace: e2e-tests-pod-network-test-s4mk4, resource: bindings, ignored listing per whitelist May 3 12:20:42.733: INFO: namespace e2e-tests-pod-network-test-s4mk4 deletion completed in 22.115630595s • [SLOW TEST:48.646 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:20:42.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-4hv8k [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StaefulSet May 3 12:20:42.882: INFO: Found 0 stateful pods, waiting for 3 May 3 12:20:52.887: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 3 12:20:52.887: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 3 12:20:52.887: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false May 3 12:21:02.888: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 3 12:21:02.888: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 3 12:21:02.888: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 3 12:21:02.916: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 3 12:21:12.968: INFO: Updating stateful set ss2 May 3 12:21:12.977: INFO: Waiting for Pod e2e-tests-statefulset-4hv8k/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted May 3 12:21:23.091: INFO: Found 2 stateful pods, waiting for 3 May 3 12:21:33.096: INFO: Waiting for pod ss2-0 to enter Running - 
Ready=true, currently Running - Ready=true May 3 12:21:33.096: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 3 12:21:33.096: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 3 12:21:33.122: INFO: Updating stateful set ss2 May 3 12:21:33.167: INFO: Waiting for Pod e2e-tests-statefulset-4hv8k/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 3 12:21:43.193: INFO: Updating stateful set ss2 May 3 12:21:43.199: INFO: Waiting for StatefulSet e2e-tests-statefulset-4hv8k/ss2 to complete update May 3 12:21:43.199: INFO: Waiting for Pod e2e-tests-statefulset-4hv8k/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 3 12:21:53.208: INFO: Deleting all statefulset in ns e2e-tests-statefulset-4hv8k May 3 12:21:53.211: INFO: Scaling statefulset ss2 to 0 May 3 12:22:13.231: INFO: Waiting for statefulset status.replicas updated to 0 May 3 12:22:13.234: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:22:13.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-4hv8k" for this suite. May 3 12:22:19.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:22:19.335: INFO: namespace: e2e-tests-statefulset-4hv8k, resource: bindings, ignored listing per whitelist May 3 12:22:19.364: INFO: namespace e2e-tests-statefulset-4hv8k deletion completed in 6.094426942s • [SLOW TEST:96.630 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:22:19.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-bbb19195-8d38-11ea-b78d-0242ac110017 STEP: Creating secret with name s-test-opt-upd-bbb19212-8d38-11ea-b78d-0242ac110017 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-bbb19195-8d38-11ea-b78d-0242ac110017 STEP: Updating secret s-test-opt-upd-bbb19212-8d38-11ea-b78d-0242ac110017 STEP: Creating secret with name 
s-test-opt-create-bbb19241-8d38-11ea-b78d-0242ac110017 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:22:27.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-jcdmr" for this suite. May 3 12:22:49.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:22:49.679: INFO: namespace: e2e-tests-secrets-jcdmr, resource: bindings, ignored listing per whitelist May 3 12:22:49.737: INFO: namespace e2e-tests-secrets-jcdmr deletion completed in 22.093216584s • [SLOW TEST:30.373 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:22:49.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 3 12:22:56.751: INFO: 10 pods remaining May 3 12:22:56.751: INFO: 10 pods has nil DeletionTimestamp May 3 12:22:56.751: INFO: May 3 12:22:58.035: INFO: 9 pods remaining May 3 12:22:58.035: INFO: 0 pods has nil DeletionTimestamp May 3 12:22:58.035: INFO: STEP: Gathering metrics W0503 12:22:59.575325 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
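The "delete the rc" / "wait for the rc to be deleted" steps above hinge on the deleteOptions propagation policy: with foreground deletion the ReplicationController object is kept until the garbage collector has removed all of its pods. A minimal client-go sketch of that call follows; it is not the test's own code, the function name is invented, and the context-taking Delete signature assumes a recent client-go release.

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCForeground deletes a ReplicationController with foreground
// propagation, so the RC is kept around until the garbage collector has
// deleted all of its dependent pods.
func deleteRCForeground(cs kubernetes.Interface, namespace, name string) error {
	policy := metav1.DeletePropagationForeground
	return cs.CoreV1().ReplicationControllers(namespace).Delete(
		context.TODO(),
		name,
		metav1.DeleteOptions{PropagationPolicy: &policy},
	)
}
```

With PropagationPolicy set to Orphan instead, the RC would disappear immediately and its pods would be left behind, which is the contrast this conformance test is probing.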
May 3 12:22:59.575: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:22:59.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-2ctvr" for this suite. May 3 12:23:05.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:23:05.622: INFO: namespace: e2e-tests-gc-2ctvr, resource: bindings, ignored listing per whitelist May 3 12:23:05.678: INFO: namespace e2e-tests-gc-2ctvr deletion completed in 6.095663788s • [SLOW TEST:15.941 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:23:05.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 3 12:23:05.805: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 3 12:23:05.818: INFO: Waiting for terminating namespaces to be deleted... 
May 3 12:23:05.821: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 3 12:23:05.828: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 3 12:23:05.828: INFO: Container kube-proxy ready: true, restart count 0 May 3 12:23:05.828: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 3 12:23:05.828: INFO: Container kindnet-cni ready: true, restart count 0 May 3 12:23:05.828: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 3 12:23:05.828: INFO: Container coredns ready: true, restart count 0 May 3 12:23:05.828: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 3 12:23:05.833: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 3 12:23:05.833: INFO: Container kindnet-cni ready: true, restart count 0 May 3 12:23:05.833: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 3 12:23:05.833: INFO: Container coredns ready: true, restart count 0 May 3 12:23:05.833: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 3 12:23:05.833: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-worker STEP: verifying the node has the label node hunter-worker2 May 3 12:23:05.951: INFO: Pod coredns-54ff9cd656-4h7lb requesting resource cpu=100m on Node hunter-worker May 3 12:23:05.951: INFO: Pod coredns-54ff9cd656-8vrkk requesting resource cpu=100m on Node hunter-worker2 May 3 12:23:05.951: INFO: Pod kindnet-54h7m requesting resource cpu=100m on Node hunter-worker May 3 12:23:05.951: INFO: Pod kindnet-mtqrs requesting resource cpu=100m on Node hunter-worker2 May 3 12:23:05.951: INFO: Pod kube-proxy-s52ll requesting resource cpu=0m on Node hunter-worker2 May 3 12:23:05.951: INFO: Pod kube-proxy-szbng requesting resource cpu=0m on Node hunter-worker STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. 
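The filler pods and the "additional pod" above differ only in how much CPU they request; the scheduler sums container requests against each node's allocatable CPU and rejects the pod that no longer fits. A minimal sketch of such a pod follows (the function name and the exact quantity are illustrative, not taken from the test; the pause image matches the events below).

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fillerPod returns a pause pod that requests a fixed amount of CPU,
// e.g. "500m". Once the sum of such requests exhausts a node's
// allocatable CPU, further pods fail with "Insufficient cpu".
func fillerPod(name, cpu string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse(cpu),
					},
				},
			}},
		},
	}
}
```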
STEP: Considering event: Type = [Normal], Name = [filler-pod-d763460b-8d38-11ea-b78d-0242ac110017.160b84248521ecc4], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-nmgzb/filler-pod-d763460b-8d38-11ea-b78d-0242ac110017 to hunter-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-d763460b-8d38-11ea-b78d-0242ac110017.160b8424d1b6affc], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-d763460b-8d38-11ea-b78d-0242ac110017.160b84251dca93cc], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-d763460b-8d38-11ea-b78d-0242ac110017.160b84253236bfe9], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Normal], Name = [filler-pod-d7643cc2-8d38-11ea-b78d-0242ac110017.160b842489e92fac], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-nmgzb/filler-pod-d7643cc2-8d38-11ea-b78d-0242ac110017 to hunter-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-d7643cc2-8d38-11ea-b78d-0242ac110017.160b84250c12a1d1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-d7643cc2-8d38-11ea-b78d-0242ac110017.160b84254306b2de], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-d7643cc2-8d38-11ea-b78d-0242ac110017.160b842553872a17], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.160b8425f0aa0f12], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node hunter-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node hunter-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:23:13.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-nmgzb" for this suite. 
May 3 12:23:21.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:23:21.225: INFO: namespace: e2e-tests-sched-pred-nmgzb, resource: bindings, ignored listing per whitelist May 3 12:23:21.263: INFO: namespace e2e-tests-sched-pred-nmgzb deletion completed in 8.088019707s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:15.585 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:23:21.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args May 3 12:23:21.356: INFO: Waiting up to 5m0s for pod "var-expansion-e09086ad-8d38-11ea-b78d-0242ac110017" in namespace "e2e-tests-var-expansion-ln9gf" to be "success or failure" May 3 12:23:21.410: INFO: Pod "var-expansion-e09086ad-8d38-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 53.931191ms May 3 12:23:23.414: INFO: Pod "var-expansion-e09086ad-8d38-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057675886s May 3 12:23:25.418: INFO: Pod "var-expansion-e09086ad-8d38-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062051788s STEP: Saw pod success May 3 12:23:25.418: INFO: Pod "var-expansion-e09086ad-8d38-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 12:23:25.421: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-e09086ad-8d38-11ea-b78d-0242ac110017 container dapi-container: STEP: delete the pod May 3 12:23:25.462: INFO: Waiting for pod var-expansion-e09086ad-8d38-11ea-b78d-0242ac110017 to disappear May 3 12:23:25.474: INFO: Pod var-expansion-e09086ad-8d38-11ea-b78d-0242ac110017 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:23:25.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-ln9gf" for this suite. 
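The var-expansion test above relies on Kubernetes' $(VAR) substitution: the kubelet expands environment variable references in a container's command and args before the process starts, with no shell involved. A minimal sketch, with pod name, image and message chosen for illustration only:

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// varExpansionPod demonstrates $(VAR) substitution in container args:
// the kubelet replaces $(MESSAGE) with the value from the container's
// env before invoking /bin/echo.
func varExpansionPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"/bin/echo"},
				Args:    []string{"$(MESSAGE)"},
				Env: []corev1.EnvVar{{
					Name:  "MESSAGE",
					Value: "test-message",
				}},
			}},
		},
	}
}
```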
May 3 12:23:31.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:23:31.576: INFO: namespace: e2e-tests-var-expansion-ln9gf, resource: bindings, ignored listing per whitelist May 3 12:23:31.626: INFO: namespace e2e-tests-var-expansion-ln9gf deletion completed in 6.149468995s • [SLOW TEST:10.363 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:23:31.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 3 12:23:31.734: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e6bf4297-8d38-11ea-b78d-0242ac110017" in namespace "e2e-tests-downward-api-6v2z8" to be "success or failure" May 3 12:23:31.744: INFO: Pod "downwardapi-volume-e6bf4297-8d38-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 9.228101ms May 3 12:23:33.818: INFO: Pod "downwardapi-volume-e6bf4297-8d38-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083090271s May 3 12:23:35.821: INFO: Pod "downwardapi-volume-e6bf4297-8d38-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.087047503s STEP: Saw pod success May 3 12:23:35.822: INFO: Pod "downwardapi-volume-e6bf4297-8d38-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 12:23:35.824: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-e6bf4297-8d38-11ea-b78d-0242ac110017 container client-container: STEP: delete the pod May 3 12:23:35.859: INFO: Waiting for pod downwardapi-volume-e6bf4297-8d38-11ea-b78d-0242ac110017 to disappear May 3 12:23:35.863: INFO: Pod downwardapi-volume-e6bf4297-8d38-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:23:35.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-6v2z8" for this suite. 
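The downward API volume test above publishes the container's CPU limit as a file; because the container sets no limit, the published value falls back to the node's allocatable CPU. A minimal sketch of the volume wiring (function name, mount path and image are assumptions for illustration):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPICPULimitPod mounts a downward API volume exposing the
// container's CPU limit at /etc/podinfo/cpu_limit. With no limit set on
// the container, the value defaults to node allocatable CPU.
func downwardAPICPULimitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
		},
	}
}
```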
May 3 12:23:41.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:23:41.958: INFO: namespace: e2e-tests-downward-api-6v2z8, resource: bindings, ignored listing per whitelist May 3 12:23:41.959: INFO: namespace e2e-tests-downward-api-6v2z8 deletion completed in 6.091818649s • [SLOW TEST:10.332 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:23:41.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc May 3 12:23:42.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xkqlb' May 3 12:23:44.662: INFO: stderr: "" May 3 12:23:44.662: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. May 3 12:23:45.667: INFO: Selector matched 1 pods for map[app:redis] May 3 12:23:45.667: INFO: Found 0 / 1 May 3 12:23:46.812: INFO: Selector matched 1 pods for map[app:redis] May 3 12:23:46.812: INFO: Found 0 / 1 May 3 12:23:47.674: INFO: Selector matched 1 pods for map[app:redis] May 3 12:23:47.674: INFO: Found 0 / 1 May 3 12:23:48.667: INFO: Selector matched 1 pods for map[app:redis] May 3 12:23:48.667: INFO: Found 1 / 1 May 3 12:23:48.667: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 3 12:23:48.670: INFO: Selector matched 1 pods for map[app:redis] May 3 12:23:48.671: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings May 3 12:23:48.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-p9csc redis-master --namespace=e2e-tests-kubectl-xkqlb' May 3 12:23:48.784: INFO: stderr: "" May 3 12:23:48.784: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 03 May 12:23:47.819 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 03 May 12:23:47.819 # Server started, Redis version 3.2.12\n1:M 03 May 12:23:47.819 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 03 May 12:23:47.819 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines May 3 12:23:48.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-p9csc redis-master --namespace=e2e-tests-kubectl-xkqlb --tail=1' May 3 12:23:48.908: INFO: stderr: "" May 3 12:23:48.908: INFO: stdout: "1:M 03 May 12:23:47.819 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes May 3 12:23:48.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-p9csc redis-master --namespace=e2e-tests-kubectl-xkqlb --limit-bytes=1' May 3 12:23:49.025: INFO: stderr: "" May 3 12:23:49.025: INFO: stdout: " " STEP: exposing timestamps May 3 12:23:49.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-p9csc redis-master --namespace=e2e-tests-kubectl-xkqlb --tail=1 --timestamps' May 3 12:23:49.131: INFO: stderr: "" May 3 12:23:49.131: INFO: stdout: "2020-05-03T12:23:47.82194035Z 1:M 03 May 12:23:47.819 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range May 3 12:23:51.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-p9csc redis-master --namespace=e2e-tests-kubectl-xkqlb --since=1s' May 3 12:23:51.743: INFO: stderr: "" May 3 12:23:51.743: INFO: stdout: "" May 3 12:23:51.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-p9csc redis-master --namespace=e2e-tests-kubectl-xkqlb --since=24h' May 3 12:23:51.866: INFO: stderr: "" May 3 12:23:51.866: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 03 May 12:23:47.819 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 03 May 12:23:47.819 # Server started, Redis version 3.2.12\n1:M 03 May 12:23:47.819 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. 
This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 03 May 12:23:47.819 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources May 3 12:23:51.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-xkqlb' May 3 12:23:51.987: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 3 12:23:51.987: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" May 3 12:23:51.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-xkqlb' May 3 12:23:52.086: INFO: stderr: "No resources found.\n" May 3 12:23:52.086: INFO: stdout: "" May 3 12:23:52.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-xkqlb -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 3 12:23:52.178: INFO: stderr: "" May 3 12:23:52.178: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:23:52.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-xkqlb" for this suite. 
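The kubectl invocations above filter the redis-master logs with --tail, --limit-bytes, --timestamps and --since; the same knobs exist on the API as PodLogOptions. A client-go sketch follows (the helper name is invented and the context-taking Stream signature assumes a recent client-go release):

```go
package sketch

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// tailPodLogs streams the last `tail` lines of a container's log with
// timestamps, mirroring `kubectl logs --tail=N --timestamps`.
func tailPodLogs(cs kubernetes.Interface, namespace, pod, container string, tail int64) error {
	opts := &corev1.PodLogOptions{
		Container:  container,
		TailLines:  &tail, // --tail
		Timestamps: true,  // --timestamps
	}
	stream, err := cs.CoreV1().Pods(namespace).GetLogs(pod, opts).Stream(context.TODO())
	if err != nil {
		return err
	}
	defer stream.Close()
	_, err = io.Copy(os.Stdout, stream)
	return err
}
```

LimitBytes and SinceSeconds on the same options struct correspond to --limit-bytes and --since in the commands above.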
May 3 12:24:14.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:24:14.321: INFO: namespace: e2e-tests-kubectl-xkqlb, resource: bindings, ignored listing per whitelist May 3 12:24:14.329: INFO: namespace e2e-tests-kubectl-xkqlb deletion completed in 22.147799978s • [SLOW TEST:32.370 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:24:14.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 3 12:24:14.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 3 12:24:14.562: INFO: stderr: "" May 3 12:24:14.562: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:24:14.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-nms8x" for this suite. 
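`kubectl version` above prints the client's build info plus the version the API server reports; the server half comes from the discovery endpoint. A small client-go sketch (helper name and output format are assumptions):

```go
package sketch

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// printServerVersion fetches the API server's version information, the
// same data `kubectl version` reports as "Server Version".
func printServerVersion(cs kubernetes.Interface) error {
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		return err
	}
	fmt.Printf("Server Version: %s (built %s, %s)\n", v.GitVersion, v.BuildDate, v.Platform)
	return nil
}
```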
May 3 12:24:20.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:24:20.638: INFO: namespace: e2e-tests-kubectl-nms8x, resource: bindings, ignored listing per whitelist May 3 12:24:20.666: INFO: namespace e2e-tests-kubectl-nms8x deletion completed in 6.099382886s • [SLOW TEST:6.337 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:24:20.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-03ffc7f5-8d39-11ea-b78d-0242ac110017 STEP: Creating a pod to test consume secrets May 3 12:24:20.807: INFO: Waiting up to 5m0s for pod "pod-secrets-04005637-8d39-11ea-b78d-0242ac110017" in namespace "e2e-tests-secrets-stkrx" to be "success or failure" May 3 12:24:20.811: INFO: Pod "pod-secrets-04005637-8d39-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.672696ms May 3 12:24:22.815: INFO: Pod "pod-secrets-04005637-8d39-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008193707s May 3 12:24:24.819: INFO: Pod "pod-secrets-04005637-8d39-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011474675s STEP: Saw pod success May 3 12:24:24.819: INFO: Pod "pod-secrets-04005637-8d39-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 12:24:24.821: INFO: Trying to get logs from node hunter-worker pod pod-secrets-04005637-8d39-11ea-b78d-0242ac110017 container secret-volume-test: STEP: delete the pod May 3 12:24:24.858: INFO: Waiting for pod pod-secrets-04005637-8d39-11ea-b78d-0242ac110017 to disappear May 3 12:24:24.870: INFO: Pod pod-secrets-04005637-8d39-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:24:24.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-stkrx" for this suite. 
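"Consumable from pods in volume with mappings" means the secret's keys are remapped to custom file paths via the volume's items list rather than using the key names directly. A minimal sketch, with secret name, key and paths chosen for illustration:

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretWithMappingPod mounts secret key "data-1" at the remapped path
// /etc/secret-volume/new-path-data-1 instead of the default file name.
func secretWithMappingPod(secretName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-mapping-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: secretName,
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "new-path-data-1",
						}},
					},
				},
			}},
		},
	}
}
```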
May 3 12:24:30.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:24:30.977: INFO: namespace: e2e-tests-secrets-stkrx, resource: bindings, ignored listing per whitelist May 3 12:24:30.983: INFO: namespace e2e-tests-secrets-stkrx deletion completed in 6.092100305s • [SLOW TEST:10.316 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:24:30.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller May 3 12:24:31.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rm45w' May 3 12:24:31.354: INFO: stderr: "" May 3 12:24:31.354: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 3 12:24:31.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rm45w' May 3 12:24:31.484: INFO: stderr: "" May 3 12:24:31.484: INFO: stdout: "update-demo-nautilus-6pjj4 update-demo-nautilus-8vrnw " May 3 12:24:31.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6pjj4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rm45w' May 3 12:24:31.595: INFO: stderr: "" May 3 12:24:31.595: INFO: stdout: "" May 3 12:24:31.595: INFO: update-demo-nautilus-6pjj4 is created but not running May 3 12:24:36.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rm45w' May 3 12:24:36.707: INFO: stderr: "" May 3 12:24:36.707: INFO: stdout: "update-demo-nautilus-6pjj4 update-demo-nautilus-8vrnw " May 3 12:24:36.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6pjj4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rm45w' May 3 12:24:36.807: INFO: stderr: "" May 3 12:24:36.807: INFO: stdout: "true" May 3 12:24:36.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6pjj4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rm45w' May 3 12:24:36.912: INFO: stderr: "" May 3 12:24:36.912: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 3 12:24:36.912: INFO: validating pod update-demo-nautilus-6pjj4 May 3 12:24:36.916: INFO: got data: { "image": "nautilus.jpg" } May 3 12:24:36.916: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 3 12:24:36.916: INFO: update-demo-nautilus-6pjj4 is verified up and running May 3 12:24:36.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8vrnw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rm45w' May 3 12:24:37.025: INFO: stderr: "" May 3 12:24:37.025: INFO: stdout: "true" May 3 12:24:37.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8vrnw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rm45w' May 3 12:24:37.116: INFO: stderr: "" May 3 12:24:37.116: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 3 12:24:37.116: INFO: validating pod update-demo-nautilus-8vrnw May 3 12:24:37.119: INFO: got data: { "image": "nautilus.jpg" } May 3 12:24:37.119: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 3 12:24:37.119: INFO: update-demo-nautilus-8vrnw is verified up and running STEP: scaling down the replication controller May 3 12:24:37.122: INFO: scanned /root for discovery docs: May 3 12:24:37.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-rm45w' May 3 12:24:38.361: INFO: stderr: "" May 3 12:24:38.361: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 3 12:24:38.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rm45w' May 3 12:24:38.521: INFO: stderr: "" May 3 12:24:38.521: INFO: stdout: "update-demo-nautilus-6pjj4 update-demo-nautilus-8vrnw " STEP: Replicas for name=update-demo: expected=1 actual=2 May 3 12:24:43.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rm45w' May 3 12:24:43.633: INFO: stderr: "" May 3 12:24:43.633: INFO: stdout: "update-demo-nautilus-8vrnw " May 3 12:24:43.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8vrnw -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rm45w' May 3 12:24:43.735: INFO: stderr: "" May 3 12:24:43.735: INFO: stdout: "true" May 3 12:24:43.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8vrnw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rm45w' May 3 12:24:43.854: INFO: stderr: "" May 3 12:24:43.854: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 3 12:24:43.854: INFO: validating pod update-demo-nautilus-8vrnw May 3 12:24:43.858: INFO: got data: { "image": "nautilus.jpg" } May 3 12:24:43.858: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 3 12:24:43.858: INFO: update-demo-nautilus-8vrnw is verified up and running STEP: scaling up the replication controller May 3 12:24:43.860: INFO: scanned /root for discovery docs: May 3 12:24:43.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-rm45w' May 3 12:24:45.024: INFO: stderr: "" May 3 12:24:45.024: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 3 12:24:45.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rm45w' May 3 12:24:45.127: INFO: stderr: "" May 3 12:24:45.127: INFO: stdout: "update-demo-nautilus-8kkv8 update-demo-nautilus-8vrnw " May 3 12:24:45.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8kkv8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rm45w' May 3 12:24:45.230: INFO: stderr: "" May 3 12:24:45.230: INFO: stdout: "" May 3 12:24:45.230: INFO: update-demo-nautilus-8kkv8 is created but not running May 3 12:24:50.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rm45w' May 3 12:24:50.348: INFO: stderr: "" May 3 12:24:50.348: INFO: stdout: "update-demo-nautilus-8kkv8 update-demo-nautilus-8vrnw " May 3 12:24:50.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8kkv8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rm45w' May 3 12:24:50.444: INFO: stderr: "" May 3 12:24:50.444: INFO: stdout: "true" May 3 12:24:50.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8kkv8 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rm45w' May 3 12:24:50.539: INFO: stderr: "" May 3 12:24:50.539: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 3 12:24:50.539: INFO: validating pod update-demo-nautilus-8kkv8 May 3 12:24:50.543: INFO: got data: { "image": "nautilus.jpg" } May 3 12:24:50.543: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 3 12:24:50.543: INFO: update-demo-nautilus-8kkv8 is verified up and running May 3 12:24:50.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8vrnw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rm45w' May 3 12:24:50.642: INFO: stderr: "" May 3 12:24:50.642: INFO: stdout: "true" May 3 12:24:50.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8vrnw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rm45w' May 3 12:24:50.739: INFO: stderr: "" May 3 12:24:50.739: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 3 12:24:50.739: INFO: validating pod update-demo-nautilus-8vrnw May 3 12:24:50.742: INFO: got data: { "image": "nautilus.jpg" } May 3 12:24:50.742: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 3 12:24:50.742: INFO: update-demo-nautilus-8vrnw is verified up and running STEP: using delete to clean up resources May 3 12:24:50.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rm45w' May 3 12:24:50.851: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 3 12:24:50.852: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 3 12:24:50.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-rm45w' May 3 12:24:50.951: INFO: stderr: "No resources found.\n" May 3 12:24:50.951: INFO: stdout: "" May 3 12:24:50.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-rm45w -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 3 12:24:51.247: INFO: stderr: "" May 3 12:24:51.247: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:24:51.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-rm45w" for this suite. 
May 3 12:25:13.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:25:13.269: INFO: namespace: e2e-tests-kubectl-rm45w, resource: bindings, ignored listing per whitelist May 3 12:25:13.339: INFO: namespace e2e-tests-kubectl-rm45w deletion completed in 22.08773659s • [SLOW TEST:42.356 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:25:13.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium May 3 12:25:13.449: INFO: Waiting up to 5m0s for pod "pod-236077d2-8d39-11ea-b78d-0242ac110017" in namespace "e2e-tests-emptydir-chjj4" to be "success or failure" May 3 12:25:13.469: INFO: Pod "pod-236077d2-8d39-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 19.975082ms May 3 12:25:15.473: INFO: Pod "pod-236077d2-8d39-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024160704s May 3 12:25:17.476: INFO: Pod "pod-236077d2-8d39-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027160784s STEP: Saw pod success May 3 12:25:17.476: INFO: Pod "pod-236077d2-8d39-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 12:25:17.478: INFO: Trying to get logs from node hunter-worker pod pod-236077d2-8d39-11ea-b78d-0242ac110017 container test-container: STEP: delete the pod May 3 12:25:17.709: INFO: Waiting for pod pod-236077d2-8d39-11ea-b78d-0242ac110017 to disappear May 3 12:25:17.723: INFO: Pod pod-236077d2-8d39-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:25:17.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-chjj4" for this suite. 
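The "(root,0644,default)" in the test name encodes the scenario: the file is written as root, with mode 0644, into an emptyDir backed by the default (node-disk) medium rather than memory. A minimal emptyDir sketch, with pod name, image and paths chosen for illustration:

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirPod mounts a default-medium (node disk) emptyDir volume, writes
// a file into it as root, and prints the file's mode for verification.
func emptyDirPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo hello > /cache/file && chmod 0644 /cache/file && stat -c %a /cache/file"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "cache",
					MountPath: "/cache",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cache",
				VolumeSource: corev1.VolumeSource{
					// An empty EmptyDirVolumeSource means the "default" medium,
					// i.e. node storage rather than Memory (tmpfs).
					EmptyDir: &corev1.EmptyDirVolumeSource{},
				},
			}},
		},
	}
}
```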
May 3 12:25:23.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:25:23.765: INFO: namespace: e2e-tests-emptydir-chjj4, resource: bindings, ignored listing per whitelist May 3 12:25:23.815: INFO: namespace e2e-tests-emptydir-chjj4 deletion completed in 6.089376039s • [SLOW TEST:10.476 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:25:23.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 3 12:25:23.957: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 12:25:23.985: INFO: Number of nodes with available pods: 0 May 3 12:25:23.985: INFO: Node hunter-worker is running more than one daemon pod May 3 12:25:24.989: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 12:25:24.992: INFO: Number of nodes with available pods: 0 May 3 12:25:24.992: INFO: Node hunter-worker is running more than one daemon pod May 3 12:25:26.064: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 12:25:26.067: INFO: Number of nodes with available pods: 0 May 3 12:25:26.067: INFO: Node hunter-worker is running more than one daemon pod May 3 12:25:26.989: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 12:25:26.993: INFO: Number of nodes with available pods: 0 May 3 12:25:26.993: INFO: Node hunter-worker is running more than one daemon pod May 3 12:25:27.991: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 12:25:27.994: INFO: Number of nodes with available pods: 0 May 3 12:25:27.994: INFO: Node hunter-worker is running more than one daemon pod May 3 12:25:28.991: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node May 3 12:25:28.994: INFO: Number of nodes with available pods: 2 May 3 12:25:28.994: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 3 12:25:29.032: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 3 12:25:29.040: INFO: Number of nodes with available pods: 2 May 3 12:25:29.040: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-4fljd, will wait for the garbage collector to delete the pods May 3 12:25:30.131: INFO: Deleting DaemonSet.extensions daemon-set took: 6.458223ms May 3 12:25:30.331: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.269133ms May 3 12:25:41.835: INFO: Number of nodes with available pods: 0 May 3 12:25:41.835: INFO: Number of running nodes: 0, number of available pods: 0 May 3 12:25:41.837: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-4fljd/daemonsets","resourceVersion":"8537764"},"items":null} May 3 12:25:41.841: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-4fljd/pods","resourceVersion":"8537764"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:25:41.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-4fljd" for this suite. 
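The repeated "DaemonSet pods can't tolerate node hunter-control-plane with taints ... skip checking this node" lines explain why only the two worker nodes run daemon pods: the DaemonSet carries no toleration for the control plane's NoSchedule taint. A minimal DaemonSet sketch follows (names, labels and image are illustrative, not the test's manifest); the controller's recreation of a pod whose phase was forced to Failed is what the "revived" step above checks.

```go
package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// simpleDaemonSet runs one pause pod per schedulable node. Without a
// toleration for the master's NoSchedule taint, the control plane node
// is skipped, matching the log above.
func simpleDaemonSet() *appsv1.DaemonSet {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "k8s.gcr.io/pause:3.1",
					}},
				},
			},
		},
	}
}
```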
May 3 12:25:47.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:25:47.904: INFO: namespace: e2e-tests-daemonsets-4fljd, resource: bindings, ignored listing per whitelist May 3 12:25:47.951: INFO: namespace e2e-tests-daemonsets-4fljd deletion completed in 6.098838152s • [SLOW TEST:24.136 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:25:47.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions May 3 12:25:48.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 3 12:25:48.234: INFO: stderr: "" May 3 12:25:48.234: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:25:48.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ck8dv" for this suite. 
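`kubectl api-versions` above is a straight dump of the discovery endpoint's group/version list, and the conformance check only asserts that the core "v1" version appears in it. A client-go sketch of the same check (helper name invented):

```go
package sketch

import (
	"k8s.io/client-go/kubernetes"
)

// hasCoreV1 reports whether the core "v1" groupVersion is served, which is
// what the api-versions conformance check above verifies.
func hasCoreV1(cs kubernetes.Interface) (bool, error) {
	groups, err := cs.Discovery().ServerGroups()
	if err != nil {
		return false, err
	}
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			if v.GroupVersion == "v1" {
				return true, nil
			}
		}
	}
	return false, nil
}
```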
May 3 12:25:54.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:25:54.342: INFO: namespace: e2e-tests-kubectl-ck8dv, resource: bindings, ignored listing per whitelist May 3 12:25:54.381: INFO: namespace e2e-tests-kubectl-ck8dv deletion completed in 6.143160385s • [SLOW TEST:6.430 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:25:54.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-3bdfdfae-8d39-11ea-b78d-0242ac110017 STEP: Creating a pod to test consume secrets May 3 12:25:54.551: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3be06e47-8d39-11ea-b78d-0242ac110017" in namespace "e2e-tests-projected-djn8s" to be "success or failure" May 3 12:25:54.574: INFO: Pod "pod-projected-secrets-3be06e47-8d39-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 22.914033ms May 3 12:25:56.578: INFO: Pod "pod-projected-secrets-3be06e47-8d39-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027172862s May 3 12:25:58.581: INFO: Pod "pod-projected-secrets-3be06e47-8d39-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03058727s STEP: Saw pod success May 3 12:25:58.581: INFO: Pod "pod-projected-secrets-3be06e47-8d39-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 12:25:58.645: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-3be06e47-8d39-11ea-b78d-0242ac110017 container projected-secret-volume-test: STEP: delete the pod May 3 12:25:58.731: INFO: Waiting for pod pod-projected-secrets-3be06e47-8d39-11ea-b78d-0242ac110017 to disappear May 3 12:25:58.746: INFO: Pod pod-projected-secrets-3be06e47-8d39-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:25:58.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-djn8s" for this suite. 
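"With defaultMode set" refers to the projected volume's defaultMode field, which fixes the permission bits on every file projected from the secret. A minimal sketch, with pod/secret names, image, mount path and the 0400 mode chosen purely for illustration:

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedSecretPod projects a secret into the pod with defaultMode 0400,
// so every projected file is created read-only for its owner.
func projectedSecretPod(secretName string) *corev1.Pod {
	mode := int32(0400)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/projected"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret",
					MountPath: "/etc/projected",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-secret",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
							},
						}},
					},
				},
			}},
		},
	}
}
```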
May 3 12:26:04.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:26:04.861: INFO: namespace: e2e-tests-projected-djn8s, resource: bindings, ignored listing per whitelist May 3 12:26:04.868: INFO: namespace e2e-tests-projected-djn8s deletion completed in 6.118160537s • [SLOW TEST:10.487 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:26:04.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 3 12:26:15.079: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7bjt2 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 3 12:26:15.079: INFO: >>> kubeConfig: /root/.kube/config I0503 12:26:15.119609 6 log.go:172] (0xc0017c42c0) (0xc000c6dcc0) Create stream I0503 12:26:15.119650 6 log.go:172] (0xc0017c42c0) (0xc000c6dcc0) Stream added, broadcasting: 1 I0503 12:26:15.121669 6 log.go:172] (0xc0017c42c0) Reply frame received for 1 I0503 12:26:15.121719 6 log.go:172] (0xc0017c42c0) (0xc000b73400) Create stream I0503 12:26:15.121733 6 log.go:172] (0xc0017c42c0) (0xc000b73400) Stream added, broadcasting: 3 I0503 12:26:15.122724 6 log.go:172] (0xc0017c42c0) Reply frame received for 3 I0503 12:26:15.122777 6 log.go:172] (0xc0017c42c0) (0xc0018d3e00) Create stream I0503 12:26:15.122803 6 log.go:172] (0xc0017c42c0) (0xc0018d3e00) Stream added, broadcasting: 5 I0503 12:26:15.123733 6 log.go:172] (0xc0017c42c0) Reply frame received for 5 I0503 12:26:15.201089 6 log.go:172] (0xc0017c42c0) Data frame received for 5 I0503 12:26:15.201271 6 log.go:172] (0xc0018d3e00) (5) Data frame handling I0503 12:26:15.201330 6 log.go:172] (0xc0017c42c0) Data frame received for 3 I0503 12:26:15.201365 6 log.go:172] (0xc000b73400) (3) Data frame handling I0503 12:26:15.201385 6 log.go:172] (0xc000b73400) (3) Data frame sent I0503 12:26:15.201395 6 log.go:172] (0xc0017c42c0) Data frame received for 3 I0503 12:26:15.201406 6 log.go:172] (0xc000b73400) (3) Data frame handling I0503 12:26:15.203380 6 log.go:172] (0xc0017c42c0) Data frame received for 1 I0503 12:26:15.203412 6 log.go:172] (0xc000c6dcc0) (1) Data frame handling I0503 12:26:15.203428 6 log.go:172] (0xc000c6dcc0) (1) Data frame sent I0503 
12:26:15.203445 6 log.go:172] (0xc0017c42c0) (0xc000c6dcc0) Stream removed, broadcasting: 1 I0503 12:26:15.203468 6 log.go:172] (0xc0017c42c0) Go away received I0503 12:26:15.203733 6 log.go:172] (0xc0017c42c0) (0xc000c6dcc0) Stream removed, broadcasting: 1 I0503 12:26:15.203751 6 log.go:172] (0xc0017c42c0) (0xc000b73400) Stream removed, broadcasting: 3 I0503 12:26:15.203759 6 log.go:172] (0xc0017c42c0) (0xc0018d3e00) Stream removed, broadcasting: 5 May 3 12:26:15.203: INFO: Exec stderr: "" May 3 12:26:15.203: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7bjt2 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 3 12:26:15.203: INFO: >>> kubeConfig: /root/.kube/config I0503 12:26:15.234420 6 log.go:172] (0xc00184c0b0) (0xc00246b180) Create stream I0503 12:26:15.234453 6 log.go:172] (0xc00184c0b0) (0xc00246b180) Stream added, broadcasting: 1 I0503 12:26:15.237330 6 log.go:172] (0xc00184c0b0) Reply frame received for 1 I0503 12:26:15.237416 6 log.go:172] (0xc00184c0b0) (0xc00246b220) Create stream I0503 12:26:15.237437 6 log.go:172] (0xc00184c0b0) (0xc00246b220) Stream added, broadcasting: 3 I0503 12:26:15.238954 6 log.go:172] (0xc00184c0b0) Reply frame received for 3 I0503 12:26:15.239038 6 log.go:172] (0xc00184c0b0) (0xc001a120a0) Create stream I0503 12:26:15.239058 6 log.go:172] (0xc00184c0b0) (0xc001a120a0) Stream added, broadcasting: 5 I0503 12:26:15.240187 6 log.go:172] (0xc00184c0b0) Reply frame received for 5 I0503 12:26:15.312589 6 log.go:172] (0xc00184c0b0) Data frame received for 5 I0503 12:26:15.312648 6 log.go:172] (0xc001a120a0) (5) Data frame handling I0503 12:26:15.312688 6 log.go:172] (0xc00184c0b0) Data frame received for 3 I0503 12:26:15.312703 6 log.go:172] (0xc00246b220) (3) Data frame handling I0503 12:26:15.312730 6 log.go:172] (0xc00246b220) (3) Data frame sent I0503 12:26:15.312762 6 log.go:172] (0xc00184c0b0) Data frame received for 3 I0503 12:26:15.312789 6 log.go:172] (0xc00246b220) (3) Data frame handling I0503 12:26:15.314222 6 log.go:172] (0xc00184c0b0) Data frame received for 1 I0503 12:26:15.314265 6 log.go:172] (0xc00246b180) (1) Data frame handling I0503 12:26:15.314294 6 log.go:172] (0xc00246b180) (1) Data frame sent I0503 12:26:15.314321 6 log.go:172] (0xc00184c0b0) (0xc00246b180) Stream removed, broadcasting: 1 I0503 12:26:15.314415 6 log.go:172] (0xc00184c0b0) (0xc00246b180) Stream removed, broadcasting: 1 I0503 12:26:15.314444 6 log.go:172] (0xc00184c0b0) (0xc00246b220) Stream removed, broadcasting: 3 I0503 12:26:15.314460 6 log.go:172] (0xc00184c0b0) (0xc001a120a0) Stream removed, broadcasting: 5 May 3 12:26:15.314: INFO: Exec stderr: "" May 3 12:26:15.314: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7bjt2 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} I0503 12:26:15.314536 6 log.go:172] (0xc00184c0b0) Go away received May 3 12:26:15.314: INFO: >>> kubeConfig: /root/.kube/config I0503 12:26:15.346699 6 log.go:172] (0xc000c248f0) (0xc000b73680) Create stream I0503 12:26:15.346722 6 log.go:172] (0xc000c248f0) (0xc000b73680) Stream added, broadcasting: 1 I0503 12:26:15.348521 6 log.go:172] (0xc000c248f0) Reply frame received for 1 I0503 12:26:15.348550 6 log.go:172] (0xc000c248f0) (0xc001a121e0) Create stream I0503 12:26:15.348558 6 log.go:172] (0xc000c248f0) (0xc001a121e0) Stream added, broadcasting: 3 I0503 12:26:15.349652 6 
log.go:172] (0xc000c248f0) Reply frame received for 3 I0503 12:26:15.349695 6 log.go:172] (0xc000c248f0) (0xc001a12320) Create stream I0503 12:26:15.349711 6 log.go:172] (0xc000c248f0) (0xc001a12320) Stream added, broadcasting: 5 I0503 12:26:15.350621 6 log.go:172] (0xc000c248f0) Reply frame received for 5 I0503 12:26:15.430206 6 log.go:172] (0xc000c248f0) Data frame received for 3 I0503 12:26:15.430265 6 log.go:172] (0xc001a121e0) (3) Data frame handling I0503 12:26:15.430285 6 log.go:172] (0xc001a121e0) (3) Data frame sent I0503 12:26:15.430302 6 log.go:172] (0xc000c248f0) Data frame received for 3 I0503 12:26:15.430310 6 log.go:172] (0xc001a121e0) (3) Data frame handling I0503 12:26:15.430346 6 log.go:172] (0xc000c248f0) Data frame received for 5 I0503 12:26:15.430377 6 log.go:172] (0xc001a12320) (5) Data frame handling I0503 12:26:15.432075 6 log.go:172] (0xc000c248f0) Data frame received for 1 I0503 12:26:15.432098 6 log.go:172] (0xc000b73680) (1) Data frame handling I0503 12:26:15.432120 6 log.go:172] (0xc000b73680) (1) Data frame sent I0503 12:26:15.432146 6 log.go:172] (0xc000c248f0) (0xc000b73680) Stream removed, broadcasting: 1 I0503 12:26:15.432248 6 log.go:172] (0xc000c248f0) Go away received I0503 12:26:15.432306 6 log.go:172] (0xc000c248f0) (0xc000b73680) Stream removed, broadcasting: 1 I0503 12:26:15.432373 6 log.go:172] (0xc000c248f0) (0xc001a121e0) Stream removed, broadcasting: 3 I0503 12:26:15.432397 6 log.go:172] (0xc000c248f0) (0xc001a12320) Stream removed, broadcasting: 5 May 3 12:26:15.432: INFO: Exec stderr: "" May 3 12:26:15.432: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7bjt2 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 3 12:26:15.432: INFO: >>> kubeConfig: /root/.kube/config I0503 12:26:15.461786 6 log.go:172] (0xc0017c4790) (0xc0028de0a0) Create stream I0503 12:26:15.461823 6 log.go:172] (0xc0017c4790) (0xc0028de0a0) Stream added, broadcasting: 1 I0503 12:26:15.463925 6 log.go:172] (0xc0017c4790) Reply frame received for 1 I0503 12:26:15.463964 6 log.go:172] (0xc0017c4790) (0xc000b737c0) Create stream I0503 12:26:15.463979 6 log.go:172] (0xc0017c4790) (0xc000b737c0) Stream added, broadcasting: 3 I0503 12:26:15.464906 6 log.go:172] (0xc0017c4790) Reply frame received for 3 I0503 12:26:15.464938 6 log.go:172] (0xc0017c4790) (0xc001a123c0) Create stream I0503 12:26:15.464956 6 log.go:172] (0xc0017c4790) (0xc001a123c0) Stream added, broadcasting: 5 I0503 12:26:15.466157 6 log.go:172] (0xc0017c4790) Reply frame received for 5 I0503 12:26:15.521729 6 log.go:172] (0xc0017c4790) Data frame received for 3 I0503 12:26:15.521774 6 log.go:172] (0xc000b737c0) (3) Data frame handling I0503 12:26:15.521845 6 log.go:172] (0xc000b737c0) (3) Data frame sent I0503 12:26:15.521937 6 log.go:172] (0xc0017c4790) Data frame received for 5 I0503 12:26:15.521971 6 log.go:172] (0xc001a123c0) (5) Data frame handling I0503 12:26:15.522005 6 log.go:172] (0xc0017c4790) Data frame received for 3 I0503 12:26:15.522025 6 log.go:172] (0xc000b737c0) (3) Data frame handling I0503 12:26:15.523240 6 log.go:172] (0xc0017c4790) Data frame received for 1 I0503 12:26:15.523263 6 log.go:172] (0xc0028de0a0) (1) Data frame handling I0503 12:26:15.523273 6 log.go:172] (0xc0028de0a0) (1) Data frame sent I0503 12:26:15.523281 6 log.go:172] (0xc0017c4790) (0xc0028de0a0) Stream removed, broadcasting: 1 I0503 12:26:15.523350 6 log.go:172] (0xc0017c4790) Go away received I0503 
12:26:15.523408 6 log.go:172] (0xc0017c4790) (0xc0028de0a0) Stream removed, broadcasting: 1 I0503 12:26:15.523436 6 log.go:172] (0xc0017c4790) (0xc000b737c0) Stream removed, broadcasting: 3 I0503 12:26:15.523457 6 log.go:172] (0xc0017c4790) (0xc001a123c0) Stream removed, broadcasting: 5 May 3 12:26:15.523: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 3 12:26:15.523: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7bjt2 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 3 12:26:15.523: INFO: >>> kubeConfig: /root/.kube/config I0503 12:26:15.559031 6 log.go:172] (0xc000da0630) (0xc001a12640) Create stream I0503 12:26:15.559060 6 log.go:172] (0xc000da0630) (0xc001a12640) Stream added, broadcasting: 1 I0503 12:26:15.569706 6 log.go:172] (0xc000da0630) Reply frame received for 1 I0503 12:26:15.569832 6 log.go:172] (0xc000da0630) (0xc001a126e0) Create stream I0503 12:26:15.569856 6 log.go:172] (0xc000da0630) (0xc001a126e0) Stream added, broadcasting: 3 I0503 12:26:15.571095 6 log.go:172] (0xc000da0630) Reply frame received for 3 I0503 12:26:15.571147 6 log.go:172] (0xc000da0630) (0xc0025ec000) Create stream I0503 12:26:15.571162 6 log.go:172] (0xc000da0630) (0xc0025ec000) Stream added, broadcasting: 5 I0503 12:26:15.572087 6 log.go:172] (0xc000da0630) Reply frame received for 5 I0503 12:26:15.634035 6 log.go:172] (0xc000da0630) Data frame received for 3 I0503 12:26:15.634085 6 log.go:172] (0xc001a126e0) (3) Data frame handling I0503 12:26:15.634108 6 log.go:172] (0xc001a126e0) (3) Data frame sent I0503 12:26:15.634129 6 log.go:172] (0xc000da0630) Data frame received for 5 I0503 12:26:15.634149 6 log.go:172] (0xc0025ec000) (5) Data frame handling I0503 12:26:15.634178 6 log.go:172] (0xc000da0630) Data frame received for 3 I0503 12:26:15.634205 6 log.go:172] (0xc001a126e0) (3) Data frame handling I0503 12:26:15.635854 6 log.go:172] (0xc000da0630) Data frame received for 1 I0503 12:26:15.635884 6 log.go:172] (0xc001a12640) (1) Data frame handling I0503 12:26:15.635902 6 log.go:172] (0xc001a12640) (1) Data frame sent I0503 12:26:15.635924 6 log.go:172] (0xc000da0630) (0xc001a12640) Stream removed, broadcasting: 1 I0503 12:26:15.635948 6 log.go:172] (0xc000da0630) Go away received I0503 12:26:15.636134 6 log.go:172] (0xc000da0630) (0xc001a12640) Stream removed, broadcasting: 1 I0503 12:26:15.636160 6 log.go:172] (0xc000da0630) (0xc001a126e0) Stream removed, broadcasting: 3 I0503 12:26:15.636178 6 log.go:172] (0xc000da0630) (0xc0025ec000) Stream removed, broadcasting: 5 May 3 12:26:15.636: INFO: Exec stderr: "" May 3 12:26:15.636: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7bjt2 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 3 12:26:15.636: INFO: >>> kubeConfig: /root/.kube/config I0503 12:26:15.669591 6 log.go:172] (0xc000c24630) (0xc002aea1e0) Create stream I0503 12:26:15.669625 6 log.go:172] (0xc000c24630) (0xc002aea1e0) Stream added, broadcasting: 1 I0503 12:26:15.671876 6 log.go:172] (0xc000c24630) Reply frame received for 1 I0503 12:26:15.671919 6 log.go:172] (0xc000c24630) (0xc0025ec140) Create stream I0503 12:26:15.671934 6 log.go:172] (0xc000c24630) (0xc0025ec140) Stream added, broadcasting: 3 I0503 12:26:15.672808 6 log.go:172] (0xc000c24630) Reply frame received for 3 I0503 
12:26:15.672859 6 log.go:172] (0xc000c24630) (0xc0018d20a0) Create stream I0503 12:26:15.672888 6 log.go:172] (0xc000c24630) (0xc0018d20a0) Stream added, broadcasting: 5 I0503 12:26:15.673991 6 log.go:172] (0xc000c24630) Reply frame received for 5 I0503 12:26:15.732605 6 log.go:172] (0xc000c24630) Data frame received for 3 I0503 12:26:15.732659 6 log.go:172] (0xc0025ec140) (3) Data frame handling I0503 12:26:15.732682 6 log.go:172] (0xc0025ec140) (3) Data frame sent I0503 12:26:15.732697 6 log.go:172] (0xc000c24630) Data frame received for 3 I0503 12:26:15.732710 6 log.go:172] (0xc0025ec140) (3) Data frame handling I0503 12:26:15.732751 6 log.go:172] (0xc000c24630) Data frame received for 5 I0503 12:26:15.732777 6 log.go:172] (0xc0018d20a0) (5) Data frame handling I0503 12:26:15.734949 6 log.go:172] (0xc000c24630) Data frame received for 1 I0503 12:26:15.735011 6 log.go:172] (0xc002aea1e0) (1) Data frame handling I0503 12:26:15.735042 6 log.go:172] (0xc002aea1e0) (1) Data frame sent I0503 12:26:15.735071 6 log.go:172] (0xc000c24630) (0xc002aea1e0) Stream removed, broadcasting: 1 I0503 12:26:15.735094 6 log.go:172] (0xc000c24630) Go away received I0503 12:26:15.735476 6 log.go:172] (0xc000c24630) (0xc002aea1e0) Stream removed, broadcasting: 1 I0503 12:26:15.735504 6 log.go:172] (0xc000c24630) (0xc0025ec140) Stream removed, broadcasting: 3 I0503 12:26:15.735522 6 log.go:172] (0xc000c24630) (0xc0018d20a0) Stream removed, broadcasting: 5 May 3 12:26:15.735: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 3 12:26:15.735: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7bjt2 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 3 12:26:15.735: INFO: >>> kubeConfig: /root/.kube/config I0503 12:26:15.767593 6 log.go:172] (0xc000da04d0) (0xc0025ec3c0) Create stream I0503 12:26:15.767644 6 log.go:172] (0xc000da04d0) (0xc0025ec3c0) Stream added, broadcasting: 1 I0503 12:26:15.769956 6 log.go:172] (0xc000da04d0) Reply frame received for 1 I0503 12:26:15.769996 6 log.go:172] (0xc000da04d0) (0xc0007e8000) Create stream I0503 12:26:15.770011 6 log.go:172] (0xc000da04d0) (0xc0007e8000) Stream added, broadcasting: 3 I0503 12:26:15.770996 6 log.go:172] (0xc000da04d0) Reply frame received for 3 I0503 12:26:15.771053 6 log.go:172] (0xc000da04d0) (0xc0007e8280) Create stream I0503 12:26:15.771068 6 log.go:172] (0xc000da04d0) (0xc0007e8280) Stream added, broadcasting: 5 I0503 12:26:15.772003 6 log.go:172] (0xc000da04d0) Reply frame received for 5 I0503 12:26:15.818572 6 log.go:172] (0xc000da04d0) Data frame received for 5 I0503 12:26:15.818615 6 log.go:172] (0xc0007e8280) (5) Data frame handling I0503 12:26:15.818654 6 log.go:172] (0xc000da04d0) Data frame received for 3 I0503 12:26:15.818667 6 log.go:172] (0xc0007e8000) (3) Data frame handling I0503 12:26:15.818682 6 log.go:172] (0xc0007e8000) (3) Data frame sent I0503 12:26:15.818694 6 log.go:172] (0xc000da04d0) Data frame received for 3 I0503 12:26:15.818699 6 log.go:172] (0xc0007e8000) (3) Data frame handling I0503 12:26:15.820019 6 log.go:172] (0xc000da04d0) Data frame received for 1 I0503 12:26:15.820049 6 log.go:172] (0xc0025ec3c0) (1) Data frame handling I0503 12:26:15.820082 6 log.go:172] (0xc0025ec3c0) (1) Data frame sent I0503 12:26:15.820103 6 log.go:172] (0xc000da04d0) (0xc0025ec3c0) Stream removed, broadcasting: 1 I0503 12:26:15.820188 6 log.go:172] 
(0xc000da04d0) Go away received I0503 12:26:15.820237 6 log.go:172] (0xc000da04d0) (0xc0025ec3c0) Stream removed, broadcasting: 1 I0503 12:26:15.820277 6 log.go:172] (0xc000da04d0) (0xc0007e8000) Stream removed, broadcasting: 3 I0503 12:26:15.820304 6 log.go:172] (0xc000da04d0) (0xc0007e8280) Stream removed, broadcasting: 5 May 3 12:26:15.820: INFO: Exec stderr: "" May 3 12:26:15.820: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7bjt2 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 3 12:26:15.820: INFO: >>> kubeConfig: /root/.kube/config I0503 12:26:15.871196 6 log.go:172] (0xc000c24b00) (0xc002aea3c0) Create stream I0503 12:26:15.871228 6 log.go:172] (0xc000c24b00) (0xc002aea3c0) Stream added, broadcasting: 1 I0503 12:26:15.873435 6 log.go:172] (0xc000c24b00) Reply frame received for 1 I0503 12:26:15.873482 6 log.go:172] (0xc000c24b00) (0xc000bfa1e0) Create stream I0503 12:26:15.873494 6 log.go:172] (0xc000c24b00) (0xc000bfa1e0) Stream added, broadcasting: 3 I0503 12:26:15.874528 6 log.go:172] (0xc000c24b00) Reply frame received for 3 I0503 12:26:15.874562 6 log.go:172] (0xc000c24b00) (0xc0018d2140) Create stream I0503 12:26:15.874572 6 log.go:172] (0xc000c24b00) (0xc0018d2140) Stream added, broadcasting: 5 I0503 12:26:15.875388 6 log.go:172] (0xc000c24b00) Reply frame received for 5 I0503 12:26:15.930949 6 log.go:172] (0xc000c24b00) Data frame received for 5 I0503 12:26:15.931009 6 log.go:172] (0xc0018d2140) (5) Data frame handling I0503 12:26:15.931076 6 log.go:172] (0xc000c24b00) Data frame received for 3 I0503 12:26:15.931121 6 log.go:172] (0xc000bfa1e0) (3) Data frame handling I0503 12:26:15.931144 6 log.go:172] (0xc000bfa1e0) (3) Data frame sent I0503 12:26:15.931160 6 log.go:172] (0xc000c24b00) Data frame received for 3 I0503 12:26:15.931172 6 log.go:172] (0xc000bfa1e0) (3) Data frame handling I0503 12:26:15.932993 6 log.go:172] (0xc000c24b00) Data frame received for 1 I0503 12:26:15.933029 6 log.go:172] (0xc002aea3c0) (1) Data frame handling I0503 12:26:15.933056 6 log.go:172] (0xc002aea3c0) (1) Data frame sent I0503 12:26:15.933091 6 log.go:172] (0xc000c24b00) (0xc002aea3c0) Stream removed, broadcasting: 1 I0503 12:26:15.933376 6 log.go:172] (0xc000c24b00) Go away received I0503 12:26:15.933413 6 log.go:172] (0xc000c24b00) (0xc002aea3c0) Stream removed, broadcasting: 1 I0503 12:26:15.933445 6 log.go:172] (0xc000c24b00) (0xc000bfa1e0) Stream removed, broadcasting: 3 I0503 12:26:15.933460 6 log.go:172] (0xc000c24b00) (0xc0018d2140) Stream removed, broadcasting: 5 May 3 12:26:15.933: INFO: Exec stderr: "" May 3 12:26:15.933: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7bjt2 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 3 12:26:15.933: INFO: >>> kubeConfig: /root/.kube/config I0503 12:26:15.965400 6 log.go:172] (0xc0000eaf20) (0xc000bfa820) Create stream I0503 12:26:15.965435 6 log.go:172] (0xc0000eaf20) (0xc000bfa820) Stream added, broadcasting: 1 I0503 12:26:15.967844 6 log.go:172] (0xc0000eaf20) Reply frame received for 1 I0503 12:26:15.967876 6 log.go:172] (0xc0000eaf20) (0xc0018d21e0) Create stream I0503 12:26:15.967886 6 log.go:172] (0xc0000eaf20) (0xc0018d21e0) Stream added, broadcasting: 3 I0503 12:26:15.968931 6 log.go:172] (0xc0000eaf20) Reply frame received for 3 I0503 12:26:15.968975 6 log.go:172] (0xc0000eaf20) 
(0xc0007e8460) Create stream I0503 12:26:15.968990 6 log.go:172] (0xc0000eaf20) (0xc0007e8460) Stream added, broadcasting: 5 I0503 12:26:15.970223 6 log.go:172] (0xc0000eaf20) Reply frame received for 5 I0503 12:26:16.023278 6 log.go:172] (0xc0000eaf20) Data frame received for 5 I0503 12:26:16.023334 6 log.go:172] (0xc0007e8460) (5) Data frame handling I0503 12:26:16.023383 6 log.go:172] (0xc0000eaf20) Data frame received for 3 I0503 12:26:16.023405 6 log.go:172] (0xc0018d21e0) (3) Data frame handling I0503 12:26:16.023437 6 log.go:172] (0xc0018d21e0) (3) Data frame sent I0503 12:26:16.023458 6 log.go:172] (0xc0000eaf20) Data frame received for 3 I0503 12:26:16.023475 6 log.go:172] (0xc0018d21e0) (3) Data frame handling I0503 12:26:16.024856 6 log.go:172] (0xc0000eaf20) Data frame received for 1 I0503 12:26:16.024884 6 log.go:172] (0xc000bfa820) (1) Data frame handling I0503 12:26:16.024900 6 log.go:172] (0xc000bfa820) (1) Data frame sent I0503 12:26:16.024963 6 log.go:172] (0xc0000eaf20) (0xc000bfa820) Stream removed, broadcasting: 1 I0503 12:26:16.025028 6 log.go:172] (0xc0000eaf20) Go away received I0503 12:26:16.025079 6 log.go:172] (0xc0000eaf20) (0xc000bfa820) Stream removed, broadcasting: 1 I0503 12:26:16.025102 6 log.go:172] (0xc0000eaf20) (0xc0018d21e0) Stream removed, broadcasting: 3 I0503 12:26:16.025273 6 log.go:172] (0xc0000eaf20) (0xc0007e8460) Stream removed, broadcasting: 5 May 3 12:26:16.025: INFO: Exec stderr: "" May 3 12:26:16.025: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7bjt2 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 3 12:26:16.025: INFO: >>> kubeConfig: /root/.kube/config I0503 12:26:16.054444 6 log.go:172] (0xc001c282c0) (0xc0007e8780) Create stream I0503 12:26:16.054473 6 log.go:172] (0xc001c282c0) (0xc0007e8780) Stream added, broadcasting: 1 I0503 12:26:16.056501 6 log.go:172] (0xc001c282c0) Reply frame received for 1 I0503 12:26:16.056542 6 log.go:172] (0xc001c282c0) (0xc0018d2280) Create stream I0503 12:26:16.056555 6 log.go:172] (0xc001c282c0) (0xc0018d2280) Stream added, broadcasting: 3 I0503 12:26:16.057540 6 log.go:172] (0xc001c282c0) Reply frame received for 3 I0503 12:26:16.057570 6 log.go:172] (0xc001c282c0) (0xc0018d2320) Create stream I0503 12:26:16.057581 6 log.go:172] (0xc001c282c0) (0xc0018d2320) Stream added, broadcasting: 5 I0503 12:26:16.058408 6 log.go:172] (0xc001c282c0) Reply frame received for 5 I0503 12:26:16.128823 6 log.go:172] (0xc001c282c0) Data frame received for 5 I0503 12:26:16.128861 6 log.go:172] (0xc0018d2320) (5) Data frame handling I0503 12:26:16.128886 6 log.go:172] (0xc001c282c0) Data frame received for 3 I0503 12:26:16.128899 6 log.go:172] (0xc0018d2280) (3) Data frame handling I0503 12:26:16.128911 6 log.go:172] (0xc0018d2280) (3) Data frame sent I0503 12:26:16.128928 6 log.go:172] (0xc001c282c0) Data frame received for 3 I0503 12:26:16.128937 6 log.go:172] (0xc0018d2280) (3) Data frame handling I0503 12:26:16.130717 6 log.go:172] (0xc001c282c0) Data frame received for 1 I0503 12:26:16.130747 6 log.go:172] (0xc0007e8780) (1) Data frame handling I0503 12:26:16.130777 6 log.go:172] (0xc0007e8780) (1) Data frame sent I0503 12:26:16.130817 6 log.go:172] (0xc001c282c0) (0xc0007e8780) Stream removed, broadcasting: 1 I0503 12:26:16.130843 6 log.go:172] (0xc001c282c0) Go away received I0503 12:26:16.130928 6 log.go:172] (0xc001c282c0) (0xc0007e8780) Stream removed, broadcasting: 1 I0503 
12:26:16.130960 6 log.go:172] (0xc001c282c0) (0xc0018d2280) Stream removed, broadcasting: 3 I0503 12:26:16.130974 6 log.go:172] (0xc001c282c0) (0xc0018d2320) Stream removed, broadcasting: 5 May 3 12:26:16.130: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:26:16.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-7bjt2" for this suite. May 3 12:27:06.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:27:06.206: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-7bjt2, resource: bindings, ignored listing per whitelist May 3 12:27:06.217: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-7bjt2 deletion completed in 50.082956041s • [SLOW TEST:61.349 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:27:06.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 3 12:27:10.859: INFO: Successfully updated pod "annotationupdate66a77fb9-8d39-11ea-b78d-0242ac110017" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:27:12.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8r5ns" for this suite. 
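The annotation-update spec relies on the downward API projection refreshing a mounted file after the pod's metadata changes. A rough sketch of the same behaviour, using the hypothetical pod name annotation-demo; the kubelet refreshes projected volumes periodically, so the second cat may take up to a minute to show the new value:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo
  annotations:
    build: "one"
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
kubectl exec annotation-demo -- cat /etc/podinfo/annotations
kubectl annotate pod annotation-demo build=two --overwrite
kubectl exec annotation-demo -- cat /etc/podinfo/annotations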
May 3 12:27:34.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:27:34.943: INFO: namespace: e2e-tests-projected-8r5ns, resource: bindings, ignored listing per whitelist May 3 12:27:35.067: INFO: namespace e2e-tests-projected-8r5ns deletion completed in 22.146715728s • [SLOW TEST:28.849 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:27:35.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy May 3 12:27:35.173: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix342157159/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:27:35.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-kmmwl" for this suite. 
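The --unix-socket proxy test above only checks that /api/ is reachable over the socket. A minimal sketch of the same check, assuming a curl build with unix-socket support and an arbitrary socket path:

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
sleep 1
curl --silent --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
kill $!   # stop the background proxy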
May 3 12:27:41.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:27:41.366: INFO: namespace: e2e-tests-kubectl-kmmwl, resource: bindings, ignored listing per whitelist May 3 12:27:41.381: INFO: namespace e2e-tests-kubectl-kmmwl deletion completed in 6.110512378s • [SLOW TEST:6.315 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:27:41.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-7b9ab7af-8d39-11ea-b78d-0242ac110017 May 3 12:27:41.497: INFO: Pod name my-hostname-basic-7b9ab7af-8d39-11ea-b78d-0242ac110017: Found 0 pods out of 1 May 3 12:27:46.502: INFO: Pod name my-hostname-basic-7b9ab7af-8d39-11ea-b78d-0242ac110017: Found 1 pods out of 1 May 3 12:27:46.502: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-7b9ab7af-8d39-11ea-b78d-0242ac110017" are running May 3 12:27:46.506: INFO: Pod "my-hostname-basic-7b9ab7af-8d39-11ea-b78d-0242ac110017-6r9sw" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-03 12:27:41 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-03 12:27:44 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-03 12:27:44 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-03 12:27:41 +0000 UTC Reason: Message:}]) May 3 12:27:46.506: INFO: Trying to dial the pod May 3 12:27:51.518: INFO: Controller my-hostname-basic-7b9ab7af-8d39-11ea-b78d-0242ac110017: Got expected result from replica 1 [my-hostname-basic-7b9ab7af-8d39-11ea-b78d-0242ac110017-6r9sw]: "my-hostname-basic-7b9ab7af-8d39-11ea-b78d-0242ac110017-6r9sw", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:27:51.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-lqrh4" for this suite. 
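The ReplicationController spec above just waits for one replica to come up and dials it. A minimal sketch of a comparable controller, with my-hostname-basic as a hypothetical name and nginx standing in for whatever public image the spec actually uses:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: serve-hostname
        image: nginx
        ports:
        - containerPort: 80
EOF
kubectl get pods -l name=my-hostname-basic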
May 3 12:27:57.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:27:57.600: INFO: namespace: e2e-tests-replication-controller-lqrh4, resource: bindings, ignored listing per whitelist May 3 12:27:57.637: INFO: namespace e2e-tests-replication-controller-lqrh4 deletion completed in 6.114874498s • [SLOW TEST:16.255 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:27:57.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 3 12:27:57.801: INFO: Waiting up to 5m0s for pod "downwardapi-volume-85571578-8d39-11ea-b78d-0242ac110017" in namespace "e2e-tests-downward-api-9c8kc" to be "success or failure" May 3 12:27:57.822: INFO: Pod "downwardapi-volume-85571578-8d39-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 20.530166ms May 3 12:27:59.825: INFO: Pod "downwardapi-volume-85571578-8d39-11ea-b78d-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024265279s May 3 12:28:01.830: INFO: Pod "downwardapi-volume-85571578-8d39-11ea-b78d-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028482487s STEP: Saw pod success May 3 12:28:01.830: INFO: Pod "downwardapi-volume-85571578-8d39-11ea-b78d-0242ac110017" satisfied condition "success or failure" May 3 12:28:01.832: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-85571578-8d39-11ea-b78d-0242ac110017 container client-container: STEP: delete the pod May 3 12:28:01.888: INFO: Waiting for pod downwardapi-volume-85571578-8d39-11ea-b78d-0242ac110017 to disappear May 3 12:28:01.895: INFO: Pod downwardapi-volume-85571578-8d39-11ea-b78d-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:28:01.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-9c8kc" for this suite. 
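The downward-api spec above mounts the container's CPU request into a file and expects the pod to succeed after printing it. A minimal sketch, using the hypothetical pod name downward-cpu-demo; with divisor 1m the file holds the request in millicores (250 for the values below):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m
EOF
kubectl logs downward-cpu-demo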
May 3 12:28:07.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:28:07.920: INFO: namespace: e2e-tests-downward-api-9c8kc, resource: bindings, ignored listing per whitelist May 3 12:28:07.979: INFO: namespace e2e-tests-downward-api-9c8kc deletion completed in 6.081288895s • [SLOW TEST:10.342 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:28:07.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server May 3 12:28:08.067: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:28:08.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-d8fwn" for this suite. 
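The --port 0 variant asks the proxy to bind an ephemeral port, so the port has to be read back from the proxy's own output before /api/ can be curled. A rough sketch, assuming the proxy reports its listen address as "Starting to serve on 127.0.0.1:<port>"; the sed expression is only one way to pull the port out:

kubectl proxy --port=0 >/tmp/proxy.out 2>&1 &
sleep 1
PORT=$(sed -n 's/.*127\.0\.0\.1:\([0-9][0-9]*\).*/\1/p' /tmp/proxy.out)
curl --silent "http://127.0.0.1:${PORT}/api/"
kill $!   # stop the background proxy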
May 3 12:28:14.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:28:14.178: INFO: namespace: e2e-tests-kubectl-d8fwn, resource: bindings, ignored listing per whitelist May 3 12:28:14.268: INFO: namespace e2e-tests-kubectl-d8fwn deletion completed in 6.110302864s • [SLOW TEST:6.289 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:28:14.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 3 12:28:14.386: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 6.018055ms)
May 3 12:28:14.390: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 4.076823ms)
May 3 12:28:14.393: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.054192ms)
May 3 12:28:14.396: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.499581ms)
May 3 12:28:14.399: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.780063ms)
May 3 12:28:14.402: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.846133ms)
May 3 12:28:14.405: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.027739ms)
May 3 12:28:14.408: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.918479ms)
May 3 12:28:14.411: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.925922ms)
May 3 12:28:14.414: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.220814ms)
May 3 12:28:14.417: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.235667ms)
May 3 12:28:14.421: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.676887ms)
May 3 12:28:14.424: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.123532ms)
May 3 12:28:14.427: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.123389ms)
May 3 12:28:14.430: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.66172ms)
May 3 12:28:14.433: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.9165ms)
May 3 12:28:14.436: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.772486ms)
May 3 12:28:14.438: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.848008ms)
May 3 12:28:14.441: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.934592ms)
May 3 12:28:14.487: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/
(200; 45.500084ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:28:14.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-v5qd2" for this suite. May 3 12:28:20.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:28:20.543: INFO: namespace: e2e-tests-proxy-v5qd2, resource: bindings, ignored listing per whitelist May 3 12:28:20.587: INFO: namespace e2e-tests-proxy-v5qd2 deletion completed in 6.096227305s • [SLOW TEST:6.319 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 3 12:28:20.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller May 3 12:28:20.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ssrz4' May 3 12:28:20.981: INFO: stderr: "" May 3 12:28:20.981: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 3 12:28:20.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ssrz4' May 3 12:28:21.089: INFO: stderr: "" May 3 12:28:21.089: INFO: stdout: "update-demo-nautilus-glwmp update-demo-nautilus-kmfzb " May 3 12:28:21.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-glwmp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ssrz4' May 3 12:28:21.210: INFO: stderr: "" May 3 12:28:21.210: INFO: stdout: "" May 3 12:28:21.210: INFO: update-demo-nautilus-glwmp is created but not running May 3 12:28:26.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ssrz4' May 3 12:28:26.313: INFO: stderr: "" May 3 12:28:26.313: INFO: stdout: "update-demo-nautilus-glwmp update-demo-nautilus-kmfzb " May 3 12:28:26.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-glwmp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ssrz4' May 3 12:28:26.406: INFO: stderr: "" May 3 12:28:26.406: INFO: stdout: "true" May 3 12:28:26.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-glwmp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ssrz4' May 3 12:28:26.511: INFO: stderr: "" May 3 12:28:26.511: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 3 12:28:26.511: INFO: validating pod update-demo-nautilus-glwmp May 3 12:28:26.515: INFO: got data: { "image": "nautilus.jpg" } May 3 12:28:26.515: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 3 12:28:26.515: INFO: update-demo-nautilus-glwmp is verified up and running May 3 12:28:26.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kmfzb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ssrz4' May 3 12:28:26.614: INFO: stderr: "" May 3 12:28:26.614: INFO: stdout: "true" May 3 12:28:26.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kmfzb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ssrz4' May 3 12:28:26.713: INFO: stderr: "" May 3 12:28:26.713: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 3 12:28:26.713: INFO: validating pod update-demo-nautilus-kmfzb May 3 12:28:26.717: INFO: got data: { "image": "nautilus.jpg" } May 3 12:28:26.717: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 3 12:28:26.717: INFO: update-demo-nautilus-kmfzb is verified up and running STEP: using delete to clean up resources May 3 12:28:26.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ssrz4' May 3 12:28:26.826: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 3 12:28:26.826: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 3 12:28:26.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-ssrz4' May 3 12:28:26.929: INFO: stderr: "No resources found.\n" May 3 12:28:26.929: INFO: stdout: "" May 3 12:28:26.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-ssrz4 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 3 12:28:27.128: INFO: stderr: "" May 3 12:28:27.128: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 3 12:28:27.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ssrz4" for this suite. May 3 12:28:33.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 3 12:28:33.334: INFO: namespace: e2e-tests-kubectl-ssrz4, resource: bindings, ignored listing per whitelist May 3 12:28:33.354: INFO: namespace e2e-tests-kubectl-ssrz4 deletion completed in 6.175221337s • [SLOW TEST:12.766 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSMay 3 12:28:33.354: INFO: Running AfterSuite actions on all nodes May 3 12:28:33.354: INFO: Running AfterSuite actions on node 1 May 3 12:28:33.354: INFO: Skipping dumping logs from cluster Ran 200 of 2164 Specs in 6109.454 seconds SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped PASS