I0608 10:54:38.324702 6 e2e.go:224] Starting e2e run "7261e210-a976-11ea-978f-0242ac110018" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1591613677 - Will randomize all specs
Will run 201 of 2164 specs

Jun 8 10:54:38.512: INFO: >>> kubeConfig: /root/.kube/config
Jun 8 10:54:38.515: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 8 10:54:38.530: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 8 10:54:38.564: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 8 10:54:38.564: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jun 8 10:54:38.564: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 8 10:54:38.583: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jun 8 10:54:38.583: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 8 10:54:38.583: INFO: e2e test version: v1.13.12
Jun 8 10:54:38.584: INFO: kube-apiserver version: v1.13.12
SSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 10:54:38.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
Jun 8 10:54:39.823: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jun 8 10:54:39.841: INFO: Waiting up to 5m0s for pod "pod-738f9087-a976-11ea-978f-0242ac110018" in namespace "e2e-tests-emptydir-9bs8r" to be "success or failure"
Jun 8 10:54:39.900: INFO: Pod "pod-738f9087-a976-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 58.497588ms
Jun 8 10:54:42.195: INFO: Pod "pod-738f9087-a976-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.353432829s
Jun 8 10:54:44.199: INFO: Pod "pod-738f9087-a976-11ea-978f-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.357838728s
Jun 8 10:54:46.203: INFO: Pod "pod-738f9087-a976-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.361910964s
STEP: Saw pod success
Jun 8 10:54:46.203: INFO: Pod "pod-738f9087-a976-11ea-978f-0242ac110018" satisfied condition "success or failure"
Jun 8 10:54:46.206: INFO: Trying to get logs from node hunter-worker pod pod-738f9087-a976-11ea-978f-0242ac110018 container test-container:
STEP: delete the pod
Jun 8 10:54:46.269: INFO: Waiting for pod pod-738f9087-a976-11ea-978f-0242ac110018 to disappear
Jun 8 10:54:46.446: INFO: Pod pod-738f9087-a976-11ea-978f-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 10:54:46.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-9bs8r" for this suite.
Jun 8 10:54:52.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 10:54:52.491: INFO: namespace: e2e-tests-emptydir-9bs8r, resource: bindings, ignored listing per whitelist
Jun 8 10:54:52.566: INFO: namespace e2e-tests-emptydir-9bs8r deletion completed in 6.114890461s

• [SLOW TEST:13.982 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose
  should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 10:54:52.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jun 8 10:54:52.675: INFO: namespace e2e-tests-kubectl-bn76t
Jun 8 10:54:52.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bn76t'
Jun 8 10:54:55.120: INFO: stderr: ""
Jun 8 10:54:55.120: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jun 8 10:54:56.124: INFO: Selector matched 1 pods for map[app:redis]
Jun 8 10:54:56.124: INFO: Found 0 / 1
Jun 8 10:54:57.297: INFO: Selector matched 1 pods for map[app:redis]
Jun 8 10:54:57.297: INFO: Found 0 / 1
Jun 8 10:54:58.135: INFO: Selector matched 1 pods for map[app:redis]
Jun 8 10:54:58.135: INFO: Found 0 / 1
Jun 8 10:54:59.135: INFO: Selector matched 1 pods for map[app:redis]
Jun 8 10:54:59.135: INFO: Found 1 / 1
Jun 8 10:54:59.135: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jun 8 10:54:59.137: INFO: Selector matched 1 pods for map[app:redis]
Jun 8 10:54:59.137: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jun 8 10:54:59.137: INFO: wait on redis-master startup in e2e-tests-kubectl-bn76t
Jun 8 10:54:59.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-6dknt redis-master --namespace=e2e-tests-kubectl-bn76t'
Jun 8 10:54:59.239: INFO: stderr: ""
Jun 8 10:54:59.239: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 08 Jun 10:54:58.079 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 08 Jun 10:54:58.079 # Server started, Redis version 3.2.12\n1:M 08 Jun 10:54:58.079 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 08 Jun 10:54:58.079 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jun 8 10:54:59.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-bn76t'
Jun 8 10:54:59.428: INFO: stderr: ""
Jun 8 10:54:59.428: INFO: stdout: "service/rm2 exposed\n"
Jun 8 10:54:59.463: INFO: Service rm2 in namespace e2e-tests-kubectl-bn76t found.
STEP: exposing service
Jun 8 10:55:01.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-bn76t'
Jun 8 10:55:01.647: INFO: stderr: ""
Jun 8 10:55:01.647: INFO: stdout: "service/rm3 exposed\n"
Jun 8 10:55:01.649: INFO: Service rm3 in namespace e2e-tests-kubectl-bn76t found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 10:55:03.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bn76t" for this suite.
Jun 8 10:55:27.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 10:55:28.361: INFO: namespace: e2e-tests-kubectl-bn76t, resource: bindings, ignored listing per whitelist
Jun 8 10:55:28.370: INFO: namespace e2e-tests-kubectl-bn76t deletion completed in 24.712572209s

• [SLOW TEST:35.804 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 10:55:28.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jun 8 10:55:48.970: INFO: Container started at 2020-06-08 10:55:31 +0000 UTC, pod became ready at 2020-06-08 10:55:48 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 10:55:48.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-f68sj" for this suite.
Jun 8 10:56:11.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 10:56:11.034: INFO: namespace: e2e-tests-container-probe-f68sj, resource: bindings, ignored listing per whitelist
Jun 8 10:56:11.085: INFO: namespace e2e-tests-container-probe-f68sj deletion completed in 22.111373746s

• [SLOW TEST:42.715 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 10:56:11.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jun 8 10:56:11.192: INFO: Waiting up to 5m0s for pod "pod-aa04450d-a976-11ea-978f-0242ac110018" in namespace "e2e-tests-emptydir-tp2j9" to be "success or failure"
Jun 8 10:56:11.237: INFO: Pod "pod-aa04450d-a976-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 45.281625ms
Jun 8 10:56:13.241: INFO: Pod "pod-aa04450d-a976-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049036887s
Jun 8 10:56:15.246: INFO: Pod "pod-aa04450d-a976-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053647986s
STEP: Saw pod success
Jun 8 10:56:15.246: INFO: Pod "pod-aa04450d-a976-11ea-978f-0242ac110018" satisfied condition "success or failure"
Jun 8 10:56:15.248: INFO: Trying to get logs from node hunter-worker pod pod-aa04450d-a976-11ea-978f-0242ac110018 container test-container:
STEP: delete the pod
Jun 8 10:56:15.462: INFO: Waiting for pod pod-aa04450d-a976-11ea-978f-0242ac110018 to disappear
Jun 8 10:56:15.502: INFO: Pod pod-aa04450d-a976-11ea-978f-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 10:56:15.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-tp2j9" for this suite.
Jun 8 10:56:21.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 10:56:21.980: INFO: namespace: e2e-tests-emptydir-tp2j9, resource: bindings, ignored listing per whitelist
Jun 8 10:56:22.049: INFO: namespace e2e-tests-emptydir-tp2j9 deletion completed in 6.543434058s

• [SLOW TEST:10.964 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 10:56:22.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jun 8 10:56:22.420: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b0a6117a-a976-11ea-978f-0242ac110018" in namespace "e2e-tests-downward-api-lj5ck" to be "success or failure"
Jun 8 10:56:22.430: INFO: Pod "downwardapi-volume-b0a6117a-a976-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.00891ms
Jun 8 10:56:24.435: INFO: Pod "downwardapi-volume-b0a6117a-a976-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014536472s
Jun 8 10:56:26.438: INFO: Pod "downwardapi-volume-b0a6117a-a976-11ea-978f-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.01827546s
Jun 8 10:56:28.514: INFO: Pod "downwardapi-volume-b0a6117a-a976-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.09399076s
STEP: Saw pod success
Jun 8 10:56:28.514: INFO: Pod "downwardapi-volume-b0a6117a-a976-11ea-978f-0242ac110018" satisfied condition "success or failure"
Jun 8 10:56:28.517: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-b0a6117a-a976-11ea-978f-0242ac110018 container client-container:
STEP: delete the pod
Jun 8 10:56:28.611: INFO: Waiting for pod downwardapi-volume-b0a6117a-a976-11ea-978f-0242ac110018 to disappear
Jun 8 10:56:28.675: INFO: Pod downwardapi-volume-b0a6117a-a976-11ea-978f-0242ac110018 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 10:56:28.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-lj5ck" for this suite.
Jun 8 10:56:34.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 10:56:34.813: INFO: namespace: e2e-tests-downward-api-lj5ck, resource: bindings, ignored listing per whitelist
Jun 8 10:56:34.819: INFO: namespace e2e-tests-downward-api-lj5ck deletion completed in 6.140018825s

• [SLOW TEST:12.769 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 10:56:34.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 10:56:34.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-v24l7" for this suite.
Jun 8 10:56:56.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 10:56:57.026: INFO: namespace: e2e-tests-pods-v24l7, resource: bindings, ignored listing per whitelist
Jun 8 10:56:57.068: INFO: namespace e2e-tests-pods-v24l7 deletion completed in 22.109644165s

• [SLOW TEST:22.249 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 10:56:57.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jun 8 10:56:57.202: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c56ad5af-a976-11ea-978f-0242ac110018" in namespace "e2e-tests-projected-74hmm" to be "success or failure"
Jun 8 10:56:57.223: INFO: Pod "downwardapi-volume-c56ad5af-a976-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 21.31341ms
Jun 8 10:56:59.604: INFO: Pod "downwardapi-volume-c56ad5af-a976-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.40202853s
Jun 8 10:57:01.608: INFO: Pod "downwardapi-volume-c56ad5af-a976-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.406339963s
Jun 8 10:57:03.611: INFO: Pod "downwardapi-volume-c56ad5af-a976-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.409144529s
STEP: Saw pod success
Jun 8 10:57:03.611: INFO: Pod "downwardapi-volume-c56ad5af-a976-11ea-978f-0242ac110018" satisfied condition "success or failure"
Jun 8 10:57:03.613: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-c56ad5af-a976-11ea-978f-0242ac110018 container client-container:
STEP: delete the pod
Jun 8 10:57:03.714: INFO: Waiting for pod downwardapi-volume-c56ad5af-a976-11ea-978f-0242ac110018 to disappear
Jun 8 10:57:03.730: INFO: Pod downwardapi-volume-c56ad5af-a976-11ea-978f-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 10:57:03.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-74hmm" for this suite.
Jun 8 10:57:09.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 10:57:09.794: INFO: namespace: e2e-tests-projected-74hmm, resource: bindings, ignored listing per whitelist
Jun 8 10:57:09.834: INFO: namespace e2e-tests-projected-74hmm deletion completed in 6.099591904s

• [SLOW TEST:12.766 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 10:57:09.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-8nprl
Jun 8 10:57:14.165: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-8nprl
STEP: checking the pod's current state and verifying that restartCount is present
Jun 8 10:57:14.167: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:01:16.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-8nprl" for this suite.
Jun 8 11:01:22.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:01:22.180: INFO: namespace: e2e-tests-container-probe-8nprl, resource: bindings, ignored listing per whitelist
Jun 8 11:01:22.328: INFO: namespace e2e-tests-container-probe-8nprl deletion completed in 6.202170097s

• [SLOW TEST:252.495 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:01:22.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-zj4nf
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 8 11:01:22.419: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jun 8 11:01:44.606: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.160:8080/dial?request=hostName&protocol=udp&host=10.244.1.159&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-zj4nf PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 8 11:01:44.606: INFO: >>> kubeConfig: /root/.kube/config
I0608 11:01:44.648455 6 log.go:172] (0xc0019bc2c0) (0xc001d35860) Create stream
I0608 11:01:44.648509 6 log.go:172] (0xc0019bc2c0) (0xc001d35860) Stream added, broadcasting: 1
I0608 11:01:44.650381 6 log.go:172] (0xc0019bc2c0) Reply frame received for 1
I0608 11:01:44.650434 6 log.go:172] (0xc0019bc2c0) (0xc001d35900) Create stream
I0608 11:01:44.650450 6 log.go:172] (0xc0019bc2c0) (0xc001d35900) Stream added, broadcasting: 3
I0608 11:01:44.651549 6 log.go:172] (0xc0019bc2c0) Reply frame received for 3
I0608 11:01:44.651596 6 log.go:172] (0xc0019bc2c0) (0xc001d359a0) Create stream
I0608 11:01:44.651796 6 log.go:172] (0xc0019bc2c0) (0xc001d359a0) Stream added, broadcasting: 5
I0608 11:01:44.652749 6 log.go:172] (0xc0019bc2c0) Reply frame received for 5
I0608 11:01:44.717071 6 log.go:172] (0xc0019bc2c0) Data frame received for 3
I0608 11:01:44.717270 6 log.go:172] (0xc001d35900) (3) Data frame handling
I0608 11:01:44.717306 6 log.go:172] (0xc001d35900) (3) Data frame sent
I0608 11:01:44.718184 6 log.go:172] (0xc0019bc2c0) Data frame received for 5
I0608 11:01:44.718215 6 log.go:172] (0xc001d359a0) (5) Data frame handling
I0608 11:01:44.718247 6 log.go:172] (0xc0019bc2c0) Data frame received for 3
I0608 11:01:44.718262 6 log.go:172] (0xc001d35900) (3) Data frame handling
I0608 11:01:44.719947 6 log.go:172] (0xc0019bc2c0) Data frame received for 1
I0608 11:01:44.719985 6 log.go:172] (0xc001d35860) (1) Data frame handling
I0608 11:01:44.720020 6 log.go:172] (0xc001d35860) (1) Data frame sent
I0608 11:01:44.720055 6 log.go:172] (0xc0019bc2c0) (0xc001d35860) Stream removed, broadcasting: 1
I0608 11:01:44.720157 6 log.go:172] (0xc0019bc2c0) Go away received
I0608 11:01:44.720265 6 log.go:172] (0xc0019bc2c0) (0xc001d35860) Stream removed, broadcasting: 1
I0608 11:01:44.720288 6 log.go:172] (0xc0019bc2c0) (0xc001d35900) Stream removed, broadcasting: 3
I0608 11:01:44.720300 6 log.go:172] (0xc0019bc2c0) (0xc001d359a0) Stream removed, broadcasting: 5
Jun 8 11:01:44.720: INFO: Waiting for endpoints: map[]
Jun 8 11:01:44.723: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.160:8080/dial?request=hostName&protocol=udp&host=10.244.2.73&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-zj4nf PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 8 11:01:44.723: INFO: >>> kubeConfig: /root/.kube/config
I0608 11:01:44.754731 6 log.go:172] (0xc000ba82c0) (0xc000f0e460) Create stream
I0608 11:01:44.754760 6 log.go:172] (0xc000ba82c0) (0xc000f0e460) Stream added, broadcasting: 1
I0608 11:01:44.756563 6 log.go:172] (0xc000ba82c0) Reply frame received for 1
I0608 11:01:44.756615 6 log.go:172] (0xc000ba82c0) (0xc000f0e500) Create stream
I0608 11:01:44.756632 6 log.go:172] (0xc000ba82c0) (0xc000f0e500) Stream added, broadcasting: 3
I0608 11:01:44.757815 6 log.go:172] (0xc000ba82c0) Reply frame received for 3
I0608 11:01:44.757878 6 log.go:172] (0xc000ba82c0) (0xc00054c820) Create stream
I0608 11:01:44.757906 6 log.go:172] (0xc000ba82c0) (0xc00054c820) Stream added, broadcasting: 5
I0608 11:01:44.758828 6 log.go:172] (0xc000ba82c0) Reply frame received for 5
I0608 11:01:44.853544 6 log.go:172] (0xc000ba82c0) Data frame received for 3
I0608 11:01:44.853581 6 log.go:172] (0xc000f0e500) (3) Data frame handling
I0608 11:01:44.853604 6 log.go:172] (0xc000f0e500) (3) Data frame sent
I0608 11:01:44.854686 6 log.go:172] (0xc000ba82c0) Data frame received for 5
I0608 11:01:44.854718 6 log.go:172] (0xc00054c820) (5) Data frame handling
I0608 11:01:44.854755 6 log.go:172] (0xc000ba82c0) Data frame received for 3
I0608 11:01:44.854794 6 log.go:172] (0xc000f0e500) (3) Data frame handling
I0608 11:01:44.856530 6 log.go:172] (0xc000ba82c0) Data frame received for 1
I0608 11:01:44.856554 6 log.go:172] (0xc000f0e460) (1) Data frame handling
I0608 11:01:44.856567 6 log.go:172] (0xc000f0e460) (1) Data frame sent
I0608 11:01:44.856588 6 log.go:172] (0xc000ba82c0) (0xc000f0e460) Stream removed, broadcasting: 1
I0608 11:01:44.856613 6 log.go:172] (0xc000ba82c0) Go away received
I0608 11:01:44.856707 6 log.go:172] (0xc000ba82c0) (0xc000f0e460) Stream removed, broadcasting: 1
I0608 11:01:44.856733 6 log.go:172] (0xc000ba82c0) (0xc000f0e500) Stream removed, broadcasting: 3
I0608 11:01:44.856745 6 log.go:172] (0xc000ba82c0) (0xc00054c820) Stream removed, broadcasting: 5
Jun 8 11:01:44.856: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:01:44.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-zj4nf" for this suite.
Jun 8 11:02:08.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:02:08.962: INFO: namespace: e2e-tests-pod-network-test-zj4nf, resource: bindings, ignored listing per whitelist
Jun 8 11:02:09.048: INFO: namespace e2e-tests-pod-network-test-zj4nf deletion completed in 24.187163857s
• [SLOW TEST:46.720 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:02:09.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-7f5f59d4-a977-11ea-978f-0242ac110018
STEP: Creating a pod to test consume secrets
Jun 8 11:02:09.166: INFO: Waiting up to 5m0s for pod "pod-secrets-7f600dda-a977-11ea-978f-0242ac110018" in namespace "e2e-tests-secrets-nfw5k" to be "success or failure"
Jun 8 11:02:09.195: INFO: Pod "pod-secrets-7f600dda-a977-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 28.965507ms
Jun 8 11:02:11.198: INFO: Pod "pod-secrets-7f600dda-a977-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032134275s
Jun 8 11:02:13.202: INFO: Pod "pod-secrets-7f600dda-a977-11ea-978f-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.036384682s
Jun 8 11:02:15.208: INFO: Pod "pod-secrets-7f600dda-a977-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.041658023s
STEP: Saw pod success
Jun 8 11:02:15.208: INFO: Pod "pod-secrets-7f600dda-a977-11ea-978f-0242ac110018" satisfied condition "success or failure"
Jun 8 11:02:15.212: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-7f600dda-a977-11ea-978f-0242ac110018 container secret-volume-test:
STEP: delete the pod
Jun 8 11:02:15.277: INFO: Waiting for pod pod-secrets-7f600dda-a977-11ea-978f-0242ac110018 to disappear
Jun 8 11:02:15.298: INFO: Pod pod-secrets-7f600dda-a977-11ea-978f-0242ac110018 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:02:15.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-nfw5k" for this suite.
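The secret-volume pod the test above creates mounts a Secret with an explicit defaultMode. A minimal sketch of an equivalent manifest, built as a plain dict; all names (secret name, pod name, image, mount path) are illustrative assumptions, not taken from the log:

```python
def secret_volume_pod(secret_name, default_mode=0o644):
    """Build a pod manifest that mounts `secret_name` as a volume with
    the given defaultMode (file permission bits for the projected keys)."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "secret-volume-demo"},  # illustrative name
        "spec": {
            "restartPolicy": "Never",
            "volumes": [{
                "name": "secret-volume",
                # defaultMode is an integer; 0o400 serializes as 256 in JSON
                "secret": {"secretName": secret_name,
                           "defaultMode": default_mode},
            }],
            "containers": [{
                "name": "secret-volume-test",
                "image": "busybox",  # illustrative image
                "command": ["sh", "-c", "ls -l /etc/secret-volume"],
                "volumeMounts": [{"name": "secret-volume",
                                  "mountPath": "/etc/secret-volume",
                                  "readOnly": True}],
            }],
        },
    }
```

The e2e test then waits for the pod to reach Succeeded and inspects the container's log output, as the timestamps above show.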
Jun 8 11:02:21.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:02:21.390: INFO: namespace: e2e-tests-secrets-nfw5k, resource: bindings, ignored listing per whitelist
Jun 8 11:02:21.456: INFO: namespace e2e-tests-secrets-nfw5k deletion completed in 6.154534728s
• [SLOW TEST:12.408 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:02:21.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-86c7f136-a977-11ea-978f-0242ac110018
STEP: Creating a pod to test consume secrets
Jun 8 11:02:21.589: INFO: Waiting up to 5m0s for pod "pod-secrets-86ca448d-a977-11ea-978f-0242ac110018" in namespace "e2e-tests-secrets-kmf24" to be "success or failure"
Jun 8 11:02:21.594: INFO: Pod "pod-secrets-86ca448d-a977-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.723946ms
Jun 8 11:02:23.697: INFO: Pod "pod-secrets-86ca448d-a977-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108308907s
Jun 8 11:02:25.702: INFO: Pod "pod-secrets-86ca448d-a977-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113187915s
Jun 8 11:02:27.706: INFO: Pod "pod-secrets-86ca448d-a977-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.11694983s
STEP: Saw pod success
Jun 8 11:02:27.706: INFO: Pod "pod-secrets-86ca448d-a977-11ea-978f-0242ac110018" satisfied condition "success or failure"
Jun 8 11:02:27.709: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-86ca448d-a977-11ea-978f-0242ac110018 container secret-volume-test:
STEP: delete the pod
Jun 8 11:02:27.783: INFO: Waiting for pod pod-secrets-86ca448d-a977-11ea-978f-0242ac110018 to disappear
Jun 8 11:02:27.791: INFO: Pod pod-secrets-86ca448d-a977-11ea-978f-0242ac110018 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:02:27.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-kmf24" for this suite.
Jun 8 11:02:33.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:02:33.848: INFO: namespace: e2e-tests-secrets-kmf24, resource: bindings, ignored listing per whitelist
Jun 8 11:02:33.891: INFO: namespace e2e-tests-secrets-kmf24 deletion completed in 6.096526814s
• [SLOW TEST:12.433 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:02:33.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jun 8 11:02:34.035: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jun 8 11:02:34.053: INFO: Number of nodes with available pods: 0
Jun 8 11:02:34.053: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jun 8 11:02:34.093: INFO: Number of nodes with available pods: 0
Jun 8 11:02:34.093: INFO: Node hunter-worker is running more than one daemon pod
Jun 8 11:02:35.098: INFO: Number of nodes with available pods: 0
Jun 8 11:02:35.098: INFO: Node hunter-worker is running more than one daemon pod
Jun 8 11:02:36.097: INFO: Number of nodes with available pods: 0
Jun 8 11:02:36.097: INFO: Node hunter-worker is running more than one daemon pod
Jun 8 11:02:37.098: INFO: Number of nodes with available pods: 0
Jun 8 11:02:37.098: INFO: Node hunter-worker is running more than one daemon pod
Jun 8 11:02:38.098: INFO: Number of nodes with available pods: 1
Jun 8 11:02:38.098: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jun 8 11:02:38.171: INFO: Number of nodes with available pods: 1
Jun 8 11:02:38.171: INFO: Number of running nodes: 0, number of available pods: 1
Jun 8 11:02:39.176: INFO: Number of nodes with available pods: 0
Jun 8 11:02:39.176: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jun 8 11:02:39.191: INFO: Number of nodes with available pods: 0
Jun 8 11:02:39.191: INFO: Node hunter-worker is running more than one daemon pod
Jun 8 11:02:40.196: INFO: Number of nodes with available pods: 0
Jun 8 11:02:40.196: INFO: Node hunter-worker is running more than one daemon pod
Jun 8 11:02:41.195: INFO: Number of nodes with available pods: 0
Jun 8 11:02:41.195: INFO: Node hunter-worker is running more than one daemon pod
Jun 8 11:02:42.210: INFO: Number of nodes with available pods: 0
Jun 8 11:02:42.210: INFO: Node hunter-worker is running more than one daemon pod
Jun 8 11:02:43.196: INFO: Number of nodes with available pods: 0
Jun 8 11:02:43.196: INFO: Node hunter-worker is running more than one daemon pod
Jun 8 11:02:44.196: INFO: Number of nodes with available pods: 0
Jun 8 11:02:44.196: INFO: Node hunter-worker is running more than one daemon pod
Jun 8 11:02:45.196: INFO: Number of nodes with available pods: 0
Jun 8 11:02:45.196: INFO: Node hunter-worker is running more than one daemon pod
Jun 8 11:02:46.195: INFO: Number of nodes with available pods: 0
Jun 8 11:02:46.195: INFO: Node hunter-worker is running more than one daemon pod
Jun 8 11:02:47.196: INFO: Number of nodes with available pods: 0
Jun 8 11:02:47.196: INFO: Node hunter-worker is running more than one daemon pod
Jun 8 11:02:48.196: INFO: Number of nodes with available pods: 0
Jun 8 11:02:48.196: INFO: Node hunter-worker is running more than one daemon pod
Jun 8 11:02:49.196: INFO: Number of nodes with available pods: 0
Jun 8 11:02:49.196: INFO: Node hunter-worker is running more than one daemon pod
Jun 8 11:02:50.196: INFO: Number of nodes with available pods: 0
Jun 8 11:02:50.196: INFO: Node hunter-worker is running more than one daemon pod
Jun 8 11:02:51.195: INFO: Number of nodes with available pods: 0
Jun 8 11:02:51.195: INFO: Node hunter-worker is running more than one daemon pod
Jun 8 11:02:52.207: INFO: Number of nodes with available pods: 0
Jun 8 11:02:52.207: INFO: Node hunter-worker is running more than one daemon pod
Jun 8 11:02:53.195: INFO: Number of nodes with available pods: 0
Jun 8 11:02:53.195: INFO: Node hunter-worker is running more than one daemon pod
Jun 8 11:02:54.195: INFO: Number of nodes with available pods: 0
Jun 8 11:02:54.195: INFO: Node hunter-worker is running more than one daemon pod
Jun 8 11:02:55.196: INFO: Number of nodes with available pods: 1
Jun 8 11:02:55.196: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-klfhc, will wait for the garbage collector to delete the pods
Jun 8 11:02:55.263: INFO: Deleting DaemonSet.extensions daemon-set took: 6.424811ms
Jun 8 11:02:55.363: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.205229ms
Jun 8 11:03:01.266: INFO: Number of nodes with available pods: 0
Jun 8 11:03:01.266: INFO: Number of running nodes: 0, number of available pods: 0
Jun 8 11:03:01.272: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-klfhc/daemonsets","resourceVersion":"14857945"},"items":null}
Jun 8 11:03:01.274: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-klfhc/pods","resourceVersion":"14857945"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:03:01.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-klfhc" for this suite.
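The DaemonSet test above flips a node label between "blue" and "green" and waits for daemon pods to be scheduled or unscheduled accordingly: a daemon pod runs on a node only while the node's labels satisfy the DaemonSet's nodeSelector. A minimal sketch of that matching rule (label keys/values here are illustrative, not the ones the test used):

```python
def selector_matches(node_labels, node_selector):
    """A daemon pod is eligible for a node only if every nodeSelector
    key/value pair is present among the node's labels."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())


def nodes_with_daemon_pod(nodes, node_selector):
    """Mirror of the test's "Number of nodes with available pods" count:
    how many nodes currently match the DaemonSet's selector."""
    return sum(1 for labels in nodes if selector_matches(labels, node_selector))
```

With selector `{"color": "blue"}`, relabeling a node from blue to green drops the match count back to zero, which is exactly the transition the polling log above records.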
Jun 8 11:03:07.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:03:07.394: INFO: namespace: e2e-tests-daemonsets-klfhc, resource: bindings, ignored listing per whitelist
Jun 8 11:03:07.400: INFO: namespace e2e-tests-daemonsets-klfhc deletion completed in 6.092463566s
• [SLOW TEST:33.509 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:03:07.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-a22df123-a977-11ea-978f-0242ac110018
STEP: Creating secret with name s-test-opt-upd-a22df17b-a977-11ea-978f-0242ac110018
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-a22df123-a977-11ea-978f-0242ac110018
STEP: Updating secret s-test-opt-upd-a22df17b-a977-11ea-978f-0242ac110018
STEP: Creating secret with name s-test-opt-create-a22df19e-a977-11ea-978f-0242ac110018
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:03:17.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-tzlph" for this suite.
Jun 8 11:03:41.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:03:41.949: INFO: namespace: e2e-tests-secrets-tzlph, resource: bindings, ignored listing per whitelist
Jun 8 11:03:41.975: INFO: namespace e2e-tests-secrets-tzlph deletion completed in 24.102178295s
• [SLOW TEST:34.575 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-auth] ServiceAccounts
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:03:41.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Jun 8 11:03:42.645: INFO: created pod pod-service-account-defaultsa
Jun 8 11:03:42.645: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jun 8 11:03:42.795: INFO: created pod pod-service-account-mountsa
Jun 8 11:03:42.795: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jun 8 11:03:42.933: INFO: created pod pod-service-account-nomountsa
Jun 8 11:03:42.933: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jun 8 11:03:42.954: INFO: created pod pod-service-account-defaultsa-mountspec
Jun 8 11:03:42.954: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jun 8 11:03:43.007: INFO: created pod pod-service-account-mountsa-mountspec
Jun 8 11:03:43.007: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jun 8 11:03:43.076: INFO: created pod pod-service-account-nomountsa-mountspec
Jun 8 11:03:43.076: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jun 8 11:03:43.097: INFO: created pod pod-service-account-defaultsa-nomountspec
Jun 8 11:03:43.097: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jun 8 11:03:43.139: INFO: created pod pod-service-account-mountsa-nomountspec
Jun 8 11:03:43.139: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jun 8 11:03:43.507: INFO: created pod pod-service-account-nomountsa-nomountspec
Jun 8 11:03:43.507: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:03:43.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-995bk" for this suite.
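The pod matrix logged above (defaultsa/mountsa/nomountsa crossed with mountspec/nomountspec) exercises the precedence rule for `automountServiceAccountToken`: the pod spec's setting, when present, overrides the service account's, and the default when neither is set is to mount the token. A minimal sketch of that rule:

```python
def token_volume_mounted(sa_automount=None, pod_automount=None):
    """Decide whether the service-account token volume is mounted.
    `None` means the field is unset at that level.

    Precedence: pod.spec.automountServiceAccountToken wins over the
    ServiceAccount's automountServiceAccountToken; unset everywhere
    defaults to mounting the token."""
    if pod_automount is not None:
        return pod_automount
    if sa_automount is not None:
        return sa_automount
    return True
```

Every `service account token volume mount: true/false` line in the log above is consistent with this function, e.g. `nomountsa-mountspec` is (sa=False, pod=True) and mounts the token.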
Jun 8 11:04:13.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:04:13.725: INFO: namespace: e2e-tests-svcaccounts-995bk, resource: bindings, ignored listing per whitelist
Jun 8 11:04:13.759: INFO: namespace e2e-tests-svcaccounts-995bk deletion completed in 30.152258928s
• [SLOW TEST:31.784 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Pods
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:04:13.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jun 8 11:04:18.682: INFO: Successfully updated pod "pod-update-c9b27b40-a977-11ea-978f-0242ac110018"
STEP: verifying the updated pod is in kubernetes
Jun 8 11:04:18.696: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:04:18.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-z6ql9" for this suite.
Jun 8 11:04:40.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:04:40.832: INFO: namespace: e2e-tests-pods-z6ql9, resource: bindings, ignored listing per whitelist
Jun 8 11:04:40.864: INFO: namespace e2e-tests-pods-z6ql9 deletion completed in 22.165012812s
• [SLOW TEST:27.105 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Services
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:04:40.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-829tp
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-829tp to expose endpoints map[]
Jun 8 11:04:41.072: INFO: Get endpoints failed (17.785228ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jun 8 11:04:42.077: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-829tp exposes endpoints map[] (1.022421957s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-829tp
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-829tp to expose endpoints map[pod1:[80]]
Jun 8 11:04:46.420: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-829tp exposes endpoints map[pod1:[80]] (4.336623185s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-829tp
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-829tp to expose endpoints map[pod1:[80] pod2:[80]]
Jun 8 11:04:51.001: INFO: Unexpected endpoints: found map[da88bddb-a977-11ea-99e8-0242ac110002:[80]], expected map[pod1:[80] pod2:[80]] (4.570233411s elapsed, will retry)
Jun 8 11:04:52.011: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-829tp exposes endpoints map[pod1:[80] pod2:[80]] (5.579782231s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-829tp
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-829tp to expose endpoints map[pod2:[80]]
Jun 8 11:04:53.097: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-829tp exposes endpoints map[pod2:[80]] (1.082869065s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-829tp
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-829tp to expose endpoints map[]
Jun 8 11:04:54.226: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-829tp exposes endpoints map[] (1.124343824s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:04:54.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-829tp" for this suite.
Jun 8 11:05:16.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:05:16.402: INFO: namespace: e2e-tests-services-829tp, resource: bindings, ignored listing per whitelist
Jun 8 11:05:16.453: INFO: namespace e2e-tests-services-829tp deletion completed in 22.091143509s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:35.588 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:05:16.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jun 8 11:05:16.547: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ef11a567-a977-11ea-978f-0242ac110018" in namespace "e2e-tests-projected-f9h6l" to be "success or failure"
Jun 8 11:05:16.580: INFO: Pod "downwardapi-volume-ef11a567-a977-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 33.314041ms
Jun 8 11:05:18.585: INFO: Pod "downwardapi-volume-ef11a567-a977-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037751785s
Jun 8 11:05:20.589: INFO: Pod "downwardapi-volume-ef11a567-a977-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04261496s
STEP: Saw pod success
Jun 8 11:05:20.589: INFO: Pod "downwardapi-volume-ef11a567-a977-11ea-978f-0242ac110018" satisfied condition "success or failure"
Jun 8 11:05:20.593: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-ef11a567-a977-11ea-978f-0242ac110018 container client-container:
STEP: delete the pod
Jun 8 11:05:20.655: INFO: Waiting for pod downwardapi-volume-ef11a567-a977-11ea-978f-0242ac110018 to disappear
Jun 8 11:05:20.755: INFO: Pod downwardapi-volume-ef11a567-a977-11ea-978f-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:05:20.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-f9h6l" for this suite.
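The downward API test above projects the container's memory limit into a file via a `resourceFieldRef`. A minimal sketch of the manifest shape involved; the pod name, image, mount path, and 64Mi limit are illustrative assumptions, not values from the log:

```python
def downward_api_memory_pod():
    """Pod whose volume exposes the container's own limits.memory
    as a file the container can read at /etc/podinfo/memory_limit."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "downwardapi-volume-demo"},  # illustrative
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "client-container",
                "image": "busybox",  # illustrative image
                "command": ["sh", "-c", "cat /etc/podinfo/memory_limit"],
                "resources": {"limits": {"memory": "64Mi"}},
                "volumeMounts": [{"name": "podinfo",
                                  "mountPath": "/etc/podinfo"}],
            }],
            "volumes": [{
                "name": "podinfo",
                "downwardAPI": {"items": [{
                    "path": "memory_limit",
                    # resourceFieldRef must name the container whose
                    # resource value is being projected
                    "resourceFieldRef": {
                        "containerName": "client-container",
                        "resource": "limits.memory",
                    },
                }]},
            }],
        },
    }
```

The e2e check then compares the container's log output against the expected limit, matching the "Saw pod success" flow above.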
Jun 8 11:05:26.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:05:27.072: INFO: namespace: e2e-tests-projected-f9h6l, resource: bindings, ignored listing per whitelist
Jun 8 11:05:27.078: INFO: namespace e2e-tests-projected-f9h6l deletion completed in 6.319336803s
• [SLOW TEST:10.625 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod
  should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:05:27.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jun 8 11:05:27.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-cbtfm'
Jun 8 11:05:31.914: INFO: stderr: ""
Jun 8 11:05:31.914: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Jun 8 11:05:31.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-cbtfm'
Jun 8 11:05:41.741: INFO: stderr: ""
Jun 8 11:05:41.741: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:05:41.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-cbtfm" for this suite.
Jun 8 11:05:47.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:05:47.822: INFO: namespace: e2e-tests-kubectl-cbtfm, resource: bindings, ignored listing per whitelist
Jun 8 11:05:47.870: INFO: namespace e2e-tests-kubectl-cbtfm deletion completed in 6.09956297s
• [SLOW TEST:20.792 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:05:47.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-01e6a71d-a978-11ea-978f-0242ac110018
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-01e6a71d-a978-11ea-978f-0242ac110018
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:05:54.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-sr52h" for this suite.
Jun 8 11:06:16.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:06:16.375: INFO: namespace: e2e-tests-configmap-sr52h, resource: bindings, ignored listing per whitelist
Jun 8 11:06:16.441: INFO: namespace e2e-tests-configmap-sr52h deletion completed in 22.100833429s
• [SLOW TEST:28.571 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc
  should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:06:16.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jun 8 11:06:16.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-ktrgn'
Jun 8 11:06:16.634: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jun 8 11:06:16.634: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jun 8 11:06:18.676: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-qz5zg]
Jun 8 11:06:18.676: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-qz5zg" in namespace "e2e-tests-kubectl-ktrgn" to be "running and ready"
Jun 8 11:06:18.679: INFO: Pod "e2e-test-nginx-rc-qz5zg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.974354ms
Jun 8 11:06:20.791: INFO: Pod "e2e-test-nginx-rc-qz5zg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115361992s
Jun 8 11:06:22.796: INFO: Pod "e2e-test-nginx-rc-qz5zg": Phase="Running", Reason="", readiness=true. Elapsed: 4.119867629s
Jun 8 11:06:22.796: INFO: Pod "e2e-test-nginx-rc-qz5zg" satisfied condition "running and ready"
Jun 8 11:06:22.796: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-qz5zg]
Jun 8 11:06:22.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-ktrgn'
Jun 8 11:06:23.071: INFO: stderr: ""
Jun 8 11:06:23.071: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Jun 8 11:06:23.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-ktrgn'
Jun 8 11:06:23.450: INFO: stderr: ""
Jun 8 11:06:23.450: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:06:23.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ktrgn" for this suite.
Jun 8 11:06:45.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:06:45.648: INFO: namespace: e2e-tests-kubectl-ktrgn, resource: bindings, ignored listing per whitelist
Jun 8 11:06:45.717: INFO: namespace e2e-tests-kubectl-ktrgn deletion completed in 22.169340721s
• [SLOW TEST:29.275 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:06:45.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 8 11:06:45.859: INFO: Waiting up to 5m0s for pod "downwardapi-volume-244ecad7-a978-11ea-978f-0242ac110018" in namespace "e2e-tests-projected-28dj9" to be "success or failure" Jun 8 11:06:45.861: INFO: Pod "downwardapi-volume-244ecad7-a978-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22927ms Jun 8 11:06:47.875: INFO: Pod "downwardapi-volume-244ecad7-a978-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016148035s Jun 8 11:06:49.878: INFO: Pod "downwardapi-volume-244ecad7-a978-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019624134s STEP: Saw pod success Jun 8 11:06:49.878: INFO: Pod "downwardapi-volume-244ecad7-a978-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 11:06:49.881: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-244ecad7-a978-11ea-978f-0242ac110018 container client-container: STEP: delete the pod Jun 8 11:06:49.897: INFO: Waiting for pod downwardapi-volume-244ecad7-a978-11ea-978f-0242ac110018 to disappear Jun 8 11:06:49.920: INFO: Pod downwardapi-volume-244ecad7-a978-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:06:49.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-28dj9" for this suite. Jun 8 11:06:56.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:06:56.198: INFO: namespace: e2e-tests-projected-28dj9, resource: bindings, ignored listing per whitelist Jun 8 11:06:56.211: INFO: namespace e2e-tests-projected-28dj9 deletion completed in 6.28666907s • [SLOW TEST:10.494 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes 
client Jun 8 11:06:56.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy Jun 8 11:06:56.439: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix053413521/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:06:56.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-dc56l" for this suite. Jun 8 11:07:02.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:07:02.791: INFO: namespace: e2e-tests-kubectl-dc56l, resource: bindings, ignored listing per whitelist Jun 8 11:07:02.830: INFO: namespace e2e-tests-kubectl-dc56l deletion completed in 6.180990998s • [SLOW TEST:6.619 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:07:02.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Jun 8 11:07:02.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-9frxp' Jun 8 11:07:03.223: INFO: stderr: "" Jun 8 11:07:03.223: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jun 8 11:07:04.227: INFO: Selector matched 1 pods for map[app:redis] Jun 8 11:07:04.227: INFO: Found 0 / 1 Jun 8 11:07:05.227: INFO: Selector matched 1 pods for map[app:redis] Jun 8 11:07:05.227: INFO: Found 0 / 1 Jun 8 11:07:06.228: INFO: Selector matched 1 pods for map[app:redis] Jun 8 11:07:06.228: INFO: Found 0 / 1 Jun 8 11:07:07.227: INFO: Selector matched 1 pods for map[app:redis] Jun 8 11:07:07.227: INFO: Found 1 / 1 Jun 8 11:07:07.227: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jun 8 11:07:07.229: INFO: Selector matched 1 pods for map[app:redis] Jun 8 11:07:07.229: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
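The "Selector matched 1 pods ... Found 0 / 1 ... Found 1 / 1 ... WaitFor completed with timeout 5m0s" lines above show the framework polling pods matched by a label selector until enough of them are Running. A rough Python sketch of that loop, under the assumption of dict-shaped pods; `wait_for_pods` and `fake_list` are hypothetical names for illustration only:

```python
import time

def wait_for_pods(list_pods, selector, want, timeout=300.0, interval=0.01):
    """Poll pods whose labels contain `selector` until `want` of them
    report phase Running, or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        matched = [p for p in list_pods()
                   if selector.items() <= p["labels"].items()]
        running = sum(1 for p in matched if p["phase"] == "Running")
        if running >= want:
            return True
        time.sleep(interval)
    return False

# Fake lister: the pod starts Pending and flips to Running after a few
# polls, like "Found 0 / 1 ... Found 1 / 1" in the log above.
state = {"polls": 0}
def fake_list():
    state["polls"] += 1
    phase = "Running" if state["polls"] > 3 else "Pending"
    return [{"labels": {"app": "redis"}, "phase": phase}]

print(wait_for_pods(fake_list, {"app": "redis"}, want=1, timeout=5))  # True
```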
Jun 8 11:07:07.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-kr8vp --namespace=e2e-tests-kubectl-9frxp -p {"metadata":{"annotations":{"x":"y"}}}' Jun 8 11:07:07.345: INFO: stderr: "" Jun 8 11:07:07.345: INFO: stdout: "pod/redis-master-kr8vp patched\n" STEP: checking annotations Jun 8 11:07:07.403: INFO: Selector matched 1 pods for map[app:redis] Jun 8 11:07:07.403: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:07:07.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-9frxp" for this suite. Jun 8 11:07:29.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:07:29.513: INFO: namespace: e2e-tests-kubectl-9frxp, resource: bindings, ignored listing per whitelist Jun 8 11:07:29.515: INFO: namespace e2e-tests-kubectl-9frxp deletion completed in 22.108250739s • [SLOW TEST:26.685 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client Jun 8 11:07:29.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jun 8 11:07:29.763: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-jldc8,SelfLink:/api/v1/namespaces/e2e-tests-watch-jldc8/configmaps/e2e-watch-test-watch-closed,UID:3e73c5e9-a978-11ea-99e8-0242ac110002,ResourceVersion:14858874,Generation:0,CreationTimestamp:2020-06-08 11:07:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 8 11:07:29.763: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-jldc8,SelfLink:/api/v1/namespaces/e2e-tests-watch-jldc8/configmaps/e2e-watch-test-watch-closed,UID:3e73c5e9-a978-11ea-99e8-0242ac110002,ResourceVersion:14858875,Generation:0,CreationTimestamp:2020-06-08 11:07:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a 
second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jun 8 11:07:29.920: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-jldc8,SelfLink:/api/v1/namespaces/e2e-tests-watch-jldc8/configmaps/e2e-watch-test-watch-closed,UID:3e73c5e9-a978-11ea-99e8-0242ac110002,ResourceVersion:14858876,Generation:0,CreationTimestamp:2020-06-08 11:07:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 8 11:07:29.920: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-jldc8,SelfLink:/api/v1/namespaces/e2e-tests-watch-jldc8/configmaps/e2e-watch-test-watch-closed,UID:3e73c5e9-a978-11ea-99e8-0242ac110002,ResourceVersion:14858877,Generation:0,CreationTimestamp:2020-06-08 11:07:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:07:29.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-jldc8" for this suite. 
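The Watchers test above closes its first watch after two events (resourceVersions 14858874 and 14858875), mutates the ConfigMap while no watch is open, then opens a new watch from the last observed resourceVersion and expects exactly the missed MODIFIED (14858876) and DELETED (14858877) events. The contract can be sketched as a filter over an ordered event log; this is an illustrative model, not the apiserver's implementation:

```python
def watch_from(events, resource_version):
    """Replay events strictly newer than `resource_version` — the
    guarantee the test verifies: a watch restarted at the last observed
    RV sees exactly the changes made while the first watch was closed."""
    return [e for e in events if e["rv"] > resource_version]

log = [
    {"rv": 14858874, "type": "ADDED"},
    {"rv": 14858875, "type": "MODIFIED"},  # first watch closes here
    {"rv": 14858876, "type": "MODIFIED"},  # made while the watch was closed
    {"rv": 14858877, "type": "DELETED"},
]
resumed = watch_from(log, 14858875)
print([e["type"] for e in resumed])  # ['MODIFIED', 'DELETED']
```

In the real API this only holds while the requested resourceVersion is still within the apiserver's retained event history; otherwise the watch fails and the client must re-list.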
Jun 8 11:07:35.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:07:36.029: INFO: namespace: e2e-tests-watch-jldc8, resource: bindings, ignored listing per whitelist Jun 8 11:07:36.050: INFO: namespace e2e-tests-watch-jldc8 deletion completed in 6.122184782s • [SLOW TEST:6.535 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:07:36.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args Jun 8 11:07:36.149: INFO: Waiting up to 5m0s for pod "var-expansion-42486143-a978-11ea-978f-0242ac110018" in namespace "e2e-tests-var-expansion-hf9nj" to be "success or failure" Jun 8 11:07:36.166: INFO: Pod "var-expansion-42486143-a978-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.480464ms Jun 8 11:07:38.170: INFO: Pod "var-expansion-42486143-a978-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020760735s Jun 8 11:07:40.451: INFO: Pod "var-expansion-42486143-a978-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.30142462s Jun 8 11:07:42.456: INFO: Pod "var-expansion-42486143-a978-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.306130804s STEP: Saw pod success Jun 8 11:07:42.456: INFO: Pod "var-expansion-42486143-a978-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 11:07:42.459: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-42486143-a978-11ea-978f-0242ac110018 container dapi-container: STEP: delete the pod Jun 8 11:07:42.485: INFO: Waiting for pod var-expansion-42486143-a978-11ea-978f-0242ac110018 to disappear Jun 8 11:07:42.488: INFO: Pod var-expansion-42486143-a978-11ea-978f-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:07:42.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-hf9nj" for this suite. 
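The Variable Expansion test above verifies that `$(VAR_NAME)` references in a container's args are substituted from the pod's environment. A minimal sketch of that substitution rule follows; it models only the happy path (unknown variables are left verbatim, per Kubernetes' documented behavior) and omits the `$$` escape sequence:

```python
import re

def expand(arg, env):
    """Expand $(VAR) references in a container arg from the pod's env.
    Unresolvable references are left as-is, matching the documented
    Kubernetes rule for dependent variable expansion."""
    def sub(match):
        name = match.group(1)
        return env.get(name, match.group(0))
    return re.sub(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)", sub, arg)

print(expand("hello $(MESSAGE)", {"MESSAGE": "from the pod"}))  # hello from the pod
print(expand("unset: $(MISSING)", {}))                          # unset: $(MISSING)
```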
Jun 8 11:07:48.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:07:48.604: INFO: namespace: e2e-tests-var-expansion-hf9nj, resource: bindings, ignored listing per whitelist Jun 8 11:07:48.629: INFO: namespace e2e-tests-var-expansion-hf9nj deletion completed in 6.137072886s • [SLOW TEST:12.578 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:07:48.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-49cc0d3c-a978-11ea-978f-0242ac110018 STEP: Creating a pod to test consume secrets Jun 8 11:07:48.762: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-49ccbfbb-a978-11ea-978f-0242ac110018" in namespace "e2e-tests-projected-ttdpw" to be "success or failure" Jun 8 11:07:48.776: INFO: Pod "pod-projected-secrets-49ccbfbb-a978-11ea-978f-0242ac110018": Phase="Pending", Reason="", 
readiness=false. Elapsed: 13.347345ms Jun 8 11:07:50.779: INFO: Pod "pod-projected-secrets-49ccbfbb-a978-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016956443s Jun 8 11:07:52.784: INFO: Pod "pod-projected-secrets-49ccbfbb-a978-11ea-978f-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.021659844s Jun 8 11:07:54.789: INFO: Pod "pod-projected-secrets-49ccbfbb-a978-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026281684s STEP: Saw pod success Jun 8 11:07:54.789: INFO: Pod "pod-projected-secrets-49ccbfbb-a978-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 11:07:54.792: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-49ccbfbb-a978-11ea-978f-0242ac110018 container projected-secret-volume-test: STEP: delete the pod Jun 8 11:07:54.852: INFO: Waiting for pod pod-projected-secrets-49ccbfbb-a978-11ea-978f-0242ac110018 to disappear Jun 8 11:07:54.867: INFO: Pod pod-projected-secrets-49ccbfbb-a978-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:07:54.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-ttdpw" for this suite. 
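The Projected secret test above sets `defaultMode` on the volume. One detail worth noting when reading pod manifests in this log: Kubernetes serializes file modes as plain decimal integers in JSON, so the familiar octal permission `0644` appears as `420` (as in the `"defaultMode": 420` on the service-account token volume elsewhere in this run). A one-line check:

```python
# Octal 0644 (rw-r--r--) and decimal 420 are the same number; manifests
# written in JSON show the decimal form because JSON has no octal literals.
assert 0o644 == 420
print(oct(420))  # 0o644
```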
Jun 8 11:08:00.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:08:00.939: INFO: namespace: e2e-tests-projected-ttdpw, resource: bindings, ignored listing per whitelist Jun 8 11:08:00.964: INFO: namespace e2e-tests-projected-ttdpw deletion completed in 6.094104493s • [SLOW TEST:12.336 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:08:00.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 8 11:08:01.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine 
--labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-28wkf' Jun 8 11:08:01.237: INFO: stderr: "" Jun 8 11:08:01.237: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Jun 8 11:08:06.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-28wkf -o json' Jun 8 11:08:06.397: INFO: stderr: "" Jun 8 11:08:06.397: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-06-08T11:08:01Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-28wkf\",\n \"resourceVersion\": \"14859012\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-28wkf/pods/e2e-test-nginx-pod\",\n \"uid\": \"513b4a53-a978-11ea-99e8-0242ac110002\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-9blw6\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n 
\"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-9blw6\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-9blw6\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-08T11:08:01Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-08T11:08:05Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-08T11:08:05Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-08T11:08:01Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://d27a2b0c13428c7563ee0cc6ff4d81b93a6e4f7d128c2f1929b255f81c6a9e84\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-06-08T11:08:04Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.3\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.171\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-06-08T11:08:01Z\"\n }\n}\n" STEP: replace the image in the pod Jun 8 11:08:06.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-28wkf' Jun 8 11:08:06.677: INFO: stderr: "" Jun 8 11:08:06.677: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Jun 8 
11:08:06.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-28wkf' Jun 8 11:08:21.262: INFO: stderr: "" Jun 8 11:08:21.262: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:08:21.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-28wkf" for this suite. Jun 8 11:08:27.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:08:27.300: INFO: namespace: e2e-tests-kubectl-28wkf, resource: bindings, ignored listing per whitelist Jun 8 11:08:27.339: INFO: namespace e2e-tests-kubectl-28wkf deletion completed in 6.074007824s • [SLOW TEST:26.374 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:08:27.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl 
client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 8 11:08:27.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Jun 8 11:08:27.483: INFO: stderr: "" Jun 8 11:08:27.483: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Jun 8 11:08:27.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4t9cq' Jun 8 11:08:27.719: INFO: stderr: "" Jun 8 11:08:27.719: INFO: stdout: "replicationcontroller/redis-master created\n" Jun 8 11:08:27.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4t9cq' Jun 8 11:08:28.017: INFO: stderr: "" Jun 8 11:08:28.017: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Jun 8 11:08:29.022: INFO: Selector matched 1 pods for map[app:redis] Jun 8 11:08:29.022: INFO: Found 0 / 1 Jun 8 11:08:30.022: INFO: Selector matched 1 pods for map[app:redis] Jun 8 11:08:30.022: INFO: Found 0 / 1 Jun 8 11:08:31.022: INFO: Selector matched 1 pods for map[app:redis] Jun 8 11:08:31.022: INFO: Found 0 / 1 Jun 8 11:08:32.021: INFO: Selector matched 1 pods for map[app:redis] Jun 8 11:08:32.021: INFO: Found 1 / 1 Jun 8 11:08:32.021: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 8 11:08:32.023: INFO: Selector matched 1 pods for map[app:redis] Jun 8 11:08:32.023: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jun 8 11:08:32.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-dnxhm --namespace=e2e-tests-kubectl-4t9cq' Jun 8 11:08:32.142: INFO: stderr: "" Jun 8 11:08:32.142: INFO: stdout: "Name: redis-master-dnxhm\nNamespace: e2e-tests-kubectl-4t9cq\nPriority: 0\nPriorityClassName: \nNode: hunter-worker2/172.17.0.4\nStart Time: Mon, 08 Jun 2020 11:08:27 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.89\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://69f9f3767dcee2a44fad3974f04aa15e3bc655057a72f7d10a13d34d48f1c42e\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 08 Jun 2020 11:08:30 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-d7m5d (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-d7m5d:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-d7m5d\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5s default-scheduler Successfully assigned e2e-tests-kubectl-4t9cq/redis-master-dnxhm to hunter-worker2\n Normal Pulled 4s kubelet, hunter-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, hunter-worker2 Created container\n Normal Started 2s kubelet, hunter-worker2 Started container\n" Jun 8 11:08:32.142: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-4t9cq' Jun 8 11:08:32.267: INFO: stderr: "" Jun 8 11:08:32.267: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-4t9cq\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: redis-master-dnxhm\n" Jun 8 11:08:32.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-4t9cq' Jun 8 11:08:32.369: INFO: stderr: "" Jun 8 11:08:32.369: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-4t9cq\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.104.111.0\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.89:6379\nSession Affinity: None\nEvents: \n" Jun 8 11:08:32.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' Jun 8 11:08:32.493: INFO: stderr: "" Jun 8 11:08:32.493: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:22:50 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status 
LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 08 Jun 2020 11:08:29 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 08 Jun 2020 11:08:29 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 08 Jun 2020 11:08:29 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 08 Jun 2020 11:08:29 +0000 Sun, 15 Mar 2020 18:23:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3c4716968dac483293a23c2100ad64a5\n System UUID: 683417f7-64ca-431d-b8ac-22e73b26255e\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 84d\n kube-system kindnet-l2xm6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 84d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 84d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 84d\n kube-system kube-proxy-mmppc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 84d\n kube-system 
kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 84d\n local-path-storage local-path-provisioner-77cfdd744c-q47vg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 84d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jun 8 11:08:32.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-4t9cq' Jun 8 11:08:32.611: INFO: stderr: "" Jun 8 11:08:32.611: INFO: stdout: "Name: e2e-tests-kubectl-4t9cq\nLabels: e2e-framework=kubectl\n e2e-run=7261e210-a976-11ea-978f-0242ac110018\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:08:32.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-4t9cq" for this suite. 
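The describe checks above follow a fixed sequence: pod, rc, service, node, then namespace. A dry sketch that only prints the commands (object and namespace names taken from the log; `--kubeconfig` omitted), so the sequence can be replayed against a live cluster:

```shell
# Print the ordered `kubectl describe` targets the test exercises.
# Names come from the log above; adapt them for your own cluster.
ns=e2e-tests-kubectl-4t9cq
{
  echo "kubectl describe pod redis-master-dnxhm --namespace=$ns"
  echo "kubectl describe rc redis-master --namespace=$ns"
  echo "kubectl describe service redis-master --namespace=$ns"
  echo "kubectl describe node hunter-control-plane"
  echo "kubectl describe namespace $ns"
} | tee describe-sequence.txt
```

The test then greps each command's stdout for the "relevant information" (container image, selector, endpoints, node conditions, and so on) visible in the dumps above.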
Jun 8 11:08:56.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:08:56.743: INFO: namespace: e2e-tests-kubectl-4t9cq, resource: bindings, ignored listing per whitelist Jun 8 11:08:56.755: INFO: namespace e2e-tests-kubectl-4t9cq deletion completed in 24.141178472s • [SLOW TEST:29.416 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:08:56.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-726477ca-a978-11ea-978f-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 8 11:08:56.933: INFO: Waiting up to 5m0s for pod "pod-configmaps-72661103-a978-11ea-978f-0242ac110018" in namespace "e2e-tests-configmap-q8bwx" to be "success or failure" Jun 8 11:08:56.974: INFO: Pod "pod-configmaps-72661103-a978-11ea-978f-0242ac110018": 
Phase="Pending", Reason="", readiness=false. Elapsed: 41.37061ms Jun 8 11:08:58.979: INFO: Pod "pod-configmaps-72661103-a978-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045631194s Jun 8 11:09:00.983: INFO: Pod "pod-configmaps-72661103-a978-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050091862s STEP: Saw pod success Jun 8 11:09:00.983: INFO: Pod "pod-configmaps-72661103-a978-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 11:09:00.986: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-72661103-a978-11ea-978f-0242ac110018 container configmap-volume-test: STEP: delete the pod Jun 8 11:09:01.037: INFO: Waiting for pod pod-configmaps-72661103-a978-11ea-978f-0242ac110018 to disappear Jun 8 11:09:01.042: INFO: Pod pod-configmaps-72661103-a978-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:09:01.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-q8bwx" for this suite. 
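The ConfigMap-volume pod above follows a standard shape: a ConfigMap, a volume referencing it, and a container (named `configmap-volume-test` in the log) that reads the projected file and exits. A sketch under assumptions (the data key, mount path, and mounttest arguments are illustrative, not taken from the log):

```shell
# Sketch of a ConfigMap consumed as a volume; the container prints the
# projected file's content and terminates, which is why the pod Succeeds.
cat > configmap-volume-pod.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-test
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0
    args: ["--file_content=/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
EOF
grep -q 'mountPath: /etc/configmap-volume' configmap-volume-pod.yaml && echo manifest ok
```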
Jun 8 11:09:07.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:09:07.166: INFO: namespace: e2e-tests-configmap-q8bwx, resource: bindings, ignored listing per whitelist Jun 8 11:09:07.173: INFO: namespace e2e-tests-configmap-q8bwx deletion completed in 6.12817186s • [SLOW TEST:10.418 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:09:07.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jun 8 11:09:07.494: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-k6vxn,SelfLink:/api/v1/namespaces/e2e-tests-watch-k6vxn/configmaps/e2e-watch-test-resource-version,UID:78b4a6d9-a978-11ea-99e8-0242ac110002,ResourceVersion:14859219,Generation:0,CreationTimestamp:2020-06-08 11:09:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 8 11:09:07.494: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-k6vxn,SelfLink:/api/v1/namespaces/e2e-tests-watch-k6vxn/configmaps/e2e-watch-test-resource-version,UID:78b4a6d9-a978-11ea-99e8-0242ac110002,ResourceVersion:14859220,Generation:0,CreationTimestamp:2020-06-08 11:09:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:09:07.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-k6vxn" for this suite. 
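The watch above starts from the resourceVersion returned by the *first* update, which is why only the second MODIFIED (RV 14859219) and the DELETED (RV 14859220) events are delivered. The same resumed watch can be issued against the raw API; a dry sketch that only prints the request (requires `kubectl proxy` on 127.0.0.1:8001; the starting RV is inferred from the log and will differ on replay):

```shell
# Build the watch request that resumes from a specific resourceVersion.
# rv is assumed: the log shows events at RV 14859219/14859220, so the
# first update's RV is presumably the one just before.
ns=e2e-tests-watch-k6vxn
rv=14859218
url="http://127.0.0.1:8001/api/v1/namespaces/${ns}/configmaps?watch=true&resourceVersion=${rv}"
echo "curl -N '$url'" | tee watch-command.txt
```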
Jun 8 11:09:13.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:09:13.624: INFO: namespace: e2e-tests-watch-k6vxn, resource: bindings, ignored listing per whitelist Jun 8 11:09:13.656: INFO: namespace e2e-tests-watch-k6vxn deletion completed in 6.12875959s • [SLOW TEST:6.483 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:09:13.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 8 11:09:13.756: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jun 8 11:09:18.760: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 8 11:09:18.760: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jun 8 11:09:18.802: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-wmjtl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wmjtl/deployments/test-cleanup-deployment,UID:7f738008-a978-11ea-99e8-0242ac110002,ResourceVersion:14859266,Generation:1,CreationTimestamp:2020-06-08 11:09:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Jun 8 11:09:18.804: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:09:18.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-wmjtl" for this suite. 
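The object dump above shows `RevisionHistoryLimit:*0`, which is the field driving this test: with a limit of 0, superseded ReplicaSets are deleted instead of being retained for rollback. A sketch of the same Deployment in YAML (replicas, selector, image, and grace period all match the dump):

```shell
# Deployment equivalent of the dumped spec; revisionHistoryLimit: 0 makes
# the controller garbage-collect old ReplicaSets as soon as they are replaced.
cat > test-cleanup-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
  labels:
    name: cleanup-pod
spec:
  replicas: 1
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
grep -q 'revisionHistoryLimit: 0' test-cleanup-deployment.yaml && echo ok
```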
Jun 8 11:09:24.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:09:24.983: INFO: namespace: e2e-tests-deployment-wmjtl, resource: bindings, ignored listing per whitelist Jun 8 11:09:25.005: INFO: namespace e2e-tests-deployment-wmjtl deletion completed in 6.158586734s • [SLOW TEST:11.348 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:09:25.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jun 8 11:09:25.180: INFO: Pod name pod-release: Found 0 pods out of 1 Jun 8 11:09:30.211: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:09:32.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-5sznn" for this suite. 
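The "release" in the test above works by changing a pod's label so it no longer matches the ReplicationController's selector; the controller then orphans that pod and spawns a replacement to restore the replica count. A dry sketch that prints the relabel command (the pod-name suffix and new label value are placeholders, not from the log):

```shell
# Overwriting the selector label releases the pod from its RC.
ns=e2e-tests-replication-controller-5sznn
pod=pod-release-xxxxx   # hypothetical pod name; the real suffix is random
echo "kubectl label pod $pod name=pod-release-released --overwrite --namespace=$ns" \
  | tee release-command.txt
```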
Jun 8 11:09:38.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:09:38.403: INFO: namespace: e2e-tests-replication-controller-5sznn, resource: bindings, ignored listing per whitelist Jun 8 11:09:38.437: INFO: namespace e2e-tests-replication-controller-5sznn deletion completed in 6.139447899s • [SLOW TEST:13.431 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:09:38.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jun 8 11:09:38.751: INFO: Waiting up to 5m0s for pod "downward-api-8b5a78e7-a978-11ea-978f-0242ac110018" in namespace "e2e-tests-downward-api-pxpjb" to be "success or failure" Jun 8 11:09:38.808: INFO: Pod "downward-api-8b5a78e7-a978-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 56.84666ms Jun 8 11:09:40.890: INFO: Pod "downward-api-8b5a78e7-a978-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139363824s Jun 8 11:09:42.894: INFO: Pod "downward-api-8b5a78e7-a978-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.142936933s STEP: Saw pod success Jun 8 11:09:42.894: INFO: Pod "downward-api-8b5a78e7-a978-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 11:09:42.896: INFO: Trying to get logs from node hunter-worker pod downward-api-8b5a78e7-a978-11ea-978f-0242ac110018 container dapi-container: STEP: delete the pod Jun 8 11:09:43.053: INFO: Waiting for pod downward-api-8b5a78e7-a978-11ea-978f-0242ac110018 to disappear Jun 8 11:09:43.079: INFO: Pod downward-api-8b5a78e7-a978-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:09:43.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-pxpjb" for this suite. 
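The downward-API test above relies on a defaulting rule: when a container declares no resource limits, `resourceFieldRef` values for `limits.cpu` and `limits.memory` resolve to the node's allocatable capacity. A sketch of such a pod (container name `dapi-container` matches the log; image and env-var names are assumptions):

```shell
# Pod exposing its own (defaulted) limits as env vars via the downward API.
cat > downward-api-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-defaults
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    # no resources stanza: limits.* below fall back to node allocatable
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF
grep -c 'resourceFieldRef' downward-api-pod.yaml
```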
Jun 8 11:09:49.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:09:49.113: INFO: namespace: e2e-tests-downward-api-pxpjb, resource: bindings, ignored listing per whitelist Jun 8 11:09:49.173: INFO: namespace e2e-tests-downward-api-pxpjb deletion completed in 6.090108815s • [SLOW TEST:10.736 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:09:49.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 8 11:09:49.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine 
--generator=run/v1 --namespace=e2e-tests-kubectl-lj7tr' Jun 8 11:09:50.236: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 8 11:09:50.236: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Jun 8 11:09:50.538: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Jun 8 11:09:50.601: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Jun 8 11:09:50.876: INFO: scanned /root for discovery docs: Jun 8 11:09:50.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-lj7tr' Jun 8 11:10:06.881: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jun 8 11:10:06.881: INFO: stdout: "Created e2e-test-nginx-rc-70d56a7777f59aa144e17aa105f9da04\nScaling up e2e-test-nginx-rc-70d56a7777f59aa144e17aa105f9da04 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-70d56a7777f59aa144e17aa105f9da04 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-70d56a7777f59aa144e17aa105f9da04 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" Jun 8 11:10:06.882: INFO: stdout: "Created e2e-test-nginx-rc-70d56a7777f59aa144e17aa105f9da04\nScaling up e2e-test-nginx-rc-70d56a7777f59aa144e17aa105f9da04 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-70d56a7777f59aa144e17aa105f9da04 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-70d56a7777f59aa144e17aa105f9da04 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Jun 8 11:10:06.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-lj7tr' Jun 8 11:10:06.987: INFO: stderr: "" Jun 8 11:10:06.987: INFO: stdout: "e2e-test-nginx-rc-70d56a7777f59aa144e17aa105f9da04-m5qjv " Jun 8 11:10:06.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-70d56a7777f59aa144e17aa105f9da04-m5qjv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lj7tr' Jun 8 11:10:07.083: INFO: stderr: "" Jun 8 11:10:07.084: INFO: stdout: "true" Jun 8 11:10:07.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-70d56a7777f59aa144e17aa105f9da04-m5qjv -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lj7tr' Jun 8 11:10:07.185: INFO: stderr: "" Jun 8 11:10:07.185: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Jun 8 11:10:07.185: INFO: e2e-test-nginx-rc-70d56a7777f59aa144e17aa105f9da04-m5qjv is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Jun 8 11:10:07.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-lj7tr' Jun 8 11:10:07.294: INFO: stderr: "" Jun 8 11:10:07.294: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:10:07.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-lj7tr" for this suite. 
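Both deprecation warnings in the log point the same way: `--generator=run/v1` and `rolling-update` are RC-era commands superseded by Deployments and `kubectl rollout`. A dry sketch contrasting the two flows (RC name and flags from the log; the Deployment name is a hypothetical stand-in):

```shell
# Deprecated RC flow from the log vs. the Deployment-based replacement.
{
  echo "# deprecated (RC-based):"
  echo "kubectl rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent"
  echo "# replacement (Deployment-based, name assumed):"
  echo "kubectl set image deployment/e2e-test-nginx nginx=docker.io/library/nginx:1.14-alpine"
  echo "kubectl rollout status deployment/e2e-test-nginx"
} | tee rolling-update-notes.txt
```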
Jun 8 11:10:29.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:10:29.337: INFO: namespace: e2e-tests-kubectl-lj7tr, resource: bindings, ignored listing per whitelist
Jun 8 11:10:29.387: INFO: namespace e2e-tests-kubectl-lj7tr deletion completed in 22.090168149s
• [SLOW TEST:40.214 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:10:29.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jun 8 11:10:30.612: INFO: Pod name wrapped-volume-race-aa374ced-a978-11ea-978f-0242ac110018: Found 0 pods out of 5
Jun 8 11:10:35.619: INFO: Pod name wrapped-volume-race-aa374ced-a978-11ea-978f-0242ac110018: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-aa374ced-a978-11ea-978f-0242ac110018 in namespace e2e-tests-emptydir-wrapper-rf5h8, will wait for the garbage collector to delete the pods
Jun 8 11:11:47.768: INFO: Deleting ReplicationController wrapped-volume-race-aa374ced-a978-11ea-978f-0242ac110018 took: 7.408045ms
Jun 8 11:11:47.969: INFO: Terminating ReplicationController wrapped-volume-race-aa374ced-a978-11ea-978f-0242ac110018 pods took: 200.266382ms
STEP: Creating RC which spawns configmap-volume pods
Jun 8 11:12:32.653: INFO: Pod name wrapped-volume-race-f2e83178-a978-11ea-978f-0242ac110018: Found 0 pods out of 5
Jun 8 11:12:37.660: INFO: Pod name wrapped-volume-race-f2e83178-a978-11ea-978f-0242ac110018: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-f2e83178-a978-11ea-978f-0242ac110018 in namespace e2e-tests-emptydir-wrapper-rf5h8, will wait for the garbage collector to delete the pods
Jun 8 11:14:31.746: INFO: Deleting ReplicationController wrapped-volume-race-f2e83178-a978-11ea-978f-0242ac110018 took: 8.864214ms
Jun 8 11:14:31.847: INFO: Terminating ReplicationController wrapped-volume-race-f2e83178-a978-11ea-978f-0242ac110018 pods took: 100.237545ms
STEP: Creating RC which spawns configmap-volume pods
Jun 8 11:15:11.376: INFO: Pod name wrapped-volume-race-519b0d7c-a979-11ea-978f-0242ac110018: Found 0 pods out of 5
Jun 8 11:15:16.386: INFO: Pod name wrapped-volume-race-519b0d7c-a979-11ea-978f-0242ac110018: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-519b0d7c-a979-11ea-978f-0242ac110018 in namespace e2e-tests-emptydir-wrapper-rf5h8, will wait for the garbage collector to delete the pods
Jun 8 11:17:30.472: INFO: Deleting ReplicationController wrapped-volume-race-519b0d7c-a979-11ea-978f-0242ac110018 took: 7.920583ms
Jun 8 11:17:30.572: INFO: Terminating ReplicationController wrapped-volume-race-519b0d7c-a979-11ea-978f-0242ac110018 pods took: 100.273911ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:18:13.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-rf5h8" for this suite.
Jun 8 11:18:21.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:18:21.351: INFO: namespace: e2e-tests-emptydir-wrapper-rf5h8, resource: bindings, ignored listing per whitelist
Jun 8 11:18:21.411: INFO: namespace e2e-tests-emptydir-wrapper-rf5h8 deletion completed in 8.106494351s
• [SLOW TEST:472.024 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:18:21.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0608 11:18:52.149859       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 8 11:18:52.149: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:18:52.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-ppnm5" for this suite.
Jun 8 11:18:58.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:18:58.216: INFO: namespace: e2e-tests-gc-ppnm5, resource: bindings, ignored listing per whitelist
Jun 8 11:18:58.241: INFO: namespace e2e-tests-gc-ppnm5 deletion completed in 6.088576853s
• [SLOW TEST:36.829 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job
  should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:18:58.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jun 8 11:18:58.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-zjswd'
Jun 8 11:19:01.107: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jun 8 11:19:01.107: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Jun 8 11:19:01.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-zjswd'
Jun 8 11:19:01.242: INFO: stderr: ""
Jun 8 11:19:01.242: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:19:01.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zjswd" for this suite.
Jun 8 11:19:23.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:19:23.273: INFO: namespace: e2e-tests-kubectl-zjswd, resource: bindings, ignored listing per whitelist
Jun 8 11:19:23.316: INFO: namespace e2e-tests-kubectl-zjswd deletion completed in 22.07123796s
• [SLOW TEST:25.074 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:19:23.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jun 8 11:19:23.460: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jun 8 11:19:23.471: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:23.474: INFO: Number of nodes with available pods: 0
Jun 8 11:19:23.474: INFO: Node hunter-worker is running more than one daemon pod
Jun 8 11:19:24.839: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:24.869: INFO: Number of nodes with available pods: 0
Jun 8 11:19:24.869: INFO: Node hunter-worker is running more than one daemon pod
Jun 8 11:19:25.478: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:25.481: INFO: Number of nodes with available pods: 0
Jun 8 11:19:25.481: INFO: Node hunter-worker is running more than one daemon pod
Jun 8 11:19:26.834: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:26.838: INFO: Number of nodes with available pods: 0
Jun 8 11:19:26.838: INFO: Node hunter-worker is running more than one daemon pod
Jun 8 11:19:27.478: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:27.481: INFO: Number of nodes with available pods: 0
Jun 8 11:19:27.481: INFO: Node hunter-worker is running more than one daemon pod
Jun 8 11:19:28.509: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:28.550: INFO: Number of nodes with available pods: 0
Jun 8 11:19:28.550: INFO: Node hunter-worker is running more than one daemon pod
Jun 8 11:19:29.478: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:29.480: INFO: Number of nodes with available pods: 2
Jun 8 11:19:29.480: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jun 8 11:19:29.567: INFO: Wrong image for pod: daemon-set-75z5j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 8 11:19:29.567: INFO: Wrong image for pod: daemon-set-pcw4w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 8 11:19:29.612: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:30.618: INFO: Wrong image for pod: daemon-set-75z5j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 8 11:19:30.618: INFO: Wrong image for pod: daemon-set-pcw4w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 8 11:19:30.621: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:31.617: INFO: Wrong image for pod: daemon-set-75z5j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 8 11:19:31.617: INFO: Wrong image for pod: daemon-set-pcw4w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 8 11:19:31.621: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:32.616: INFO: Wrong image for pod: daemon-set-75z5j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 8 11:19:32.616: INFO: Wrong image for pod: daemon-set-pcw4w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 8 11:19:32.620: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:33.616: INFO: Wrong image for pod: daemon-set-75z5j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 8 11:19:33.616: INFO: Pod daemon-set-75z5j is not available
Jun 8 11:19:33.616: INFO: Wrong image for pod: daemon-set-pcw4w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 8 11:19:33.619: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:34.635: INFO: Pod daemon-set-mtztg is not available
Jun 8 11:19:34.635: INFO: Wrong image for pod: daemon-set-pcw4w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 8 11:19:34.639: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:35.617: INFO: Pod daemon-set-mtztg is not available
Jun 8 11:19:35.617: INFO: Wrong image for pod: daemon-set-pcw4w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 8 11:19:35.622: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:36.647: INFO: Pod daemon-set-mtztg is not available
Jun 8 11:19:36.647: INFO: Wrong image for pod: daemon-set-pcw4w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 8 11:19:36.651: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:37.616: INFO: Pod daemon-set-mtztg is not available
Jun 8 11:19:37.616: INFO: Wrong image for pod: daemon-set-pcw4w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 8 11:19:37.620: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:38.688: INFO: Wrong image for pod: daemon-set-pcw4w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 8 11:19:38.702: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:39.617: INFO: Wrong image for pod: daemon-set-pcw4w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 8 11:19:39.617: INFO: Pod daemon-set-pcw4w is not available
Jun 8 11:19:39.620: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:40.616: INFO: Wrong image for pod: daemon-set-pcw4w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 8 11:19:40.616: INFO: Pod daemon-set-pcw4w is not available
Jun 8 11:19:40.626: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:41.617: INFO: Wrong image for pod: daemon-set-pcw4w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 8 11:19:41.617: INFO: Pod daemon-set-pcw4w is not available
Jun 8 11:19:41.621: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:42.923: INFO: Wrong image for pod: daemon-set-pcw4w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 8 11:19:42.923: INFO: Pod daemon-set-pcw4w is not available
Jun 8 11:19:42.927: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:43.617: INFO: Wrong image for pod: daemon-set-pcw4w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 8 11:19:43.617: INFO: Pod daemon-set-pcw4w is not available
Jun 8 11:19:43.622: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:44.617: INFO: Wrong image for pod: daemon-set-pcw4w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 8 11:19:44.617: INFO: Pod daemon-set-pcw4w is not available
Jun 8 11:19:44.622: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:45.616: INFO: Wrong image for pod: daemon-set-pcw4w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 8 11:19:45.616: INFO: Pod daemon-set-pcw4w is not available
Jun 8 11:19:45.619: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:46.833: INFO: Wrong image for pod: daemon-set-pcw4w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 8 11:19:46.833: INFO: Pod daemon-set-pcw4w is not available
Jun 8 11:19:46.836: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:47.617: INFO: Wrong image for pod: daemon-set-pcw4w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 8 11:19:47.617: INFO: Pod daemon-set-pcw4w is not available
Jun 8 11:19:47.620: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:48.616: INFO: Wrong image for pod: daemon-set-pcw4w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 8 11:19:48.616: INFO: Pod daemon-set-pcw4w is not available
Jun 8 11:19:48.619: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:49.616: INFO: Wrong image for pod: daemon-set-pcw4w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 8 11:19:49.616: INFO: Pod daemon-set-pcw4w is not available
Jun 8 11:19:49.619: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:50.617: INFO: Wrong image for pod: daemon-set-pcw4w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 8 11:19:50.617: INFO: Pod daemon-set-pcw4w is not available
Jun 8 11:19:50.621: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:51.616: INFO: Pod daemon-set-z9gn8 is not available
Jun 8 11:19:51.620: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Jun 8 11:19:51.624: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:51.627: INFO: Number of nodes with available pods: 1
Jun 8 11:19:51.627: INFO: Node hunter-worker is running more than one daemon pod
Jun 8 11:19:52.632: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:52.636: INFO: Number of nodes with available pods: 1
Jun 8 11:19:52.636: INFO: Node hunter-worker is running more than one daemon pod
Jun 8 11:19:53.990: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:53.993: INFO: Number of nodes with available pods: 1
Jun 8 11:19:53.993: INFO: Node hunter-worker is running more than one daemon pod
Jun 8 11:19:54.632: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:54.635: INFO: Number of nodes with available pods: 1
Jun 8 11:19:54.635: INFO: Node hunter-worker is running more than one daemon pod
Jun 8 11:19:55.632: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 8 11:19:55.636: INFO: Number of nodes with available pods: 2
Jun 8 11:19:55.636: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-nwvh9, will wait for the garbage collector to delete the pods
Jun 8 11:19:55.711: INFO: Deleting DaemonSet.extensions daemon-set took: 6.517377ms
Jun 8 11:19:55.811: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.210203ms
Jun 8 11:20:11.315: INFO: Number of nodes with available pods: 0
Jun 8 11:20:11.316: INFO: Number of running nodes: 0, number of available pods: 0
Jun 8 11:20:11.318: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-nwvh9/daemonsets","resourceVersion":"14861296"},"items":null}
Jun 8 11:20:11.321: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-nwvh9/pods","resourceVersion":"14861296"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:20:11.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-nwvh9" for this suite.
Jun 8 11:20:17.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:20:17.358: INFO: namespace: e2e-tests-daemonsets-nwvh9, resource: bindings, ignored listing per whitelist
Jun 8 11:20:17.411: INFO: namespace e2e-tests-daemonsets-nwvh9 deletion completed in 6.077769501s
• [SLOW TEST:54.095 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:20:17.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-081c8b37-a97a-11ea-978f-0242ac110018
STEP: Creating a pod to test consume secrets
Jun 8 11:20:17.554: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-081d6543-a97a-11ea-978f-0242ac110018" in namespace "e2e-tests-projected-9glms" to be "success or failure"
Jun 8 11:20:17.576: INFO: Pod "pod-projected-secrets-081d6543-a97a-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 22.254206ms
Jun 8 11:20:19.580: INFO: Pod "pod-projected-secrets-081d6543-a97a-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026110103s
Jun 8 11:20:21.618: INFO: Pod "pod-projected-secrets-081d6543-a97a-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063959743s
STEP: Saw pod success
Jun 8 11:20:21.618: INFO: Pod "pod-projected-secrets-081d6543-a97a-11ea-978f-0242ac110018" satisfied condition "success or failure"
Jun 8 11:20:21.628: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-081d6543-a97a-11ea-978f-0242ac110018 container projected-secret-volume-test:
STEP: delete the pod
Jun 8 11:20:21.658: INFO: Waiting for pod pod-projected-secrets-081d6543-a97a-11ea-978f-0242ac110018 to disappear
Jun 8 11:20:21.670: INFO: Pod pod-projected-secrets-081d6543-a97a-11ea-978f-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:20:21.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9glms" for this suite.
Jun 8 11:20:27.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:20:27.729: INFO: namespace: e2e-tests-projected-9glms, resource: bindings, ignored listing per whitelist
Jun 8 11:20:27.764: INFO: namespace e2e-tests-projected-9glms deletion completed in 6.092210637s
• [SLOW TEST:10.353 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application
  should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:20:27.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Jun 8 11:20:27.871: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jun 8 11:20:27.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-s2klq'
Jun 8 11:20:28.162: INFO: stderr: ""
Jun 8 11:20:28.162: INFO: stdout: "service/redis-slave created\n"
Jun 8 11:20:28.162: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jun 8 11:20:28.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-s2klq'
Jun 8 11:20:28.537: INFO: stderr: ""
Jun 8 11:20:28.537: INFO: stdout: "service/redis-master created\n"
Jun 8 11:20:28.537: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jun 8 11:20:28.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-s2klq'
Jun 8 11:20:28.848: INFO: stderr: ""
Jun 8 11:20:28.848: INFO: stdout: "service/frontend created\n"
Jun 8 11:20:28.848: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jun 8 11:20:28.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-s2klq'
Jun 8 11:20:29.118: INFO: stderr: ""
Jun 8 11:20:29.118: INFO: stdout: "deployment.extensions/frontend created\n"
Jun 8 11:20:29.118: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jun 8 11:20:29.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-s2klq'
Jun 8 11:20:29.521: INFO: stderr: ""
Jun 8 11:20:29.521: INFO: stdout: "deployment.extensions/redis-master created\n"
Jun 8 11:20:29.522: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jun 8 11:20:29.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-s2klq'
Jun 8 11:20:29.898: INFO: stderr: ""
Jun 8 11:20:29.898: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jun 8 11:20:29.898: INFO: Waiting for all frontend pods to be Running.
Jun 8 11:20:44.949: INFO: Waiting for frontend to serve content.
Jun 8 11:20:44.964: INFO: Trying to add a new entry to the guestbook.
Jun 8 11:20:44.974: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources Jun 8 11:20:44.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-s2klq' Jun 8 11:20:45.890: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 8 11:20:45.890: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Jun 8 11:20:45.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-s2klq' Jun 8 11:20:47.514: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 8 11:20:47.515: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jun 8 11:20:47.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-s2klq' Jun 8 11:20:49.591: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 8 11:20:49.592: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 8 11:20:49.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-s2klq' Jun 8 11:20:49.910: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 8 11:20:49.910: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 8 11:20:49.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-s2klq' Jun 8 11:20:50.343: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 8 11:20:50.343: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jun 8 11:20:50.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-s2klq' Jun 8 11:20:50.961: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 8 11:20:50.961: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:20:50.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-s2klq" for this suite. 
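The manifests echoed earlier in this test are printed with their line breaks collapsed, which makes them hard to read. Restored to readable form, the frontend Service that the test pipes into `kubectl create -f -` is (indentation reconstructed from the flattened log text above; content unchanged):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
```

The redis-master and redis-slave Services and the three Deployments in the log follow the same shape, differing only in labels, ports, and images.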
Jun 8 11:21:33.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:21:33.733: INFO: namespace: e2e-tests-kubectl-s2klq, resource: bindings, ignored listing per whitelist Jun 8 11:21:33.773: INFO: namespace e2e-tests-kubectl-s2klq deletion completed in 42.677832933s • [SLOW TEST:66.008 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:21:33.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jun 8 11:21:42.020: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 8 11:21:42.068: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 8 11:21:44.068: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 8 11:21:44.073: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 8 11:21:46.068: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 8 11:21:46.072: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 8 11:21:48.068: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 8 11:21:48.073: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 8 11:21:50.068: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 8 11:21:50.072: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 8 11:21:52.068: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 8 11:21:52.072: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 8 11:21:54.068: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 8 11:21:54.073: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 8 11:21:56.068: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 8 11:21:56.073: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 8 11:21:58.068: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 8 11:21:58.071: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 8 11:22:00.068: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 8 11:22:00.073: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 8 11:22:02.068: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 8 11:22:02.071: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 8 11:22:04.068: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 8 11:22:04.073: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 8 11:22:06.068: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 8 11:22:06.073: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 8 11:22:08.068: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 8 11:22:08.072: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 8 11:22:10.068: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 8 11:22:10.074: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 8 11:22:12.068: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 8 11:22:12.072: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:22:12.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-9s8nr" for this suite. 
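The log never echoes the spec of `pod-with-poststart-exec-hook`. As a hedged sketch only (the image, command, and hook payload below are assumptions, not taken from this log), a pod exercising a postStart exec hook looks like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: busybox                        # assumed image, not shown in the log
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          # runs inside the container right after it starts; kubelet does not
          # mark the container Running until this command returns
          command: ["sh", "-c", "echo poststart"]
```

The "check poststart hook" step then only has to verify that the hook's side effect happened before the container was reported Running.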
Jun 8 11:22:34.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:22:34.188: INFO: namespace: e2e-tests-container-lifecycle-hook-9s8nr, resource: bindings, ignored listing per whitelist Jun 8 11:22:34.203: INFO: namespace e2e-tests-container-lifecycle-hook-9s8nr deletion completed in 22.12546026s • [SLOW TEST:60.430 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:22:34.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode Jun 8 11:22:34.347: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-p7fg2" to be "success or failure" Jun 8 11:22:34.356: INFO: Pod "pod-host-path-test": 
Phase="Pending", Reason="", readiness=false. Elapsed: 9.455663ms Jun 8 11:22:36.360: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013365965s Jun 8 11:22:38.365: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017992217s Jun 8 11:22:40.368: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021406854s Jun 8 11:22:42.372: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 8.025506474s Jun 8 11:22:44.377: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.029938221s STEP: Saw pod success Jun 8 11:22:44.377: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jun 8 11:22:44.379: INFO: Trying to get logs from node hunter-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Jun 8 11:22:44.399: INFO: Waiting for pod pod-host-path-test to disappear Jun 8 11:22:44.402: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:22:44.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-p7fg2" for this suite. 
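The `pod-host-path-test` spec is also not echoed in the log. A minimal sketch of a hostPath pod of this shape (image, command, and host path are assumptions; only the pod and container names appear in the log) would be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-test
spec:
  containers:
  - name: test-container-1
    image: busybox                    # assumed; the real test image is not shown here
    # a typical mode check: stat the mount point and print its permission bits
    command: ["sh", "-c", "stat -c %a /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp                      # assumed host directory
      type: DirectoryOrCreate
```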
Jun 8 11:22:50.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:22:50.438: INFO: namespace: e2e-tests-hostpath-p7fg2, resource: bindings, ignored listing per whitelist Jun 8 11:22:50.487: INFO: namespace e2e-tests-hostpath-p7fg2 deletion completed in 6.081559091s • [SLOW TEST:16.284 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:22:50.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 8 11:22:50.622: INFO: Waiting up to 5m0s for pod "pod-6354fde1-a97a-11ea-978f-0242ac110018" in namespace "e2e-tests-emptydir-jk8x6" to be "success or failure" Jun 8 11:22:50.631: INFO: Pod "pod-6354fde1-a97a-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.607056ms Jun 8 11:22:52.635: INFO: Pod "pod-6354fde1-a97a-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.012781827s Jun 8 11:22:54.639: INFO: Pod "pod-6354fde1-a97a-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01684795s STEP: Saw pod success Jun 8 11:22:54.639: INFO: Pod "pod-6354fde1-a97a-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 11:22:54.642: INFO: Trying to get logs from node hunter-worker pod pod-6354fde1-a97a-11ea-978f-0242ac110018 container test-container: STEP: delete the pod Jun 8 11:22:54.887: INFO: Waiting for pod pod-6354fde1-a97a-11ea-978f-0242ac110018 to disappear Jun 8 11:22:55.144: INFO: Pod pod-6354fde1-a97a-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:22:55.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-jk8x6" for this suite. Jun 8 11:23:01.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:23:01.267: INFO: namespace: e2e-tests-emptydir-jk8x6, resource: bindings, ignored listing per whitelist Jun 8 11:23:01.271: INFO: namespace e2e-tests-emptydir-jk8x6 deletion completed in 6.122405006s • [SLOW TEST:10.783 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:23:01.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jun 8 11:23:02.016: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-fnzrb,SelfLink:/api/v1/namespaces/e2e-tests-watch-fnzrb/configmaps/e2e-watch-test-label-changed,UID:69e245db-a97a-11ea-99e8-0242ac110002,ResourceVersion:14861961,Generation:0,CreationTimestamp:2020-06-08 11:23:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 8 11:23:02.016: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-fnzrb,SelfLink:/api/v1/namespaces/e2e-tests-watch-fnzrb/configmaps/e2e-watch-test-label-changed,UID:69e245db-a97a-11ea-99e8-0242ac110002,ResourceVersion:14861963,Generation:0,CreationTimestamp:2020-06-08 11:23:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jun 8 11:23:02.016: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-fnzrb,SelfLink:/api/v1/namespaces/e2e-tests-watch-fnzrb/configmaps/e2e-watch-test-label-changed,UID:69e245db-a97a-11ea-99e8-0242ac110002,ResourceVersion:14861965,Generation:0,CreationTimestamp:2020-06-08 11:23:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jun 8 11:23:12.049: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-fnzrb,SelfLink:/api/v1/namespaces/e2e-tests-watch-fnzrb/configmaps/e2e-watch-test-label-changed,UID:69e245db-a97a-11ea-99e8-0242ac110002,ResourceVersion:14861985,Generation:0,CreationTimestamp:2020-06-08 11:23:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 8 11:23:12.049: 
INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-fnzrb,SelfLink:/api/v1/namespaces/e2e-tests-watch-fnzrb/configmaps/e2e-watch-test-label-changed,UID:69e245db-a97a-11ea-99e8-0242ac110002,ResourceVersion:14861986,Generation:0,CreationTimestamp:2020-06-08 11:23:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jun 8 11:23:12.049: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-fnzrb,SelfLink:/api/v1/namespaces/e2e-tests-watch-fnzrb/configmaps/e2e-watch-test-label-changed,UID:69e245db-a97a-11ea-99e8-0242ac110002,ResourceVersion:14861987,Generation:0,CreationTimestamp:2020-06-08 11:23:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:23:12.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-fnzrb" for this suite. 
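The `&ConfigMap{...}` struct dumps above are hard to scan. Rendered as YAML, the object whose ADDED/MODIFIED/DELETED events the watch reports is (fields copied from the first event above):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  namespace: e2e-tests-watch-fnzrb
  uid: 69e245db-a97a-11ea-99e8-0242ac110002
  labels:
    # the watch selects on this label; changing its value is what triggers
    # the DELETED notification mid-test, and restoring it triggers ADDED
    watch-this-configmap: label-changed-and-restored
data: {}          # later events show a mutation counter, e.g. mutation: "1"
```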
Jun 8 11:23:18.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:23:18.230: INFO: namespace: e2e-tests-watch-fnzrb, resource: bindings, ignored listing per whitelist Jun 8 11:23:18.257: INFO: namespace e2e-tests-watch-fnzrb deletion completed in 6.198187359s • [SLOW TEST:16.986 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:23:18.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-5grc STEP: Creating a pod to test atomic-volume-subpath Jun 8 11:23:18.665: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-5grc" in namespace "e2e-tests-subpath-lr4rq" to be "success or failure" Jun 8 11:23:18.721: INFO: Pod "pod-subpath-test-projected-5grc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 56.205208ms Jun 8 11:23:20.788: INFO: Pod "pod-subpath-test-projected-5grc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123268333s Jun 8 11:23:22.819: INFO: Pod "pod-subpath-test-projected-5grc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154095895s Jun 8 11:23:24.823: INFO: Pod "pod-subpath-test-projected-5grc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.157772816s Jun 8 11:23:26.826: INFO: Pod "pod-subpath-test-projected-5grc": Phase="Running", Reason="", readiness=false. Elapsed: 8.161590282s Jun 8 11:23:28.831: INFO: Pod "pod-subpath-test-projected-5grc": Phase="Running", Reason="", readiness=false. Elapsed: 10.166181283s Jun 8 11:23:30.836: INFO: Pod "pod-subpath-test-projected-5grc": Phase="Running", Reason="", readiness=false. Elapsed: 12.170722035s Jun 8 11:23:32.840: INFO: Pod "pod-subpath-test-projected-5grc": Phase="Running", Reason="", readiness=false. Elapsed: 14.174830809s Jun 8 11:23:34.843: INFO: Pod "pod-subpath-test-projected-5grc": Phase="Running", Reason="", readiness=false. Elapsed: 16.178196537s Jun 8 11:23:36.847: INFO: Pod "pod-subpath-test-projected-5grc": Phase="Running", Reason="", readiness=false. Elapsed: 18.182443112s Jun 8 11:23:38.852: INFO: Pod "pod-subpath-test-projected-5grc": Phase="Running", Reason="", readiness=false. Elapsed: 20.186725903s Jun 8 11:23:40.856: INFO: Pod "pod-subpath-test-projected-5grc": Phase="Running", Reason="", readiness=false. Elapsed: 22.191559054s Jun 8 11:23:42.861: INFO: Pod "pod-subpath-test-projected-5grc": Phase="Running", Reason="", readiness=false. Elapsed: 24.196082174s Jun 8 11:23:44.865: INFO: Pod "pod-subpath-test-projected-5grc": Phase="Running", Reason="", readiness=false. Elapsed: 26.200536051s Jun 8 11:23:46.870: INFO: Pod "pod-subpath-test-projected-5grc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 28.204962724s STEP: Saw pod success Jun 8 11:23:46.870: INFO: Pod "pod-subpath-test-projected-5grc" satisfied condition "success or failure" Jun 8 11:23:46.873: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-projected-5grc container test-container-subpath-projected-5grc: STEP: delete the pod Jun 8 11:23:46.925: INFO: Waiting for pod pod-subpath-test-projected-5grc to disappear Jun 8 11:23:46.939: INFO: Pod pod-subpath-test-projected-5grc no longer exists STEP: Deleting pod pod-subpath-test-projected-5grc Jun 8 11:23:46.939: INFO: Deleting pod "pod-subpath-test-projected-5grc" in namespace "e2e-tests-subpath-lr4rq" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:23:46.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-lr4rq" for this suite. Jun 8 11:23:52.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:23:53.008: INFO: namespace: e2e-tests-subpath-lr4rq, resource: bindings, ignored listing per whitelist Jun 8 11:23:53.074: INFO: namespace e2e-tests-subpath-lr4rq deletion completed in 6.128864781s • [SLOW TEST:34.816 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:23:53.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 8 11:23:53.172: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:23:57.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-7744x" for this suite. 
Jun 8 11:24:37.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:24:37.261: INFO: namespace: e2e-tests-pods-7744x, resource: bindings, ignored listing per whitelist Jun 8 11:24:37.316: INFO: namespace e2e-tests-pods-7744x deletion completed in 40.080768073s • [SLOW TEST:44.242 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:24:37.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 8 11:24:37.461: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a30850aa-a97a-11ea-978f-0242ac110018" in namespace "e2e-tests-downward-api-qshdq" to be "success or failure" Jun 8 11:24:37.466: INFO: Pod 
"downwardapi-volume-a30850aa-a97a-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120035ms Jun 8 11:24:39.469: INFO: Pod "downwardapi-volume-a30850aa-a97a-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007835402s Jun 8 11:24:41.472: INFO: Pod "downwardapi-volume-a30850aa-a97a-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010871557s STEP: Saw pod success Jun 8 11:24:41.472: INFO: Pod "downwardapi-volume-a30850aa-a97a-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 11:24:41.474: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-a30850aa-a97a-11ea-978f-0242ac110018 container client-container: STEP: delete the pod Jun 8 11:24:41.502: INFO: Waiting for pod downwardapi-volume-a30850aa-a97a-11ea-978f-0242ac110018 to disappear Jun 8 11:24:41.507: INFO: Pod downwardapi-volume-a30850aa-a97a-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:24:41.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-qshdq" for this suite. 
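The downward-API pod spec is not printed either. As a sketch under stated assumptions (image, paths, and the fixed name below are hypothetical; the real pod name carries a UID suffix), a pod projecting the CPU limit through a downwardAPI volume looks like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test       # hypothetical; the test generates the name
spec:
  containers:
  - name: client-container
    image: busybox                    # assumed; not shown in the log
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # note: no resources.limits.cpu is set, so the projected value falls
    # back to the node's allocatable CPU, which is what this test asserts
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
```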
Jun 8 11:24:47.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:24:47.544: INFO: namespace: e2e-tests-downward-api-qshdq, resource: bindings, ignored listing per whitelist Jun 8 11:24:47.583: INFO: namespace e2e-tests-downward-api-qshdq deletion completed in 6.07251381s • [SLOW TEST:10.266 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:24:47.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-a91d7a6e-a97a-11ea-978f-0242ac110018 STEP: Creating a pod to test consume secrets Jun 8 11:24:47.764: INFO: Waiting up to 5m0s for pod "pod-secrets-a92c876f-a97a-11ea-978f-0242ac110018" in namespace "e2e-tests-secrets-bznhf" to be "success or 
failure" Jun 8 11:24:47.772: INFO: Pod "pod-secrets-a92c876f-a97a-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 7.848434ms Jun 8 11:24:49.879: INFO: Pod "pod-secrets-a92c876f-a97a-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11503172s Jun 8 11:24:51.884: INFO: Pod "pod-secrets-a92c876f-a97a-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.119340503s STEP: Saw pod success Jun 8 11:24:51.884: INFO: Pod "pod-secrets-a92c876f-a97a-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 11:24:51.887: INFO: Trying to get logs from node hunter-worker pod pod-secrets-a92c876f-a97a-11ea-978f-0242ac110018 container secret-volume-test: STEP: delete the pod Jun 8 11:24:51.905: INFO: Waiting for pod pod-secrets-a92c876f-a97a-11ea-978f-0242ac110018 to disappear Jun 8 11:24:51.922: INFO: Pod pod-secrets-a92c876f-a97a-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:24:51.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-bznhf" for this suite. Jun 8 11:24:57.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:24:58.029: INFO: namespace: e2e-tests-secrets-bznhf, resource: bindings, ignored listing per whitelist Jun 8 11:24:58.063: INFO: namespace e2e-tests-secrets-bznhf deletion completed in 6.13790069s STEP: Destroying namespace "e2e-tests-secret-namespace-w5hms" for this suite. 
Jun 8 11:25:04.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:25:04.107: INFO: namespace: e2e-tests-secret-namespace-w5hms, resource: bindings, ignored listing per whitelist Jun 8 11:25:04.164: INFO: namespace e2e-tests-secret-namespace-w5hms deletion completed in 6.10043917s • [SLOW TEST:16.580 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:25:04.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jun 8 11:25:08.277: INFO: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-b300a526-a97a-11ea-978f-0242ac110018,GenerateName:,Namespace:e2e-tests-events-8hrqt,SelfLink:/api/v1/namespaces/e2e-tests-events-8hrqt/pods/send-events-b300a526-a97a-11ea-978f-0242ac110018,UID:b302bfde-a97a-11ea-99e8-0242ac110002,ResourceVersion:14862356,Generation:0,CreationTimestamp:2020-06-08 11:25:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 248974933,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kqbxg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqbxg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-kqbxg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023d0950} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023d0970}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized 
True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:25:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:25:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:25:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:25:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.102,StartTime:2020-06-08 11:25:04 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-06-08 11:25:07 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://f129e31e48624f1adc0ebd010e59cbfe1a4ccc5c3c303ae88f1d53447b500b50}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Jun 8 11:25:10.282: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jun 8 11:25:12.289: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:25:12.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-8hrqt" for this suite. 
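The framework prints the entire retrieved Pod object on one line, as in the dump above. When only a few scalar fields are wanted (node, IPs, phase), a quick regex pass can recover them; this is a rough, hypothetical sketch (`extract_pod_fields` is an illustrative helper, and in general parsing the Go struct dump is best-effort only):

```python
import re

def extract_pod_fields(dump, fields=("NodeName", "HostIP", "PodIP", "Phase")):
    """Best-effort extraction of 'Key:value,' scalar fields from a Pod dump line."""
    out = {}
    for f in fields:
        m = re.search(rf"{f}:([^,]*)," , dump)
        if m:
            out[f] = m.group(1)
    return out

# Abbreviated stand-in for the single-line Pod dump shown in the log:
dump = ("NodeName:hunter-worker2,HostNetwork:false,"
        "Phase:Running,HostIP:172.17.0.4,PodIP:10.244.2.102,")
print(extract_pod_fields(dump))
```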
Jun 8 11:25:54.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:25:54.382: INFO: namespace: e2e-tests-events-8hrqt, resource: bindings, ignored listing per whitelist Jun 8 11:25:54.383: INFO: namespace e2e-tests-events-8hrqt deletion completed in 42.084269382s • [SLOW TEST:50.219 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:25:54.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-d0f0acb8-a97a-11ea-978f-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 8 11:25:54.520: INFO: Waiting up to 5m0s for pod "pod-configmaps-d0f148a6-a97a-11ea-978f-0242ac110018" in namespace "e2e-tests-configmap-5jnfg" to be "success or failure" Jun 8 11:25:54.523: INFO: Pod "pod-configmaps-d0f148a6-a97a-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.614678ms Jun 8 11:25:56.545: INFO: Pod "pod-configmaps-d0f148a6-a97a-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024602439s Jun 8 11:25:58.549: INFO: Pod "pod-configmaps-d0f148a6-a97a-11ea-978f-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.028980188s Jun 8 11:26:00.554: INFO: Pod "pod-configmaps-d0f148a6-a97a-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03352152s STEP: Saw pod success Jun 8 11:26:00.554: INFO: Pod "pod-configmaps-d0f148a6-a97a-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 11:26:00.557: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-d0f148a6-a97a-11ea-978f-0242ac110018 container configmap-volume-test: STEP: delete the pod Jun 8 11:26:00.577: INFO: Waiting for pod pod-configmaps-d0f148a6-a97a-11ea-978f-0242ac110018 to disappear Jun 8 11:26:00.582: INFO: Pod pod-configmaps-d0f148a6-a97a-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:26:00.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-5jnfg" for this suite. 
Jun 8 11:26:08.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:26:08.657: INFO: namespace: e2e-tests-configmap-5jnfg, resource: bindings, ignored listing per whitelist Jun 8 11:26:08.677: INFO: namespace e2e-tests-configmap-5jnfg deletion completed in 8.091640566s • [SLOW TEST:14.294 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:26:08.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-d9ce3af9-a97a-11ea-978f-0242ac110018 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:26:15.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-nndnl" for this suite. 
Jun 8 11:26:37.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:26:37.481: INFO: namespace: e2e-tests-configmap-nndnl, resource: bindings, ignored listing per whitelist Jun 8 11:26:37.491: INFO: namespace e2e-tests-configmap-nndnl deletion completed in 22.06982043s • [SLOW TEST:28.814 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:26:37.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Jun 8 11:26:37.647: INFO: Waiting up to 5m0s for pod "client-containers-eaab4e81-a97a-11ea-978f-0242ac110018" in namespace "e2e-tests-containers-tn8m6" to be "success or failure" Jun 8 11:26:37.649: INFO: Pod "client-containers-eaab4e81-a97a-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.107181ms Jun 8 11:26:39.943: INFO: Pod "client-containers-eaab4e81-a97a-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.296032439s Jun 8 11:26:41.976: INFO: Pod "client-containers-eaab4e81-a97a-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.329507709s STEP: Saw pod success Jun 8 11:26:41.976: INFO: Pod "client-containers-eaab4e81-a97a-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 11:26:42.003: INFO: Trying to get logs from node hunter-worker pod client-containers-eaab4e81-a97a-11ea-978f-0242ac110018 container test-container: STEP: delete the pod Jun 8 11:26:42.034: INFO: Waiting for pod client-containers-eaab4e81-a97a-11ea-978f-0242ac110018 to disappear Jun 8 11:26:42.150: INFO: Pod client-containers-eaab4e81-a97a-11ea-978f-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:26:42.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-tn8m6" for this suite. 
Jun 8 11:26:48.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:26:48.260: INFO: namespace: e2e-tests-containers-tn8m6, resource: bindings, ignored listing per whitelist Jun 8 11:26:48.310: INFO: namespace e2e-tests-containers-tn8m6 deletion completed in 6.15663364s • [SLOW TEST:10.818 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:26:48.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-c8gn STEP: Creating a pod to test atomic-volume-subpath Jun 8 11:26:48.412: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-c8gn" in namespace "e2e-tests-subpath-kgtbz" to be "success or failure" Jun 8 11:26:48.415: INFO: Pod "pod-subpath-test-configmap-c8gn": Phase="Pending", 
Reason="", readiness=false. Elapsed: 3.925521ms Jun 8 11:26:50.457: INFO: Pod "pod-subpath-test-configmap-c8gn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045686417s Jun 8 11:26:52.590: INFO: Pod "pod-subpath-test-configmap-c8gn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178047703s Jun 8 11:26:54.593: INFO: Pod "pod-subpath-test-configmap-c8gn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.180957406s Jun 8 11:26:56.635: INFO: Pod "pod-subpath-test-configmap-c8gn": Phase="Running", Reason="", readiness=true. Elapsed: 8.22383631s Jun 8 11:26:58.714: INFO: Pod "pod-subpath-test-configmap-c8gn": Phase="Running", Reason="", readiness=false. Elapsed: 10.302099958s Jun 8 11:27:00.721: INFO: Pod "pod-subpath-test-configmap-c8gn": Phase="Running", Reason="", readiness=false. Elapsed: 12.309926298s Jun 8 11:27:02.725: INFO: Pod "pod-subpath-test-configmap-c8gn": Phase="Running", Reason="", readiness=false. Elapsed: 14.313321151s Jun 8 11:27:04.729: INFO: Pod "pod-subpath-test-configmap-c8gn": Phase="Running", Reason="", readiness=false. Elapsed: 16.31735824s Jun 8 11:27:06.733: INFO: Pod "pod-subpath-test-configmap-c8gn": Phase="Running", Reason="", readiness=false. Elapsed: 18.321882036s Jun 8 11:27:08.738: INFO: Pod "pod-subpath-test-configmap-c8gn": Phase="Running", Reason="", readiness=false. Elapsed: 20.326154067s Jun 8 11:27:10.742: INFO: Pod "pod-subpath-test-configmap-c8gn": Phase="Running", Reason="", readiness=false. Elapsed: 22.330459417s Jun 8 11:27:12.745: INFO: Pod "pod-subpath-test-configmap-c8gn": Phase="Running", Reason="", readiness=false. Elapsed: 24.333929487s Jun 8 11:27:14.749: INFO: Pod "pod-subpath-test-configmap-c8gn": Phase="Running", Reason="", readiness=false. Elapsed: 26.33699284s Jun 8 11:27:16.752: INFO: Pod "pod-subpath-test-configmap-c8gn": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 28.340067817s STEP: Saw pod success Jun 8 11:27:16.752: INFO: Pod "pod-subpath-test-configmap-c8gn" satisfied condition "success or failure" Jun 8 11:27:16.754: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-c8gn container test-container-subpath-configmap-c8gn: STEP: delete the pod Jun 8 11:27:16.770: INFO: Waiting for pod pod-subpath-test-configmap-c8gn to disappear Jun 8 11:27:16.787: INFO: Pod pod-subpath-test-configmap-c8gn no longer exists STEP: Deleting pod pod-subpath-test-configmap-c8gn Jun 8 11:27:16.787: INFO: Deleting pod "pod-subpath-test-configmap-c8gn" in namespace "e2e-tests-subpath-kgtbz" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:27:16.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-kgtbz" for this suite. Jun 8 11:27:22.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:27:22.872: INFO: namespace: e2e-tests-subpath-kgtbz, resource: bindings, ignored listing per whitelist Jun 8 11:27:22.874: INFO: namespace e2e-tests-subpath-kgtbz deletion completed in 6.081728543s • [SLOW TEST:34.564 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] 
EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:27:22.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 8 11:27:23.047: INFO: Waiting up to 5m0s for pod "pod-05b8cf11-a97b-11ea-978f-0242ac110018" in namespace "e2e-tests-emptydir-6jkcp" to be "success or failure" Jun 8 11:27:23.068: INFO: Pod "pod-05b8cf11-a97b-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 20.515989ms Jun 8 11:27:25.072: INFO: Pod "pod-05b8cf11-a97b-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025087889s Jun 8 11:27:27.076: INFO: Pod "pod-05b8cf11-a97b-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028821011s STEP: Saw pod success Jun 8 11:27:27.076: INFO: Pod "pod-05b8cf11-a97b-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 11:27:27.079: INFO: Trying to get logs from node hunter-worker pod pod-05b8cf11-a97b-11ea-978f-0242ac110018 container test-container: STEP: delete the pod Jun 8 11:27:27.099: INFO: Waiting for pod pod-05b8cf11-a97b-11ea-978f-0242ac110018 to disappear Jun 8 11:27:27.118: INFO: Pod pod-05b8cf11-a97b-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:27:27.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-6jkcp" for this suite. 
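Each spec above ends with a 'namespace … deletion completed in …s' line; slow namespace teardown (e.g. the 40s+ deletions earlier in this run) is often worth tracking. A small, hypothetical sketch that maps namespaces to their deletion times — `deletion_times` and `NS_DONE_RE` are illustrative names:

```python
import re

# Matches: namespace e2e-tests-... deletion completed in 6.09662597s
NS_DONE_RE = re.compile(
    r"namespace (?P<ns>\S+) deletion completed in (?P<secs>[\d.]+)s"
)

def deletion_times(log_text):
    """Map namespace name -> deletion time in seconds."""
    return {m.group("ns"): float(m.group("secs"))
            for m in NS_DONE_RE.finditer(log_text)}

sample = ("namespace e2e-tests-emptydir-6jkcp deletion completed in 6.09662597s "
          "namespace e2e-tests-events-8hrqt deletion completed in 42.084269382s")
print(deletion_times(sample))
```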
Jun 8 11:27:33.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:27:33.196: INFO: namespace: e2e-tests-emptydir-6jkcp, resource: bindings, ignored listing per whitelist Jun 8 11:27:33.218: INFO: namespace e2e-tests-emptydir-6jkcp deletion completed in 6.09662597s • [SLOW TEST:10.344 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:27:33.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-wwmlw Jun 8 11:27:37.349: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-wwmlw STEP: checking the pod's current state and verifying that restartCount is present Jun 8 11:27:37.351: INFO: Initial restart count of pod liveness-exec is 0 Jun 8 11:28:29.539: 
INFO: Restart count of pod e2e-tests-container-probe-wwmlw/liveness-exec is now 1 (52.187495703s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:28:29.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-wwmlw" for this suite. Jun 8 11:28:35.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:28:35.658: INFO: namespace: e2e-tests-container-probe-wwmlw, resource: bindings, ignored listing per whitelist Jun 8 11:28:35.702: INFO: namespace e2e-tests-container-probe-wwmlw deletion completed in 6.140789333s • [SLOW TEST:62.485 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:28:35.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 8 11:28:35.860: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:28:36.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-87psr" for this suite. Jun 8 11:28:42.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:28:42.967: INFO: namespace: e2e-tests-custom-resource-definition-87psr, resource: bindings, ignored listing per whitelist Jun 8 11:28:43.033: INFO: namespace e2e-tests-custom-resource-definition-87psr deletion completed in 6.109683284s • [SLOW TEST:7.330 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:28:43.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account 
to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all Jun 8 11:28:43.148: INFO: Waiting up to 5m0s for pod "client-containers-35792a6b-a97b-11ea-978f-0242ac110018" in namespace "e2e-tests-containers-slscl" to be "success or failure" Jun 8 11:28:43.167: INFO: Pod "client-containers-35792a6b-a97b-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 18.566907ms Jun 8 11:28:45.176: INFO: Pod "client-containers-35792a6b-a97b-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027992808s Jun 8 11:28:47.180: INFO: Pod "client-containers-35792a6b-a97b-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032092342s STEP: Saw pod success Jun 8 11:28:47.180: INFO: Pod "client-containers-35792a6b-a97b-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 11:28:47.182: INFO: Trying to get logs from node hunter-worker2 pod client-containers-35792a6b-a97b-11ea-978f-0242ac110018 container test-container: STEP: delete the pod Jun 8 11:28:47.234: INFO: Waiting for pod client-containers-35792a6b-a97b-11ea-978f-0242ac110018 to disappear Jun 8 11:28:47.238: INFO: Pod client-containers-35792a6b-a97b-11ea-978f-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:28:47.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-slscl" for this suite. 
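Ginkgo marks long-running specs with '• [SLOW TEST:… seconds]' summary lines, several of which appear above. A minimal sketch for collecting those durations from a log, e.g. to find which specs dominate total run time (`slow_test_seconds` is an illustrative name, not a framework API):

```python
import re

# Matches Ginkgo summary markers like: [SLOW TEST:10.342 seconds]
SLOW_RE = re.compile(r"\[SLOW TEST:([\d.]+) seconds\]")

def slow_test_seconds(log_text):
    """Return the [SLOW TEST:...] durations found in the log, in order."""
    return [float(s) for s in SLOW_RE.findall(log_text)]

sample = "• [SLOW TEST:10.342 seconds] ... • [SLOW TEST:50.219 seconds]"
durations = slow_test_seconds(sample)
print(durations, sum(durations))
```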
Jun 8 11:28:53.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:28:53.324: INFO: namespace: e2e-tests-containers-slscl, resource: bindings, ignored listing per whitelist Jun 8 11:28:53.376: INFO: namespace e2e-tests-containers-slscl deletion completed in 6.134984256s • [SLOW TEST:10.342 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:28:53.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-8227t in namespace e2e-tests-proxy-49872 I0608 11:28:53.551158 6 runners.go:184] Created replication controller with name: proxy-service-8227t, namespace: e2e-tests-proxy-49872, replica count: 1 I0608 11:28:54.601599 6 runners.go:184] proxy-service-8227t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0608 11:28:55.601851 6 runners.go:184] proxy-service-8227t Pods: 1 out of 1 
created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0608 11:28:56.602115 6 runners.go:184] proxy-service-8227t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0608 11:28:57.602349 6 runners.go:184] proxy-service-8227t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0608 11:28:58.602622 6 runners.go:184] proxy-service-8227t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0608 11:28:59.602893 6 runners.go:184] proxy-service-8227t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0608 11:29:00.603138 6 runners.go:184] proxy-service-8227t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0608 11:29:01.603402 6 runners.go:184] proxy-service-8227t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0608 11:29:02.603675 6 runners.go:184] proxy-service-8227t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0608 11:29:03.603930 6 runners.go:184] proxy-service-8227t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0608 11:29:04.604130 6 runners.go:184] proxy-service-8227t Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 8 11:29:04.607: INFO: setup took 11.145483574s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jun 8 11:29:04.612: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-49872/pods/http:proxy-service-8227t-2qkwg:162/proxy/: bar (200; 4.24489ms) Jun 8 11:29:04.615: 
INFO: (0) /api/v1/namespaces/e2e-tests-proxy-49872/pods/http:proxy-service-8227t-2qkwg:1080/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-8sjt STEP: Creating a pod to test atomic-volume-subpath Jun 8 11:29:18.070: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-8sjt" in namespace "e2e-tests-subpath-k9x4k" to be "success or failure" Jun 8 11:29:18.117: INFO: Pod "pod-subpath-test-secret-8sjt": Phase="Pending", Reason="", readiness=false. Elapsed: 47.393714ms Jun 8 11:29:20.122: INFO: Pod "pod-subpath-test-secret-8sjt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051897403s Jun 8 11:29:22.126: INFO: Pod "pod-subpath-test-secret-8sjt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056357422s Jun 8 11:29:24.142: INFO: Pod "pod-subpath-test-secret-8sjt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072127246s Jun 8 11:29:26.146: INFO: Pod "pod-subpath-test-secret-8sjt": Phase="Running", Reason="", readiness=false. Elapsed: 8.076100059s Jun 8 11:29:28.150: INFO: Pod "pod-subpath-test-secret-8sjt": Phase="Running", Reason="", readiness=false. Elapsed: 10.080491519s Jun 8 11:29:30.155: INFO: Pod "pod-subpath-test-secret-8sjt": Phase="Running", Reason="", readiness=false. Elapsed: 12.084771661s Jun 8 11:29:32.159: INFO: Pod "pod-subpath-test-secret-8sjt": Phase="Running", Reason="", readiness=false. Elapsed: 14.089032253s Jun 8 11:29:34.163: INFO: Pod "pod-subpath-test-secret-8sjt": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.093258536s Jun 8 11:29:36.166: INFO: Pod "pod-subpath-test-secret-8sjt": Phase="Running", Reason="", readiness=false. Elapsed: 18.096319986s Jun 8 11:29:38.171: INFO: Pod "pod-subpath-test-secret-8sjt": Phase="Running", Reason="", readiness=false. Elapsed: 20.100834731s Jun 8 11:29:40.175: INFO: Pod "pod-subpath-test-secret-8sjt": Phase="Running", Reason="", readiness=false. Elapsed: 22.105148478s Jun 8 11:29:42.181: INFO: Pod "pod-subpath-test-secret-8sjt": Phase="Running", Reason="", readiness=false. Elapsed: 24.111301393s Jun 8 11:29:44.201: INFO: Pod "pod-subpath-test-secret-8sjt": Phase="Running", Reason="", readiness=false. Elapsed: 26.131256589s Jun 8 11:29:46.205: INFO: Pod "pod-subpath-test-secret-8sjt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.135365177s STEP: Saw pod success Jun 8 11:29:46.205: INFO: Pod "pod-subpath-test-secret-8sjt" satisfied condition "success or failure" Jun 8 11:29:46.208: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-secret-8sjt container test-container-subpath-secret-8sjt: STEP: delete the pod Jun 8 11:29:46.235: INFO: Waiting for pod pod-subpath-test-secret-8sjt to disappear Jun 8 11:29:46.239: INFO: Pod pod-subpath-test-secret-8sjt no longer exists STEP: Deleting pod pod-subpath-test-secret-8sjt Jun 8 11:29:46.239: INFO: Deleting pod "pod-subpath-test-secret-8sjt" in namespace "e2e-tests-subpath-k9x4k" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:29:46.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-k9x4k" for this suite. 
Jun 8 11:29:52.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:29:52.380: INFO: namespace: e2e-tests-subpath-k9x4k, resource: bindings, ignored listing per whitelist Jun 8 11:29:52.383: INFO: namespace e2e-tests-subpath-k9x4k deletion completed in 6.137293886s • [SLOW TEST:34.472 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:29:52.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 8 11:29:52.502: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5ecfb649-a97b-11ea-978f-0242ac110018" in namespace "e2e-tests-projected-bnbc8" to be "success or failure" Jun 8 11:29:52.510: INFO: Pod 
"downwardapi-volume-5ecfb649-a97b-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070238ms Jun 8 11:29:54.514: INFO: Pod "downwardapi-volume-5ecfb649-a97b-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011895045s Jun 8 11:29:56.519: INFO: Pod "downwardapi-volume-5ecfb649-a97b-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016593279s STEP: Saw pod success Jun 8 11:29:56.519: INFO: Pod "downwardapi-volume-5ecfb649-a97b-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 11:29:56.522: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-5ecfb649-a97b-11ea-978f-0242ac110018 container client-container: STEP: delete the pod Jun 8 11:29:56.573: INFO: Waiting for pod downwardapi-volume-5ecfb649-a97b-11ea-978f-0242ac110018 to disappear Jun 8 11:29:56.576: INFO: Pod downwardapi-volume-5ecfb649-a97b-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:29:56.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bnbc8" for this suite. 
Jun 8 11:30:02.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:30:02.612: INFO: namespace: e2e-tests-projected-bnbc8, resource: bindings, ignored listing per whitelist Jun 8 11:30:02.665: INFO: namespace e2e-tests-projected-bnbc8 deletion completed in 6.085322677s • [SLOW TEST:10.282 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:30:02.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 8 11:30:02.808: INFO: Waiting up to 5m0s for pod "downwardapi-volume-64ebc5ce-a97b-11ea-978f-0242ac110018" in namespace "e2e-tests-downward-api-c8zjj" to be "success or failure" Jun 8 11:30:02.822: INFO: Pod "downwardapi-volume-64ebc5ce-a97b-11ea-978f-0242ac110018": Phase="Pending", 
Reason="", readiness=false. Elapsed: 13.578024ms Jun 8 11:30:04.826: INFO: Pod "downwardapi-volume-64ebc5ce-a97b-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017572636s Jun 8 11:30:06.848: INFO: Pod "downwardapi-volume-64ebc5ce-a97b-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040155544s STEP: Saw pod success Jun 8 11:30:06.848: INFO: Pod "downwardapi-volume-64ebc5ce-a97b-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 11:30:06.851: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-64ebc5ce-a97b-11ea-978f-0242ac110018 container client-container: STEP: delete the pod Jun 8 11:30:06.983: INFO: Waiting for pod downwardapi-volume-64ebc5ce-a97b-11ea-978f-0242ac110018 to disappear Jun 8 11:30:06.994: INFO: Pod downwardapi-volume-64ebc5ce-a97b-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:30:06.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-c8zjj" for this suite. 
Jun 8 11:30:13.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:30:13.108: INFO: namespace: e2e-tests-downward-api-c8zjj, resource: bindings, ignored listing per whitelist Jun 8 11:30:13.134: INFO: namespace e2e-tests-downward-api-c8zjj deletion completed in 6.091081225s • [SLOW TEST:10.468 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:30:13.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jun 8 11:30:17.786: INFO: Successfully updated pod "annotationupdate6b2b8f4c-a97b-11ea-978f-0242ac110018" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:30:19.879: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "e2e-tests-downward-api-5fhvd" for this suite. Jun 8 11:30:33.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:30:33.969: INFO: namespace: e2e-tests-downward-api-5fhvd, resource: bindings, ignored listing per whitelist Jun 8 11:30:33.983: INFO: namespace e2e-tests-downward-api-5fhvd deletion completed in 14.0998137s • [SLOW TEST:20.849 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:30:33.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 8 11:30:34.100: INFO: Waiting up to 5m0s for pod "downwardapi-volume-779aff32-a97b-11ea-978f-0242ac110018" in namespace "e2e-tests-downward-api-8bg4x" to be "success or failure" Jun 8 11:30:34.119: 
INFO: Pod "downwardapi-volume-779aff32-a97b-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 19.630676ms Jun 8 11:30:36.123: INFO: Pod "downwardapi-volume-779aff32-a97b-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023425171s Jun 8 11:30:38.127: INFO: Pod "downwardapi-volume-779aff32-a97b-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027800965s STEP: Saw pod success Jun 8 11:30:38.127: INFO: Pod "downwardapi-volume-779aff32-a97b-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 11:30:38.131: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-779aff32-a97b-11ea-978f-0242ac110018 container client-container: STEP: delete the pod Jun 8 11:30:38.210: INFO: Waiting for pod downwardapi-volume-779aff32-a97b-11ea-978f-0242ac110018 to disappear Jun 8 11:30:38.217: INFO: Pod downwardapi-volume-779aff32-a97b-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:30:38.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-8bg4x" for this suite. 
Jun 8 11:30:44.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:30:44.300: INFO: namespace: e2e-tests-downward-api-8bg4x, resource: bindings, ignored listing per whitelist Jun 8 11:30:44.372: INFO: namespace e2e-tests-downward-api-8bg4x deletion completed in 6.151648874s • [SLOW TEST:10.388 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:30:44.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-7dc942a2-a97b-11ea-978f-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 8 11:30:44.515: INFO: Waiting up to 5m0s for pod "pod-configmaps-7dccd85b-a97b-11ea-978f-0242ac110018" in namespace "e2e-tests-configmap-kxqnf" to be "success or failure" Jun 8 11:30:44.534: INFO: Pod "pod-configmaps-7dccd85b-a97b-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.333655ms Jun 8 11:30:46.539: INFO: Pod "pod-configmaps-7dccd85b-a97b-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023981458s Jun 8 11:30:48.542: INFO: Pod "pod-configmaps-7dccd85b-a97b-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027470695s STEP: Saw pod success Jun 8 11:30:48.542: INFO: Pod "pod-configmaps-7dccd85b-a97b-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 11:30:48.545: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-7dccd85b-a97b-11ea-978f-0242ac110018 container configmap-volume-test: STEP: delete the pod Jun 8 11:30:48.572: INFO: Waiting for pod pod-configmaps-7dccd85b-a97b-11ea-978f-0242ac110018 to disappear Jun 8 11:30:48.602: INFO: Pod pod-configmaps-7dccd85b-a97b-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:30:48.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-kxqnf" for this suite. 
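Editor's note: the StatefulSet test that follows asserts "Verifying that stateful set ss was scaled up in order" — pods must appear as ss-0, ss-1, ss-2 in strictly increasing ordinal order. A self-contained sketch of that ordering check (an illustrative helper, not the framework's actual verification code):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// inOrdinalOrder reports whether StatefulSet pod names (name-0, name-1, ...)
// appear in contiguous, strictly increasing ordinal order starting at 0 —
// the property the ordered scale-up check asserts.
func inOrdinalOrder(name string, pods []string) bool {
	prev := -1
	for _, p := range pods {
		n, err := strconv.Atoi(strings.TrimPrefix(p, name+"-"))
		if err != nil || n != prev+1 {
			return false
		}
		prev = n
	}
	return true
}

func main() {
	fmt.Println(inOrdinalOrder("ss", []string{"ss-0", "ss-1", "ss-2"})) // true
	fmt.Println(inOrdinalOrder("ss", []string{"ss-0", "ss-2", "ss-1"})) // false
}
```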
Jun 8 11:30:54.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:30:54.691: INFO: namespace: e2e-tests-configmap-kxqnf, resource: bindings, ignored listing per whitelist Jun 8 11:30:54.748: INFO: namespace e2e-tests-configmap-kxqnf deletion completed in 6.142145995s • [SLOW TEST:10.376 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:30:54.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-fbcgj [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector 
baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-fbcgj STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-fbcgj Jun 8 11:30:54.912: INFO: Found 0 stateful pods, waiting for 1 Jun 8 11:31:04.918: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jun 8 11:31:04.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-fbcgj ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 8 11:31:05.164: INFO: stderr: "I0608 11:31:05.039290 1043 log.go:172] (0xc00014c840) (0xc000738640) Create stream\nI0608 11:31:05.039356 1043 log.go:172] (0xc00014c840) (0xc000738640) Stream added, broadcasting: 1\nI0608 11:31:05.042207 1043 log.go:172] (0xc00014c840) Reply frame received for 1\nI0608 11:31:05.042254 1043 log.go:172] (0xc00014c840) (0xc0007386e0) Create stream\nI0608 11:31:05.042280 1043 log.go:172] (0xc00014c840) (0xc0007386e0) Stream added, broadcasting: 3\nI0608 11:31:05.042989 1043 log.go:172] (0xc00014c840) Reply frame received for 3\nI0608 11:31:05.043037 1043 log.go:172] (0xc00014c840) (0xc000738780) Create stream\nI0608 11:31:05.043065 1043 log.go:172] (0xc00014c840) (0xc000738780) Stream added, broadcasting: 5\nI0608 11:31:05.044175 1043 log.go:172] (0xc00014c840) Reply frame received for 5\nI0608 11:31:05.156138 1043 log.go:172] (0xc00014c840) Data frame received for 3\nI0608 11:31:05.156189 1043 log.go:172] (0xc0007386e0) (3) Data frame handling\nI0608 11:31:05.156226 1043 log.go:172] (0xc0007386e0) (3) Data frame sent\nI0608 11:31:05.156245 1043 log.go:172] (0xc00014c840) Data frame received for 3\nI0608 11:31:05.156260 1043 log.go:172] (0xc0007386e0) (3) Data frame handling\nI0608 11:31:05.156311 1043 log.go:172] (0xc00014c840) Data frame received for 5\nI0608 11:31:05.156350 1043 
log.go:172] (0xc000738780) (5) Data frame handling\nI0608 11:31:05.158439 1043 log.go:172] (0xc00014c840) Data frame received for 1\nI0608 11:31:05.158475 1043 log.go:172] (0xc000738640) (1) Data frame handling\nI0608 11:31:05.158497 1043 log.go:172] (0xc000738640) (1) Data frame sent\nI0608 11:31:05.158521 1043 log.go:172] (0xc00014c840) (0xc000738640) Stream removed, broadcasting: 1\nI0608 11:31:05.158551 1043 log.go:172] (0xc00014c840) Go away received\nI0608 11:31:05.158857 1043 log.go:172] (0xc00014c840) (0xc000738640) Stream removed, broadcasting: 1\nI0608 11:31:05.158889 1043 log.go:172] (0xc00014c840) (0xc0007386e0) Stream removed, broadcasting: 3\nI0608 11:31:05.158902 1043 log.go:172] (0xc00014c840) (0xc000738780) Stream removed, broadcasting: 5\n" Jun 8 11:31:05.164: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 8 11:31:05.164: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 8 11:31:05.169: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 8 11:31:15.175: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 8 11:31:15.175: INFO: Waiting for statefulset status.replicas updated to 0 Jun 8 11:31:15.217: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999206s Jun 8 11:31:16.244: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.967572791s Jun 8 11:31:17.663: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.940719081s Jun 8 11:31:18.669: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.521404422s Jun 8 11:31:19.673: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.515804469s Jun 8 11:31:20.679: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.511374497s Jun 8 11:31:21.683: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.505947008s Jun 
8 11:31:22.687: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.501709094s Jun 8 11:31:23.691: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.4977768s Jun 8 11:31:24.748: INFO: Verifying statefulset ss doesn't scale past 1 for another 493.544109ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-fbcgj Jun 8 11:31:25.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-fbcgj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:31:25.946: INFO: stderr: "I0608 11:31:25.872855 1065 log.go:172] (0xc00014c840) (0xc00078c640) Create stream\nI0608 11:31:25.872945 1065 log.go:172] (0xc00014c840) (0xc00078c640) Stream added, broadcasting: 1\nI0608 11:31:25.876261 1065 log.go:172] (0xc00014c840) Reply frame received for 1\nI0608 11:31:25.876297 1065 log.go:172] (0xc00014c840) (0xc000666dc0) Create stream\nI0608 11:31:25.876307 1065 log.go:172] (0xc00014c840) (0xc000666dc0) Stream added, broadcasting: 3\nI0608 11:31:25.877554 1065 log.go:172] (0xc00014c840) Reply frame received for 3\nI0608 11:31:25.877626 1065 log.go:172] (0xc00014c840) (0xc00078c6e0) Create stream\nI0608 11:31:25.877659 1065 log.go:172] (0xc00014c840) (0xc00078c6e0) Stream added, broadcasting: 5\nI0608 11:31:25.878668 1065 log.go:172] (0xc00014c840) Reply frame received for 5\nI0608 11:31:25.941499 1065 log.go:172] (0xc00014c840) Data frame received for 3\nI0608 11:31:25.941527 1065 log.go:172] (0xc000666dc0) (3) Data frame handling\nI0608 11:31:25.941534 1065 log.go:172] (0xc000666dc0) (3) Data frame sent\nI0608 11:31:25.941540 1065 log.go:172] (0xc00014c840) Data frame received for 3\nI0608 11:31:25.941544 1065 log.go:172] (0xc000666dc0) (3) Data frame handling\nI0608 11:31:25.941553 1065 log.go:172] (0xc00014c840) Data frame received for 5\nI0608 11:31:25.941557 1065 log.go:172] (0xc00078c6e0) (5) Data 
frame handling\nI0608 11:31:25.942971 1065 log.go:172] (0xc00014c840) Data frame received for 1\nI0608 11:31:25.942995 1065 log.go:172] (0xc00078c640) (1) Data frame handling\nI0608 11:31:25.943015 1065 log.go:172] (0xc00078c640) (1) Data frame sent\nI0608 11:31:25.943033 1065 log.go:172] (0xc00014c840) (0xc00078c640) Stream removed, broadcasting: 1\nI0608 11:31:25.943057 1065 log.go:172] (0xc00014c840) Go away received\nI0608 11:31:25.943239 1065 log.go:172] (0xc00014c840) (0xc00078c640) Stream removed, broadcasting: 1\nI0608 11:31:25.943264 1065 log.go:172] (0xc00014c840) (0xc000666dc0) Stream removed, broadcasting: 3\nI0608 11:31:25.943275 1065 log.go:172] (0xc00014c840) (0xc00078c6e0) Stream removed, broadcasting: 5\n" Jun 8 11:31:25.946: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 8 11:31:25.946: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 8 11:31:25.949: INFO: Found 1 stateful pods, waiting for 3 Jun 8 11:31:35.954: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 8 11:31:35.954: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 8 11:31:35.954: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jun 8 11:31:35.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-fbcgj ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 8 11:31:36.205: INFO: stderr: "I0608 11:31:36.098218 1087 log.go:172] (0xc000138580) (0xc0000eabe0) Create stream\nI0608 11:31:36.098273 1087 log.go:172] (0xc000138580) (0xc0000eabe0) Stream added, broadcasting: 1\nI0608 11:31:36.100801 1087 log.go:172] (0xc000138580) Reply frame received for 
1\nI0608 11:31:36.100843 1087 log.go:172] (0xc000138580) (0xc0002c4000) Create stream\nI0608 11:31:36.100857 1087 log.go:172] (0xc000138580) (0xc0002c4000) Stream added, broadcasting: 3\nI0608 11:31:36.102121 1087 log.go:172] (0xc000138580) Reply frame received for 3\nI0608 11:31:36.102160 1087 log.go:172] (0xc000138580) (0xc0002c40a0) Create stream\nI0608 11:31:36.102168 1087 log.go:172] (0xc000138580) (0xc0002c40a0) Stream added, broadcasting: 5\nI0608 11:31:36.103355 1087 log.go:172] (0xc000138580) Reply frame received for 5\nI0608 11:31:36.198279 1087 log.go:172] (0xc000138580) Data frame received for 5\nI0608 11:31:36.198321 1087 log.go:172] (0xc0002c40a0) (5) Data frame handling\nI0608 11:31:36.198350 1087 log.go:172] (0xc000138580) Data frame received for 3\nI0608 11:31:36.198361 1087 log.go:172] (0xc0002c4000) (3) Data frame handling\nI0608 11:31:36.198370 1087 log.go:172] (0xc0002c4000) (3) Data frame sent\nI0608 11:31:36.198376 1087 log.go:172] (0xc000138580) Data frame received for 3\nI0608 11:31:36.198381 1087 log.go:172] (0xc0002c4000) (3) Data frame handling\nI0608 11:31:36.199733 1087 log.go:172] (0xc000138580) Data frame received for 1\nI0608 11:31:36.199765 1087 log.go:172] (0xc0000eabe0) (1) Data frame handling\nI0608 11:31:36.199781 1087 log.go:172] (0xc0000eabe0) (1) Data frame sent\nI0608 11:31:36.199812 1087 log.go:172] (0xc000138580) (0xc0000eabe0) Stream removed, broadcasting: 1\nI0608 11:31:36.199856 1087 log.go:172] (0xc000138580) Go away received\nI0608 11:31:36.200084 1087 log.go:172] (0xc000138580) (0xc0000eabe0) Stream removed, broadcasting: 1\nI0608 11:31:36.200112 1087 log.go:172] (0xc000138580) (0xc0002c4000) Stream removed, broadcasting: 3\nI0608 11:31:36.200126 1087 log.go:172] (0xc000138580) (0xc0002c40a0) Stream removed, broadcasting: 5\n" Jun 8 11:31:36.205: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 8 11:31:36.205: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: 
'/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 8 11:31:36.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-fbcgj ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 8 11:31:36.479: INFO: stderr: "I0608 11:31:36.336272 1109 log.go:172] (0xc000138840) (0xc00073a640) Create stream\nI0608 11:31:36.336347 1109 log.go:172] (0xc000138840) (0xc00073a640) Stream added, broadcasting: 1\nI0608 11:31:36.338947 1109 log.go:172] (0xc000138840) Reply frame received for 1\nI0608 11:31:36.339006 1109 log.go:172] (0xc000138840) (0xc0005c2b40) Create stream\nI0608 11:31:36.339034 1109 log.go:172] (0xc000138840) (0xc0005c2b40) Stream added, broadcasting: 3\nI0608 11:31:36.340112 1109 log.go:172] (0xc000138840) Reply frame received for 3\nI0608 11:31:36.340134 1109 log.go:172] (0xc000138840) (0xc0001a0000) Create stream\nI0608 11:31:36.340145 1109 log.go:172] (0xc000138840) (0xc0001a0000) Stream added, broadcasting: 5\nI0608 11:31:36.341272 1109 log.go:172] (0xc000138840) Reply frame received for 5\nI0608 11:31:36.471345 1109 log.go:172] (0xc000138840) Data frame received for 3\nI0608 11:31:36.471455 1109 log.go:172] (0xc0005c2b40) (3) Data frame handling\nI0608 11:31:36.471572 1109 log.go:172] (0xc0005c2b40) (3) Data frame sent\nI0608 11:31:36.471588 1109 log.go:172] (0xc000138840) Data frame received for 3\nI0608 11:31:36.471596 1109 log.go:172] (0xc0005c2b40) (3) Data frame handling\nI0608 11:31:36.471810 1109 log.go:172] (0xc000138840) Data frame received for 5\nI0608 11:31:36.471843 1109 log.go:172] (0xc0001a0000) (5) Data frame handling\nI0608 11:31:36.473975 1109 log.go:172] (0xc000138840) Data frame received for 1\nI0608 11:31:36.474003 1109 log.go:172] (0xc00073a640) (1) Data frame handling\nI0608 11:31:36.474030 1109 log.go:172] (0xc00073a640) (1) Data frame sent\nI0608 11:31:36.474047 1109 log.go:172] (0xc000138840) (0xc00073a640) Stream removed, broadcasting: 1\nI0608 
11:31:36.474080 1109 log.go:172] (0xc000138840) Go away received\nI0608 11:31:36.474479 1109 log.go:172] (0xc000138840) (0xc00073a640) Stream removed, broadcasting: 1\nI0608 11:31:36.474506 1109 log.go:172] (0xc000138840) (0xc0005c2b40) Stream removed, broadcasting: 3\nI0608 11:31:36.474519 1109 log.go:172] (0xc000138840) (0xc0001a0000) Stream removed, broadcasting: 5\n" Jun 8 11:31:36.479: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 8 11:31:36.479: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 8 11:31:36.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-fbcgj ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 8 11:31:36.732: INFO: stderr: "I0608 11:31:36.610985 1133 log.go:172] (0xc00015c840) (0xc000705400) Create stream\nI0608 11:31:36.611052 1133 log.go:172] (0xc00015c840) (0xc000705400) Stream added, broadcasting: 1\nI0608 11:31:36.613807 1133 log.go:172] (0xc00015c840) Reply frame received for 1\nI0608 11:31:36.613871 1133 log.go:172] (0xc00015c840) (0xc0002f2000) Create stream\nI0608 11:31:36.613886 1133 log.go:172] (0xc00015c840) (0xc0002f2000) Stream added, broadcasting: 3\nI0608 11:31:36.614901 1133 log.go:172] (0xc00015c840) Reply frame received for 3\nI0608 11:31:36.614946 1133 log.go:172] (0xc00015c840) (0xc0002f20a0) Create stream\nI0608 11:31:36.614959 1133 log.go:172] (0xc00015c840) (0xc0002f20a0) Stream added, broadcasting: 5\nI0608 11:31:36.615901 1133 log.go:172] (0xc00015c840) Reply frame received for 5\nI0608 11:31:36.723401 1133 log.go:172] (0xc00015c840) Data frame received for 3\nI0608 11:31:36.723435 1133 log.go:172] (0xc0002f2000) (3) Data frame handling\nI0608 11:31:36.723463 1133 log.go:172] (0xc0002f2000) (3) Data frame sent\nI0608 11:31:36.723707 1133 log.go:172] (0xc00015c840) Data frame received for 3\nI0608 
11:31:36.723720 1133 log.go:172] (0xc0002f2000) (3) Data frame handling\nI0608 11:31:36.723746 1133 log.go:172] (0xc00015c840) Data frame received for 5\nI0608 11:31:36.723781 1133 log.go:172] (0xc0002f20a0) (5) Data frame handling\nI0608 11:31:36.726316 1133 log.go:172] (0xc00015c840) Data frame received for 1\nI0608 11:31:36.726350 1133 log.go:172] (0xc000705400) (1) Data frame handling\nI0608 11:31:36.726374 1133 log.go:172] (0xc000705400) (1) Data frame sent\nI0608 11:31:36.726402 1133 log.go:172] (0xc00015c840) (0xc000705400) Stream removed, broadcasting: 1\nI0608 11:31:36.726620 1133 log.go:172] (0xc00015c840) Go away received\nI0608 11:31:36.726677 1133 log.go:172] (0xc00015c840) (0xc000705400) Stream removed, broadcasting: 1\nI0608 11:31:36.726714 1133 log.go:172] (0xc00015c840) (0xc0002f2000) Stream removed, broadcasting: 3\nI0608 11:31:36.726803 1133 log.go:172] (0xc00015c840) (0xc0002f20a0) Stream removed, broadcasting: 5\n" Jun 8 11:31:36.732: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 8 11:31:36.732: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 8 11:31:36.732: INFO: Waiting for statefulset status.replicas updated to 0 Jun 8 11:31:36.742: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Jun 8 11:31:46.751: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 8 11:31:46.751: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 8 11:31:46.751: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 8 11:31:46.764: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999588s Jun 8 11:31:47.796: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993671528s Jun 8 11:31:48.800: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.961604629s 
Jun 8 11:31:49.806: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.957285376s Jun 8 11:31:50.810: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.952023688s Jun 8 11:31:51.816: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.947320661s Jun 8 11:31:52.842: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.942186283s Jun 8 11:31:53.976: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.91587017s Jun 8 11:31:54.982: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.781623794s Jun 8 11:31:55.986: INFO: Verifying statefulset ss doesn't scale past 3 for another 776.134543ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-fbcgj Jun 8 11:31:56.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-fbcgj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:31:57.203: INFO: stderr: "I0608 11:31:57.129663 1157 log.go:172] (0xc000154840) (0xc00067b4a0) Create stream\nI0608 11:31:57.129721 1157 log.go:172] (0xc000154840) (0xc00067b4a0) Stream added, broadcasting: 1\nI0608 11:31:57.132600 1157 log.go:172] (0xc000154840) Reply frame received for 1\nI0608 11:31:57.132794 1157 log.go:172] (0xc000154840) (0xc0005bc000) Create stream\nI0608 11:31:57.132847 1157 log.go:172] (0xc000154840) (0xc0005bc000) Stream added, broadcasting: 3\nI0608 11:31:57.134264 1157 log.go:172] (0xc000154840) Reply frame received for 3\nI0608 11:31:57.134299 1157 log.go:172] (0xc000154840) (0xc000414000) Create stream\nI0608 11:31:57.134313 1157 log.go:172] (0xc000154840) (0xc000414000) Stream added, broadcasting: 5\nI0608 11:31:57.135357 1157 log.go:172] (0xc000154840) Reply frame received for 5\nI0608 11:31:57.195276 1157 log.go:172] (0xc000154840) Data frame received for 5\nI0608 11:31:57.195303 1157 log.go:172] (0xc000414000) (5) Data frame 
handling\nI0608 11:31:57.195324 1157 log.go:172] (0xc000154840) Data frame received for 3\nI0608 11:31:57.195351 1157 log.go:172] (0xc0005bc000) (3) Data frame handling\nI0608 11:31:57.195379 1157 log.go:172] (0xc0005bc000) (3) Data frame sent\nI0608 11:31:57.195395 1157 log.go:172] (0xc000154840) Data frame received for 3\nI0608 11:31:57.195404 1157 log.go:172] (0xc0005bc000) (3) Data frame handling\nI0608 11:31:57.197370 1157 log.go:172] (0xc000154840) Data frame received for 1\nI0608 11:31:57.197409 1157 log.go:172] (0xc00067b4a0) (1) Data frame handling\nI0608 11:31:57.197432 1157 log.go:172] (0xc00067b4a0) (1) Data frame sent\nI0608 11:31:57.197456 1157 log.go:172] (0xc000154840) (0xc00067b4a0) Stream removed, broadcasting: 1\nI0608 11:31:57.197537 1157 log.go:172] (0xc000154840) Go away received\nI0608 11:31:57.197707 1157 log.go:172] (0xc000154840) (0xc00067b4a0) Stream removed, broadcasting: 1\nI0608 11:31:57.197743 1157 log.go:172] (0xc000154840) (0xc0005bc000) Stream removed, broadcasting: 3\nI0608 11:31:57.197763 1157 log.go:172] (0xc000154840) (0xc000414000) Stream removed, broadcasting: 5\n" Jun 8 11:31:57.203: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 8 11:31:57.203: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 8 11:31:57.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-fbcgj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:31:57.546: INFO: stderr: "I0608 11:31:57.338731 1179 log.go:172] (0xc000774160) (0xc0006d8640) Create stream\nI0608 11:31:57.338793 1179 log.go:172] (0xc000774160) (0xc0006d8640) Stream added, broadcasting: 1\nI0608 11:31:57.341283 1179 log.go:172] (0xc000774160) Reply frame received for 1\nI0608 11:31:57.341311 1179 log.go:172] (0xc000774160) (0xc0006d86e0) Create stream\nI0608 11:31:57.341318 1179 
log.go:172] (0xc000774160) (0xc0006d86e0) Stream added, broadcasting: 3\nI0608 11:31:57.341969 1179 log.go:172] (0xc000774160) Reply frame received for 3\nI0608 11:31:57.341996 1179 log.go:172] (0xc000774160) (0xc000494dc0) Create stream\nI0608 11:31:57.342008 1179 log.go:172] (0xc000774160) (0xc000494dc0) Stream added, broadcasting: 5\nI0608 11:31:57.342627 1179 log.go:172] (0xc000774160) Reply frame received for 5\nI0608 11:31:57.540935 1179 log.go:172] (0xc000774160) Data frame received for 5\nI0608 11:31:57.540954 1179 log.go:172] (0xc000494dc0) (5) Data frame handling\nI0608 11:31:57.540984 1179 log.go:172] (0xc000774160) Data frame received for 3\nI0608 11:31:57.541019 1179 log.go:172] (0xc0006d86e0) (3) Data frame handling\nI0608 11:31:57.541039 1179 log.go:172] (0xc0006d86e0) (3) Data frame sent\nI0608 11:31:57.541052 1179 log.go:172] (0xc000774160) Data frame received for 3\nI0608 11:31:57.541067 1179 log.go:172] (0xc0006d86e0) (3) Data frame handling\nI0608 11:31:57.542588 1179 log.go:172] (0xc000774160) Data frame received for 1\nI0608 11:31:57.542611 1179 log.go:172] (0xc0006d8640) (1) Data frame handling\nI0608 11:31:57.542636 1179 log.go:172] (0xc0006d8640) (1) Data frame sent\nI0608 11:31:57.542658 1179 log.go:172] (0xc000774160) (0xc0006d8640) Stream removed, broadcasting: 1\nI0608 11:31:57.542697 1179 log.go:172] (0xc000774160) Go away received\nI0608 11:31:57.542830 1179 log.go:172] (0xc000774160) (0xc0006d8640) Stream removed, broadcasting: 1\nI0608 11:31:57.542848 1179 log.go:172] (0xc000774160) (0xc0006d86e0) Stream removed, broadcasting: 3\nI0608 11:31:57.542860 1179 log.go:172] (0xc000774160) (0xc000494dc0) Stream removed, broadcasting: 5\n" Jun 8 11:31:57.546: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 8 11:31:57.546: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 8 11:31:57.546: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-fbcgj ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:31:57.758: INFO: stderr: "I0608 11:31:57.676450 1201 log.go:172] (0xc00016c840) (0xc000752640) Create stream\nI0608 11:31:57.676506 1201 log.go:172] (0xc00016c840) (0xc000752640) Stream added, broadcasting: 1\nI0608 11:31:57.679716 1201 log.go:172] (0xc00016c840) Reply frame received for 1\nI0608 11:31:57.679775 1201 log.go:172] (0xc00016c840) (0xc000558d20) Create stream\nI0608 11:31:57.679794 1201 log.go:172] (0xc00016c840) (0xc000558d20) Stream added, broadcasting: 3\nI0608 11:31:57.680779 1201 log.go:172] (0xc00016c840) Reply frame received for 3\nI0608 11:31:57.680818 1201 log.go:172] (0xc00016c840) (0xc0007526e0) Create stream\nI0608 11:31:57.680829 1201 log.go:172] (0xc00016c840) (0xc0007526e0) Stream added, broadcasting: 5\nI0608 11:31:57.681944 1201 log.go:172] (0xc00016c840) Reply frame received for 5\nI0608 11:31:57.748845 1201 log.go:172] (0xc00016c840) Data frame received for 5\nI0608 11:31:57.748882 1201 log.go:172] (0xc0007526e0) (5) Data frame handling\nI0608 11:31:57.748911 1201 log.go:172] (0xc00016c840) Data frame received for 3\nI0608 11:31:57.748927 1201 log.go:172] (0xc000558d20) (3) Data frame handling\nI0608 11:31:57.748940 1201 log.go:172] (0xc000558d20) (3) Data frame sent\nI0608 11:31:57.748952 1201 log.go:172] (0xc00016c840) Data frame received for 3\nI0608 11:31:57.748962 1201 log.go:172] (0xc000558d20) (3) Data frame handling\nI0608 11:31:57.750584 1201 log.go:172] (0xc00016c840) Data frame received for 1\nI0608 11:31:57.750609 1201 log.go:172] (0xc000752640) (1) Data frame handling\nI0608 11:31:57.750625 1201 log.go:172] (0xc000752640) (1) Data frame sent\nI0608 11:31:57.750644 1201 log.go:172] (0xc00016c840) (0xc000752640) Stream removed, broadcasting: 1\nI0608 11:31:57.750696 1201 log.go:172] (0xc00016c840) Go away received\nI0608 11:31:57.750873 1201 log.go:172] 
(0xc00016c840) (0xc000752640) Stream removed, broadcasting: 1\nI0608 11:31:57.750889 1201 log.go:172] (0xc00016c840) (0xc000558d20) Stream removed, broadcasting: 3\nI0608 11:31:57.750903 1201 log.go:172] (0xc00016c840) (0xc0007526e0) Stream removed, broadcasting: 5\n" Jun 8 11:31:57.758: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 8 11:31:57.758: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 8 11:31:57.758: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jun 8 11:32:27.868: INFO: Deleting all statefulset in ns e2e-tests-statefulset-fbcgj Jun 8 11:32:27.871: INFO: Scaling statefulset ss to 0 Jun 8 11:32:27.878: INFO: Waiting for statefulset status.replicas updated to 0 Jun 8 11:32:27.881: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:32:27.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-fbcgj" for this suite. 
Jun 8 11:32:34.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:32:34.056: INFO: namespace: e2e-tests-statefulset-fbcgj, resource: bindings, ignored listing per whitelist Jun 8 11:32:34.107: INFO: namespace e2e-tests-statefulset-fbcgj deletion completed in 6.210913017s • [SLOW TEST:99.358 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:32:34.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:32:38.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-vzt82" for this suite. 
Jun 8 11:32:44.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:32:44.820: INFO: namespace: e2e-tests-emptydir-wrapper-vzt82, resource: bindings, ignored listing per whitelist Jun 8 11:32:44.855: INFO: namespace e2e-tests-emptydir-wrapper-vzt82 deletion completed in 6.096225593s • [SLOW TEST:10.749 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:32:44.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jun 8 11:32:44.962: INFO: Waiting up to 5m0s for pod "downward-api-c599b4fe-a97b-11ea-978f-0242ac110018" in namespace "e2e-tests-downward-api-5mwh8" to be "success or failure" Jun 8 11:32:44.988: INFO: Pod "downward-api-c599b4fe-a97b-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 25.183988ms Jun 8 11:32:46.992: INFO: Pod "downward-api-c599b4fe-a97b-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029134375s Jun 8 11:32:48.995: INFO: Pod "downward-api-c599b4fe-a97b-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032790953s STEP: Saw pod success Jun 8 11:32:48.995: INFO: Pod "downward-api-c599b4fe-a97b-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 11:32:48.998: INFO: Trying to get logs from node hunter-worker2 pod downward-api-c599b4fe-a97b-11ea-978f-0242ac110018 container dapi-container: STEP: delete the pod Jun 8 11:32:49.146: INFO: Waiting for pod downward-api-c599b4fe-a97b-11ea-978f-0242ac110018 to disappear Jun 8 11:32:49.203: INFO: Pod downward-api-c599b4fe-a97b-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:32:49.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-5mwh8" for this suite. 
Jun 8 11:32:55.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:32:55.371: INFO: namespace: e2e-tests-downward-api-5mwh8, resource: bindings, ignored listing per whitelist Jun 8 11:32:55.377: INFO: namespace e2e-tests-downward-api-5mwh8 deletion completed in 6.17009104s • [SLOW TEST:10.522 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:32:55.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jun 8 11:33:02.572: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 
11:33:03.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-78zvj" for this suite. Jun 8 11:33:25.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:33:25.733: INFO: namespace: e2e-tests-replicaset-78zvj, resource: bindings, ignored listing per whitelist Jun 8 11:33:25.771: INFO: namespace e2e-tests-replicaset-78zvj deletion completed in 22.120216273s • [SLOW TEST:30.393 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:33:25.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-de0069a2-a97b-11ea-978f-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 8 11:33:25.907: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-de026e75-a97b-11ea-978f-0242ac110018" in namespace "e2e-tests-projected-nsdgd" to be "success or failure" Jun 
8 11:33:25.926: INFO: Pod "pod-projected-configmaps-de026e75-a97b-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 19.242107ms Jun 8 11:33:28.390: INFO: Pod "pod-projected-configmaps-de026e75-a97b-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.483052873s Jun 8 11:33:30.395: INFO: Pod "pod-projected-configmaps-de026e75-a97b-11ea-978f-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.487878302s Jun 8 11:33:32.399: INFO: Pod "pod-projected-configmaps-de026e75-a97b-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.492222145s STEP: Saw pod success Jun 8 11:33:32.399: INFO: Pod "pod-projected-configmaps-de026e75-a97b-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 11:33:32.402: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-de026e75-a97b-11ea-978f-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod Jun 8 11:33:32.422: INFO: Waiting for pod pod-projected-configmaps-de026e75-a97b-11ea-978f-0242ac110018 to disappear Jun 8 11:33:32.426: INFO: Pod pod-projected-configmaps-de026e75-a97b-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:33:32.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-nsdgd" for this suite. 
Jun 8 11:33:38.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:33:38.510: INFO: namespace: e2e-tests-projected-nsdgd, resource: bindings, ignored listing per whitelist Jun 8 11:33:38.526: INFO: namespace e2e-tests-projected-nsdgd deletion completed in 6.095736608s • [SLOW TEST:12.755 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:33:38.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 8 11:33:38.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jun 8 11:33:38.773: INFO: stderr: "" Jun 8 11:33:38.773: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", 
GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:33:38.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8cbp9" for this suite.
Jun 8 11:33:44.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:33:44.891: INFO: namespace: e2e-tests-kubectl-8cbp9, resource: bindings, ignored listing per whitelist
Jun 8 11:33:44.894: INFO: namespace e2e-tests-kubectl-8cbp9 deletion completed in 6.11631652s
• [SLOW TEST:6.368 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl version
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:33:44.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:33:49.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-5n4v6" for this suite.
Jun 8 11:34:35.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:34:35.166: INFO: namespace: e2e-tests-kubelet-test-5n4v6, resource: bindings, ignored listing per whitelist
Jun 8 11:34:35.188: INFO: namespace e2e-tests-kubelet-test-5n4v6 deletion completed in 46.11477346s
• [SLOW TEST:50.294 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when scheduling a busybox command in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:34:35.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-075f5510-a97c-11ea-978f-0242ac110018
STEP: Creating a pod to test consume configMaps
Jun 8 11:34:35.316: INFO: Waiting up to 5m0s for pod "pod-configmaps-076175e3-a97c-11ea-978f-0242ac110018" in namespace "e2e-tests-configmap-c522p" to be "success or failure"
Jun 8 11:34:35.355: INFO: Pod "pod-configmaps-076175e3-a97c-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 39.507129ms
Jun 8 11:34:37.397: INFO: Pod "pod-configmaps-076175e3-a97c-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081391399s
Jun 8 11:34:39.401: INFO: Pod "pod-configmaps-076175e3-a97c-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.085199153s
STEP: Saw pod success
Jun 8 11:34:39.401: INFO: Pod "pod-configmaps-076175e3-a97c-11ea-978f-0242ac110018" satisfied condition "success or failure"
Jun 8 11:34:39.404: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-076175e3-a97c-11ea-978f-0242ac110018 container configmap-volume-test:
STEP: delete the pod
Jun 8 11:34:39.423: INFO: Waiting for pod pod-configmaps-076175e3-a97c-11ea-978f-0242ac110018 to disappear
Jun 8 11:34:39.442: INFO: Pod pod-configmaps-076175e3-a97c-11ea-978f-0242ac110018 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:34:39.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-c522p" for this suite.
Jun 8 11:34:45.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:34:45.554: INFO: namespace: e2e-tests-configmap-c522p, resource: bindings, ignored listing per whitelist
Jun 8 11:34:45.558: INFO: namespace e2e-tests-configmap-c522p deletion completed in 6.095718508s
• [SLOW TEST:10.370 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:34:45.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-0d9790d6-a97c-11ea-978f-0242ac110018
STEP: Creating a pod to test consume secrets
Jun 8 11:34:45.767: INFO: Waiting up to 5m0s for pod "pod-secrets-0d9ac985-a97c-11ea-978f-0242ac110018" in namespace "e2e-tests-secrets-vkvjz" to be "success or failure"
Jun 8 11:34:45.769: INFO: Pod "pod-secrets-0d9ac985-a97c-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.580059ms
Jun 8 11:34:47.774: INFO: Pod "pod-secrets-0d9ac985-a97c-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006806191s
Jun 8 11:34:49.786: INFO: Pod "pod-secrets-0d9ac985-a97c-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019663526s
STEP: Saw pod success
Jun 8 11:34:49.786: INFO: Pod "pod-secrets-0d9ac985-a97c-11ea-978f-0242ac110018" satisfied condition "success or failure"
Jun 8 11:34:49.789: INFO: Trying to get logs from node hunter-worker pod pod-secrets-0d9ac985-a97c-11ea-978f-0242ac110018 container secret-volume-test:
STEP: delete the pod
Jun 8 11:34:49.810: INFO: Waiting for pod pod-secrets-0d9ac985-a97c-11ea-978f-0242ac110018 to disappear
Jun 8 11:34:49.815: INFO: Pod pod-secrets-0d9ac985-a97c-11ea-978f-0242ac110018 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:34:49.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-vkvjz" for this suite.
Jun 8 11:34:55.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:34:55.885: INFO: namespace: e2e-tests-secrets-vkvjz, resource: bindings, ignored listing per whitelist
Jun 8 11:34:56.088: INFO: namespace e2e-tests-secrets-vkvjz deletion completed in 6.270686959s
• [SLOW TEST:10.529 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:34:56.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-13f47426-a97c-11ea-978f-0242ac110018
STEP: Creating a pod to test consume configMaps
Jun 8 11:34:56.486: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-140016a5-a97c-11ea-978f-0242ac110018" in namespace "e2e-tests-projected-cmklw" to be "success or failure"
Jun 8 11:34:56.491: INFO: Pod "pod-projected-configmaps-140016a5-a97c-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 5.018787ms
Jun 8 11:34:58.495: INFO: Pod "pod-projected-configmaps-140016a5-a97c-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008662221s
Jun 8 11:35:00.500: INFO: Pod "pod-projected-configmaps-140016a5-a97c-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013720697s
STEP: Saw pod success
Jun 8 11:35:00.500: INFO: Pod "pod-projected-configmaps-140016a5-a97c-11ea-978f-0242ac110018" satisfied condition "success or failure"
Jun 8 11:35:00.504: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-140016a5-a97c-11ea-978f-0242ac110018 container projected-configmap-volume-test:
STEP: delete the pod
Jun 8 11:35:00.532: INFO: Waiting for pod pod-projected-configmaps-140016a5-a97c-11ea-978f-0242ac110018 to disappear
Jun 8 11:35:00.548: INFO: Pod pod-projected-configmaps-140016a5-a97c-11ea-978f-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:35:00.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cmklw" for this suite.
Jun 8 11:35:06.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:35:06.607: INFO: namespace: e2e-tests-projected-cmklw, resource: bindings, ignored listing per whitelist
Jun 8 11:35:06.643: INFO: namespace e2e-tests-projected-cmklw deletion completed in 6.092009768s
• [SLOW TEST:10.555 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:35:06.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jun 8 11:35:06.784: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1a22e46a-a97c-11ea-978f-0242ac110018" in namespace "e2e-tests-downward-api-7xzjh" to be "success or failure"
Jun 8 11:35:06.788: INFO: Pod "downwardapi-volume-1a22e46a-a97c-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.840728ms
Jun 8 11:35:08.793: INFO: Pod "downwardapi-volume-1a22e46a-a97c-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008865584s
Jun 8 11:35:10.797: INFO: Pod "downwardapi-volume-1a22e46a-a97c-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01285969s
STEP: Saw pod success
Jun 8 11:35:10.797: INFO: Pod "downwardapi-volume-1a22e46a-a97c-11ea-978f-0242ac110018" satisfied condition "success or failure"
Jun 8 11:35:10.800: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-1a22e46a-a97c-11ea-978f-0242ac110018 container client-container:
STEP: delete the pod
Jun 8 11:35:10.843: INFO: Waiting for pod downwardapi-volume-1a22e46a-a97c-11ea-978f-0242ac110018 to disappear
Jun 8 11:35:10.848: INFO: Pod downwardapi-volume-1a22e46a-a97c-11ea-978f-0242ac110018 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:35:10.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-7xzjh" for this suite.
Jun 8 11:35:16.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:35:17.009: INFO: namespace: e2e-tests-downward-api-7xzjh, resource: bindings, ignored listing per whitelist
Jun 8 11:35:17.040: INFO: namespace e2e-tests-downward-api-7xzjh deletion completed in 6.185213797s
• [SLOW TEST:10.397 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:35:17.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-205164fc-a97c-11ea-978f-0242ac110018
STEP: Creating a pod to test consume configMaps
Jun 8 11:35:17.158: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2052158a-a97c-11ea-978f-0242ac110018" in namespace "e2e-tests-projected-q6wrz" to be "success or failure"
Jun 8 11:35:17.174: INFO: Pod "pod-projected-configmaps-2052158a-a97c-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 15.124092ms
Jun 8 11:35:19.178: INFO: Pod "pod-projected-configmaps-2052158a-a97c-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019657418s
Jun 8 11:35:21.188: INFO: Pod "pod-projected-configmaps-2052158a-a97c-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029267201s
STEP: Saw pod success
Jun 8 11:35:21.188: INFO: Pod "pod-projected-configmaps-2052158a-a97c-11ea-978f-0242ac110018" satisfied condition "success or failure"
Jun 8 11:35:21.191: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-2052158a-a97c-11ea-978f-0242ac110018 container projected-configmap-volume-test:
STEP: delete the pod
Jun 8 11:35:21.339: INFO: Waiting for pod pod-projected-configmaps-2052158a-a97c-11ea-978f-0242ac110018 to disappear
Jun 8 11:35:21.468: INFO: Pod pod-projected-configmaps-2052158a-a97c-11ea-978f-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:35:21.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-q6wrz" for this suite.
Jun 8 11:35:27.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:35:27.530: INFO: namespace: e2e-tests-projected-q6wrz, resource: bindings, ignored listing per whitelist
Jun 8 11:35:27.573: INFO: namespace e2e-tests-projected-q6wrz deletion completed in 6.102216737s
• [SLOW TEST:10.533 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:35:27.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Jun 8 11:35:27.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-brl8x'
Jun 8 11:35:30.425: INFO: stderr: ""
Jun 8 11:35:30.425: INFO: stdout: "pod/pause created\n"
Jun 8 11:35:30.425: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jun 8 11:35:30.425: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-brl8x" to be "running and ready"
Jun 8 11:35:30.463: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 38.361098ms
Jun 8 11:35:32.468: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042589577s
Jun 8 11:35:34.471: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.046016898s
Jun 8 11:35:34.471: INFO: Pod "pause" satisfied condition "running and ready"
Jun 8 11:35:34.471: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Jun 8 11:35:34.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-brl8x'
Jun 8 11:35:34.576: INFO: stderr: ""
Jun 8 11:35:34.576: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jun 8 11:35:34.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-brl8x'
Jun 8 11:35:34.674: INFO: stderr: ""
Jun 8 11:35:34.674: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n"
STEP: removing the label testing-label of a pod
Jun 8 11:35:34.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-brl8x'
Jun 8 11:35:34.795: INFO: stderr: ""
Jun 8 11:35:34.795: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jun 8 11:35:34.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-brl8x'
Jun 8 11:35:34.893: INFO: stderr: ""
Jun 8 11:35:34.893: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n"
[AfterEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Jun 8 11:35:34.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-brl8x'
Jun 8 11:35:35.038: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 8 11:35:35.038: INFO: stdout: "pod \"pause\" force deleted\n"
Jun 8 11:35:35.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-brl8x'
Jun 8 11:35:35.153: INFO: stderr: "No resources found.\n"
Jun 8 11:35:35.153: INFO: stdout: ""
Jun 8 11:35:35.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-brl8x -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jun 8 11:35:35.252: INFO: stderr: ""
Jun 8 11:35:35.252: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:35:35.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-brl8x" for this suite.
Jun 8 11:35:41.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:35:41.444: INFO: namespace: e2e-tests-kubectl-brl8x, resource: bindings, ignored listing per whitelist
Jun 8 11:35:41.501: INFO: namespace e2e-tests-kubectl-brl8x deletion completed in 6.245056308s
• [SLOW TEST:13.927 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:35:41.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0608 11:35:42.666730 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 8 11:35:42.666: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:35:42.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-9k5mr" for this suite.
Jun 8 11:35:48.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:35:48.733: INFO: namespace: e2e-tests-gc-9k5mr, resource: bindings, ignored listing per whitelist
Jun 8 11:35:48.795: INFO: namespace e2e-tests-gc-9k5mr deletion completed in 6.125307672s
• [SLOW TEST:7.294 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:35:48.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jun 8 11:35:48.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:35:52.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-crq4l" for this suite.
Jun 8 11:36:45.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:36:46.003: INFO: namespace: e2e-tests-pods-crq4l, resource: bindings, ignored listing per whitelist
Jun 8 11:36:46.060: INFO: namespace e2e-tests-pods-crq4l deletion completed in 53.059201709s
• [SLOW TEST:57.265 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:36:46.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-pc2xq A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-pc2xq;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-pc2xq A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-pc2xq;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-pc2xq.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-pc2xq.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-pc2xq.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-pc2xq.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-pc2xq.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-pc2xq.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-pc2xq.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-pc2xq.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-pc2xq.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 224.90.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.90.224_udp@PTR;check="$$(dig +tcp +noall +answer +search 224.90.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.90.224_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-pc2xq A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-pc2xq;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-pc2xq A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-pc2xq;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-pc2xq.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-pc2xq.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-pc2xq.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-pc2xq.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-pc2xq.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-pc2xq.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-pc2xq.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-pc2xq.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-pc2xq.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 224.90.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.90.224_udp@PTR;check="$$(dig +tcp +noall +answer +search 224.90.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.90.224_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 8 11:36:58.414: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018)
Jun 8 11:36:58.446: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018)
Jun 8 11:36:58.454: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018)
Jun 8 11:36:58.456: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-pc2xq from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018)
Jun 8 11:36:58.458: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-pc2xq from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018)
Jun 8 11:36:58.460: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-pc2xq.svc from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018)
Jun 8 11:36:58.463: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-pc2xq.svc from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018)
Jun 8 11:36:58.464: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018)
Jun 8 11:36:58.467: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018)
Jun 8 11:36:58.483: INFO: Lookups using e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018 failed for: [wheezy_tcp@dns-test-service jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-pc2xq jessie_tcp@dns-test-service.e2e-tests-dns-pc2xq jessie_udp@dns-test-service.e2e-tests-dns-pc2xq.svc jessie_tcp@dns-test-service.e2e-tests-dns-pc2xq.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc]
Jun 8 11:37:03.525: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018)
Jun 8 11:37:03.691:
INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:03.694: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:03.696: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-pc2xq from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:03.700: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-pc2xq from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:03.704: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-pc2xq.svc from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:03.706: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-pc2xq.svc from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:03.709: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:03.712: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc from pod 
e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:03.727: INFO: Lookups using e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018 failed for: [wheezy_tcp@dns-test-service jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-pc2xq jessie_tcp@dns-test-service.e2e-tests-dns-pc2xq jessie_udp@dns-test-service.e2e-tests-dns-pc2xq.svc jessie_tcp@dns-test-service.e2e-tests-dns-pc2xq.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc] Jun 8 11:37:08.492: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:08.715: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:08.718: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:08.722: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-pc2xq from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:08.724: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-pc2xq from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) 
Jun 8 11:37:08.727: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-pc2xq.svc from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:08.729: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-pc2xq.svc from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:08.731: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:08.733: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:08.746: INFO: Lookups using e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018 failed for: [wheezy_tcp@dns-test-service jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-pc2xq jessie_tcp@dns-test-service.e2e-tests-dns-pc2xq jessie_udp@dns-test-service.e2e-tests-dns-pc2xq.svc jessie_tcp@dns-test-service.e2e-tests-dns-pc2xq.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc] Jun 8 11:37:13.987: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:14.812: INFO: Unable to read jessie_udp@dns-test-service from pod 
e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:14.815: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:14.818: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-pc2xq from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:14.821: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-pc2xq from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:14.823: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-pc2xq.svc from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:14.826: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-pc2xq.svc from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:14.829: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:14.831: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could 
not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:14.846: INFO: Lookups using e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018 failed for: [wheezy_tcp@dns-test-service jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-pc2xq jessie_tcp@dns-test-service.e2e-tests-dns-pc2xq jessie_udp@dns-test-service.e2e-tests-dns-pc2xq.svc jessie_tcp@dns-test-service.e2e-tests-dns-pc2xq.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc] Jun 8 11:37:18.491: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:18.532: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:18.534: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:18.536: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-pc2xq from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:18.539: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-pc2xq from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:18.541: INFO: Unable to read 
jessie_udp@dns-test-service.e2e-tests-dns-pc2xq.svc from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:18.543: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-pc2xq.svc from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:18.547: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:18.550: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:18.576: INFO: Lookups using e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018 failed for: [wheezy_tcp@dns-test-service jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-pc2xq jessie_tcp@dns-test-service.e2e-tests-dns-pc2xq jessie_udp@dns-test-service.e2e-tests-dns-pc2xq.svc jessie_tcp@dns-test-service.e2e-tests-dns-pc2xq.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc] Jun 8 11:37:24.149: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:24.895: INFO: Unable to read jessie_udp@dns-test-service from pod 
e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:24.898: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:24.901: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-pc2xq from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:24.904: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-pc2xq from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:24.906: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-pc2xq.svc from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:24.909: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-pc2xq.svc from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:24.912: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:24.914: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc from pod e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018: the server could 
not find the requested resource (get pods dns-test-561fe834-a97c-11ea-978f-0242ac110018) Jun 8 11:37:24.933: INFO: Lookups using e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018 failed for: [wheezy_tcp@dns-test-service jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-pc2xq jessie_tcp@dns-test-service.e2e-tests-dns-pc2xq jessie_udp@dns-test-service.e2e-tests-dns-pc2xq.svc jessie_tcp@dns-test-service.e2e-tests-dns-pc2xq.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc] Jun 8 11:37:28.788: INFO: DNS probes using e2e-tests-dns-pc2xq/dns-test-561fe834-a97c-11ea-978f-0242ac110018 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:37:30.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-pc2xq" for this suite. 
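For readability: the probe command the test injects into the "jessie" pod (quoted inline above as one long `for i in `seq 1 600`; do ...; done` string) is hard to follow. The sketch below restates its structure; it is not the exact script the e2e framework generates — the `probe` helper, the `RESULTS_DIR` parameter, and the shortened loop count are illustrative, and only a few of the many name/record combinations are shown.

```shell
#!/bin/sh
# Hedged sketch of the DNS probe loop from the e2e DNS test above.
# Assumptions: dig(1) is available in the pod image and the pod's search
# domains cover the service names; RESULTS_DIR and the 3-iteration loop
# (the real test loops 600 times) are illustrative stand-ins.
RESULTS_DIR=${RESULTS_DIR:-/results}

# probe TRANSPORT NAME TYPE OUTFILE:
# run one lookup (+notcp = UDP, +tcp = TCP) and write OK to OUTFILE
# only if the query returned a non-empty answer section.
probe() {
  check="$(dig "$1" +noall +answer +search "$2" "$3")" \
    && test -n "$check" \
    && echo OK > "$RESULTS_DIR/$4"
}

main() {
  for i in $(seq 1 3); do
    probe +notcp dns-test-service A   jessie_udp@dns-test-service
    probe +tcp   dns-test-service A   jessie_tcp@dns-test-service
    probe +notcp _http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc SRV \
          jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pc2xq.svc
    sleep 1
  done
}
```

Running `main` inside the pod would populate one marker file per name/transport pair under `$RESULTS_DIR`; the test then polls those files from outside, which is why each "Unable to read jessie_..." entry above corresponds to one missing marker file.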
Jun 8 11:37:36.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:37:36.133: INFO: namespace: e2e-tests-dns-pc2xq, resource: bindings, ignored listing per whitelist Jun 8 11:37:36.215: INFO: namespace e2e-tests-dns-pc2xq deletion completed in 6.132203945s • [SLOW TEST:50.155 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:37:36.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:37:40.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-x5rrs" 
for this suite. Jun 8 11:37:46.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:37:46.500: INFO: namespace: e2e-tests-kubelet-test-x5rrs, resource: bindings, ignored listing per whitelist Jun 8 11:37:46.552: INFO: namespace e2e-tests-kubelet-test-x5rrs deletion completed in 6.106793953s • [SLOW TEST:10.337 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:37:46.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-79mxs [It] Burst scaling should run to completion even with unhealthy pods [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-79mxs STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-79mxs Jun 8 11:37:46.672: INFO: Found 0 stateful pods, waiting for 1 Jun 8 11:37:56.677: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jun 8 11:37:56.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 8 11:37:56.945: INFO: stderr: "I0608 11:37:56.824633 1424 log.go:172] (0xc00074e370) (0xc00076c640) Create stream\nI0608 11:37:56.824703 1424 log.go:172] (0xc00074e370) (0xc00076c640) Stream added, broadcasting: 1\nI0608 11:37:56.827457 1424 log.go:172] (0xc00074e370) Reply frame received for 1\nI0608 11:37:56.827493 1424 log.go:172] (0xc00074e370) (0xc000680c80) Create stream\nI0608 11:37:56.827505 1424 log.go:172] (0xc00074e370) (0xc000680c80) Stream added, broadcasting: 3\nI0608 11:37:56.828451 1424 log.go:172] (0xc00074e370) Reply frame received for 3\nI0608 11:37:56.828475 1424 log.go:172] (0xc00074e370) (0xc00076c6e0) Create stream\nI0608 11:37:56.828481 1424 log.go:172] (0xc00074e370) (0xc00076c6e0) Stream added, broadcasting: 5\nI0608 11:37:56.829521 1424 log.go:172] (0xc00074e370) Reply frame received for 5\nI0608 11:37:56.936537 1424 log.go:172] (0xc00074e370) Data frame received for 3\nI0608 11:37:56.936645 1424 log.go:172] (0xc000680c80) (3) Data frame handling\nI0608 11:37:56.936693 1424 log.go:172] (0xc000680c80) (3) Data frame sent\nI0608 11:37:56.936799 1424 log.go:172] (0xc00074e370) Data frame received for 3\nI0608 11:37:56.936822 1424 log.go:172] (0xc000680c80) (3) Data frame handling\nI0608 11:37:56.936851 1424 
log.go:172] (0xc00074e370) Data frame received for 5\nI0608 11:37:56.936881 1424 log.go:172] (0xc00076c6e0) (5) Data frame handling\nI0608 11:37:56.939043 1424 log.go:172] (0xc00074e370) Data frame received for 1\nI0608 11:37:56.939071 1424 log.go:172] (0xc00076c640) (1) Data frame handling\nI0608 11:37:56.939088 1424 log.go:172] (0xc00076c640) (1) Data frame sent\nI0608 11:37:56.939348 1424 log.go:172] (0xc00074e370) (0xc00076c640) Stream removed, broadcasting: 1\nI0608 11:37:56.939484 1424 log.go:172] (0xc00074e370) Go away received\nI0608 11:37:56.939722 1424 log.go:172] (0xc00074e370) (0xc00076c640) Stream removed, broadcasting: 1\nI0608 11:37:56.939755 1424 log.go:172] (0xc00074e370) (0xc000680c80) Stream removed, broadcasting: 3\nI0608 11:37:56.939777 1424 log.go:172] (0xc00074e370) (0xc00076c6e0) Stream removed, broadcasting: 5\n" Jun 8 11:37:56.945: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 8 11:37:56.945: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 8 11:37:56.949: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 8 11:38:06.955: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 8 11:38:06.955: INFO: Waiting for statefulset status.replicas updated to 0 Jun 8 11:38:06.970: INFO: POD NODE PHASE GRACE CONDITIONS Jun 8 11:38:06.970: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:37:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:37:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:37:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:37:46 +0000 UTC }] Jun 8 11:38:06.970: INFO: Jun 8 11:38:06.970: INFO: 
StatefulSet ss has not reached scale 3, at 1 Jun 8 11:38:07.993: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996104925s Jun 8 11:38:09.232: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.972372117s Jun 8 11:38:10.238: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.733399738s Jun 8 11:38:11.243: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.727731851s Jun 8 11:38:12.262: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.723039173s Jun 8 11:38:13.266: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.703427229s Jun 8 11:38:14.341: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.699431808s Jun 8 11:38:15.347: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.624906925s Jun 8 11:38:16.441: INFO: Verifying statefulset ss doesn't scale past 3 for another 618.91964ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-79mxs Jun 8 11:38:17.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:38:17.656: INFO: stderr: "I0608 11:38:17.585923 1447 log.go:172] (0xc00013a840) (0xc00066f4a0) Create stream\nI0608 11:38:17.585973 1447 log.go:172] (0xc00013a840) (0xc00066f4a0) Stream added, broadcasting: 1\nI0608 11:38:17.588573 1447 log.go:172] (0xc00013a840) Reply frame received for 1\nI0608 11:38:17.588642 1447 log.go:172] (0xc00013a840) (0xc000344000) Create stream\nI0608 11:38:17.588663 1447 log.go:172] (0xc00013a840) (0xc000344000) Stream added, broadcasting: 3\nI0608 11:38:17.589759 1447 log.go:172] (0xc00013a840) Reply frame received for 3\nI0608 11:38:17.589788 1447 log.go:172] (0xc00013a840) (0xc0003a4000) Create stream\nI0608 11:38:17.589799 1447 log.go:172] (0xc00013a840) (0xc0003a4000) Stream added, 
broadcasting: 5\nI0608 11:38:17.590544 1447 log.go:172] (0xc00013a840) Reply frame received for 5\nI0608 11:38:17.650564 1447 log.go:172] (0xc00013a840) Data frame received for 5\nI0608 11:38:17.650603 1447 log.go:172] (0xc0003a4000) (5) Data frame handling\nI0608 11:38:17.650626 1447 log.go:172] (0xc00013a840) Data frame received for 3\nI0608 11:38:17.650634 1447 log.go:172] (0xc000344000) (3) Data frame handling\nI0608 11:38:17.650644 1447 log.go:172] (0xc000344000) (3) Data frame sent\nI0608 11:38:17.650653 1447 log.go:172] (0xc00013a840) Data frame received for 3\nI0608 11:38:17.650660 1447 log.go:172] (0xc000344000) (3) Data frame handling\nI0608 11:38:17.651549 1447 log.go:172] (0xc00013a840) Data frame received for 1\nI0608 11:38:17.651575 1447 log.go:172] (0xc00066f4a0) (1) Data frame handling\nI0608 11:38:17.651600 1447 log.go:172] (0xc00066f4a0) (1) Data frame sent\nI0608 11:38:17.651627 1447 log.go:172] (0xc00013a840) (0xc00066f4a0) Stream removed, broadcasting: 1\nI0608 11:38:17.651658 1447 log.go:172] (0xc00013a840) Go away received\nI0608 11:38:17.651824 1447 log.go:172] (0xc00013a840) (0xc00066f4a0) Stream removed, broadcasting: 1\nI0608 11:38:17.651844 1447 log.go:172] (0xc00013a840) (0xc000344000) Stream removed, broadcasting: 3\nI0608 11:38:17.651857 1447 log.go:172] (0xc00013a840) (0xc0003a4000) Stream removed, broadcasting: 5\n" Jun 8 11:38:17.656: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 8 11:38:17.656: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 8 11:38:17.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:38:17.838: INFO: stderr: "I0608 11:38:17.770791 1469 log.go:172] (0xc000714210) (0xc00073c5a0) Create stream\nI0608 11:38:17.770837 1469 log.go:172] (0xc000714210) 
(0xc00073c5a0) Stream added, broadcasting: 1\nI0608 11:38:17.772738 1469 log.go:172] (0xc000714210) Reply frame received for 1\nI0608 11:38:17.772764 1469 log.go:172] (0xc000714210) (0xc00048abe0) Create stream\nI0608 11:38:17.772772 1469 log.go:172] (0xc000714210) (0xc00048abe0) Stream added, broadcasting: 3\nI0608 11:38:17.773606 1469 log.go:172] (0xc000714210) Reply frame received for 3\nI0608 11:38:17.773646 1469 log.go:172] (0xc000714210) (0xc000410000) Create stream\nI0608 11:38:17.773662 1469 log.go:172] (0xc000714210) (0xc000410000) Stream added, broadcasting: 5\nI0608 11:38:17.774247 1469 log.go:172] (0xc000714210) Reply frame received for 5\nI0608 11:38:17.832163 1469 log.go:172] (0xc000714210) Data frame received for 5\nI0608 11:38:17.832204 1469 log.go:172] (0xc000410000) (5) Data frame handling\nI0608 11:38:17.832212 1469 log.go:172] (0xc000410000) (5) Data frame sent\nI0608 11:38:17.832218 1469 log.go:172] (0xc000714210) Data frame received for 5\nI0608 11:38:17.832227 1469 log.go:172] (0xc000410000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0608 11:38:17.832250 1469 log.go:172] (0xc000714210) Data frame received for 3\nI0608 11:38:17.832259 1469 log.go:172] (0xc00048abe0) (3) Data frame handling\nI0608 11:38:17.832266 1469 log.go:172] (0xc00048abe0) (3) Data frame sent\nI0608 11:38:17.832272 1469 log.go:172] (0xc000714210) Data frame received for 3\nI0608 11:38:17.832284 1469 log.go:172] (0xc00048abe0) (3) Data frame handling\nI0608 11:38:17.833700 1469 log.go:172] (0xc000714210) Data frame received for 1\nI0608 11:38:17.833725 1469 log.go:172] (0xc00073c5a0) (1) Data frame handling\nI0608 11:38:17.833740 1469 log.go:172] (0xc00073c5a0) (1) Data frame sent\nI0608 11:38:17.833748 1469 log.go:172] (0xc000714210) (0xc00073c5a0) Stream removed, broadcasting: 1\nI0608 11:38:17.833759 1469 log.go:172] (0xc000714210) Go away received\nI0608 11:38:17.833996 1469 log.go:172] (0xc000714210) (0xc00073c5a0) Stream 
removed, broadcasting: 1\nI0608 11:38:17.834016 1469 log.go:172] (0xc000714210) (0xc00048abe0) Stream removed, broadcasting: 3\nI0608 11:38:17.834025 1469 log.go:172] (0xc000714210) (0xc000410000) Stream removed, broadcasting: 5\n" Jun 8 11:38:17.838: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 8 11:38:17.838: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 8 11:38:17.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:38:18.014: INFO: stderr: "I0608 11:38:17.952311 1491 log.go:172] (0xc0007f82c0) (0xc0006c2640) Create stream\nI0608 11:38:17.952378 1491 log.go:172] (0xc0007f82c0) (0xc0006c2640) Stream added, broadcasting: 1\nI0608 11:38:17.954803 1491 log.go:172] (0xc0007f82c0) Reply frame received for 1\nI0608 11:38:17.954843 1491 log.go:172] (0xc0007f82c0) (0xc00065ac80) Create stream\nI0608 11:38:17.954854 1491 log.go:172] (0xc0007f82c0) (0xc00065ac80) Stream added, broadcasting: 3\nI0608 11:38:17.955847 1491 log.go:172] (0xc0007f82c0) Reply frame received for 3\nI0608 11:38:17.955886 1491 log.go:172] (0xc0007f82c0) (0xc0006c26e0) Create stream\nI0608 11:38:17.955908 1491 log.go:172] (0xc0007f82c0) (0xc0006c26e0) Stream added, broadcasting: 5\nI0608 11:38:17.956785 1491 log.go:172] (0xc0007f82c0) Reply frame received for 5\nI0608 11:38:18.009788 1491 log.go:172] (0xc0007f82c0) Data frame received for 5\nI0608 11:38:18.009819 1491 log.go:172] (0xc0006c26e0) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0608 11:38:18.009860 1491 log.go:172] (0xc0007f82c0) Data frame received for 3\nI0608 11:38:18.009908 1491 log.go:172] (0xc00065ac80) (3) Data frame handling\nI0608 11:38:18.009942 1491 log.go:172] (0xc00065ac80) (3) Data frame sent\nI0608 
11:38:18.009963 1491 log.go:172] (0xc0007f82c0) Data frame received for 3\nI0608 11:38:18.009987 1491 log.go:172] (0xc0006c26e0) (5) Data frame sent\nI0608 11:38:18.010012 1491 log.go:172] (0xc0007f82c0) Data frame received for 5\nI0608 11:38:18.010019 1491 log.go:172] (0xc0006c26e0) (5) Data frame handling\nI0608 11:38:18.010036 1491 log.go:172] (0xc00065ac80) (3) Data frame handling\nI0608 11:38:18.011370 1491 log.go:172] (0xc0007f82c0) Data frame received for 1\nI0608 11:38:18.011386 1491 log.go:172] (0xc0006c2640) (1) Data frame handling\nI0608 11:38:18.011393 1491 log.go:172] (0xc0006c2640) (1) Data frame sent\nI0608 11:38:18.011401 1491 log.go:172] (0xc0007f82c0) (0xc0006c2640) Stream removed, broadcasting: 1\nI0608 11:38:18.011419 1491 log.go:172] (0xc0007f82c0) Go away received\nI0608 11:38:18.011678 1491 log.go:172] (0xc0007f82c0) (0xc0006c2640) Stream removed, broadcasting: 1\nI0608 11:38:18.011702 1491 log.go:172] (0xc0007f82c0) (0xc00065ac80) Stream removed, broadcasting: 3\nI0608 11:38:18.011715 1491 log.go:172] (0xc0007f82c0) (0xc0006c26e0) Stream removed, broadcasting: 5\n" Jun 8 11:38:18.014: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 8 11:38:18.014: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 8 11:38:18.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Jun 8 11:38:28.137: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 8 11:38:28.137: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 8 11:38:28.138: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jun 8 11:38:28.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- 
/bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 8 11:38:28.404: INFO: stderr: "I0608 11:38:28.326791 1513 log.go:172] (0xc00015e840) (0xc000752640) Create stream\nI0608 11:38:28.326844 1513 log.go:172] (0xc00015e840) (0xc000752640) Stream added, broadcasting: 1\nI0608 11:38:28.328634 1513 log.go:172] (0xc00015e840) Reply frame received for 1\nI0608 11:38:28.328664 1513 log.go:172] (0xc00015e840) (0xc000606e60) Create stream\nI0608 11:38:28.328673 1513 log.go:172] (0xc00015e840) (0xc000606e60) Stream added, broadcasting: 3\nI0608 11:38:28.329521 1513 log.go:172] (0xc00015e840) Reply frame received for 3\nI0608 11:38:28.329546 1513 log.go:172] (0xc00015e840) (0xc0005c8000) Create stream\nI0608 11:38:28.329555 1513 log.go:172] (0xc00015e840) (0xc0005c8000) Stream added, broadcasting: 5\nI0608 11:38:28.330421 1513 log.go:172] (0xc00015e840) Reply frame received for 5\nI0608 11:38:28.398170 1513 log.go:172] (0xc00015e840) Data frame received for 5\nI0608 11:38:28.398204 1513 log.go:172] (0xc0005c8000) (5) Data frame handling\nI0608 11:38:28.398229 1513 log.go:172] (0xc00015e840) Data frame received for 3\nI0608 11:38:28.398234 1513 log.go:172] (0xc000606e60) (3) Data frame handling\nI0608 11:38:28.398240 1513 log.go:172] (0xc000606e60) (3) Data frame sent\nI0608 11:38:28.398248 1513 log.go:172] (0xc00015e840) Data frame received for 3\nI0608 11:38:28.398251 1513 log.go:172] (0xc000606e60) (3) Data frame handling\nI0608 11:38:28.398875 1513 log.go:172] (0xc00015e840) Data frame received for 1\nI0608 11:38:28.398896 1513 log.go:172] (0xc000752640) (1) Data frame handling\nI0608 11:38:28.398909 1513 log.go:172] (0xc000752640) (1) Data frame sent\nI0608 11:38:28.398922 1513 log.go:172] (0xc00015e840) (0xc000752640) Stream removed, broadcasting: 1\nI0608 11:38:28.398999 1513 log.go:172] (0xc00015e840) Go away received\nI0608 11:38:28.399071 1513 log.go:172] (0xc00015e840) (0xc000752640) Stream removed, broadcasting: 1\nI0608 11:38:28.399087 1513 
log.go:172] (0xc00015e840) (0xc000606e60) Stream removed, broadcasting: 3\nI0608 11:38:28.399102 1513 log.go:172] (0xc00015e840) (0xc0005c8000) Stream removed, broadcasting: 5\n" Jun 8 11:38:28.404: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 8 11:38:28.404: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 8 11:38:28.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 8 11:38:28.652: INFO: stderr: "I0608 11:38:28.532908 1535 log.go:172] (0xc0008422c0) (0xc0005d7400) Create stream\nI0608 11:38:28.532996 1535 log.go:172] (0xc0008422c0) (0xc0005d7400) Stream added, broadcasting: 1\nI0608 11:38:28.535695 1535 log.go:172] (0xc0008422c0) Reply frame received for 1\nI0608 11:38:28.535758 1535 log.go:172] (0xc0008422c0) (0xc000724000) Create stream\nI0608 11:38:28.535777 1535 log.go:172] (0xc0008422c0) (0xc000724000) Stream added, broadcasting: 3\nI0608 11:38:28.536824 1535 log.go:172] (0xc0008422c0) Reply frame received for 3\nI0608 11:38:28.536875 1535 log.go:172] (0xc0008422c0) (0xc0001f2000) Create stream\nI0608 11:38:28.536891 1535 log.go:172] (0xc0008422c0) (0xc0001f2000) Stream added, broadcasting: 5\nI0608 11:38:28.538287 1535 log.go:172] (0xc0008422c0) Reply frame received for 5\nI0608 11:38:28.642524 1535 log.go:172] (0xc0008422c0) Data frame received for 5\nI0608 11:38:28.642559 1535 log.go:172] (0xc0001f2000) (5) Data frame handling\nI0608 11:38:28.642599 1535 log.go:172] (0xc0008422c0) Data frame received for 3\nI0608 11:38:28.642638 1535 log.go:172] (0xc000724000) (3) Data frame handling\nI0608 11:38:28.642683 1535 log.go:172] (0xc000724000) (3) Data frame sent\nI0608 11:38:28.642718 1535 log.go:172] (0xc0008422c0) Data frame received for 3\nI0608 11:38:28.642737 1535 log.go:172] (0xc000724000) 
(3) Data frame handling\nI0608 11:38:28.644992 1535 log.go:172] (0xc0008422c0) Data frame received for 1\nI0608 11:38:28.645021 1535 log.go:172] (0xc0005d7400) (1) Data frame handling\nI0608 11:38:28.645037 1535 log.go:172] (0xc0005d7400) (1) Data frame sent\nI0608 11:38:28.645049 1535 log.go:172] (0xc0008422c0) (0xc0005d7400) Stream removed, broadcasting: 1\nI0608 11:38:28.645061 1535 log.go:172] (0xc0008422c0) Go away received\nI0608 11:38:28.645710 1535 log.go:172] (0xc0008422c0) (0xc0005d7400) Stream removed, broadcasting: 1\nI0608 11:38:28.645753 1535 log.go:172] (0xc0008422c0) (0xc000724000) Stream removed, broadcasting: 3\nI0608 11:38:28.645768 1535 log.go:172] (0xc0008422c0) (0xc0001f2000) Stream removed, broadcasting: 5\n" Jun 8 11:38:28.652: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 8 11:38:28.652: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 8 11:38:28.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 8 11:38:28.913: INFO: stderr: "I0608 11:38:28.780200 1557 log.go:172] (0xc00015c840) (0xc00079c640) Create stream\nI0608 11:38:28.780295 1557 log.go:172] (0xc00015c840) (0xc00079c640) Stream added, broadcasting: 1\nI0608 11:38:28.783139 1557 log.go:172] (0xc00015c840) Reply frame received for 1\nI0608 11:38:28.783213 1557 log.go:172] (0xc00015c840) (0xc00069ad20) Create stream\nI0608 11:38:28.783249 1557 log.go:172] (0xc00015c840) (0xc00069ad20) Stream added, broadcasting: 3\nI0608 11:38:28.784149 1557 log.go:172] (0xc00015c840) Reply frame received for 3\nI0608 11:38:28.784176 1557 log.go:172] (0xc00015c840) (0xc00079c6e0) Create stream\nI0608 11:38:28.784184 1557 log.go:172] (0xc00015c840) (0xc00079c6e0) Stream added, broadcasting: 5\nI0608 11:38:28.785299 1557 log.go:172] 
(0xc00015c840) Reply frame received for 5\nI0608 11:38:28.904116 1557 log.go:172] (0xc00015c840) Data frame received for 3\nI0608 11:38:28.904154 1557 log.go:172] (0xc00069ad20) (3) Data frame handling\nI0608 11:38:28.904171 1557 log.go:172] (0xc00069ad20) (3) Data frame sent\nI0608 11:38:28.904583 1557 log.go:172] (0xc00015c840) Data frame received for 3\nI0608 11:38:28.904714 1557 log.go:172] (0xc00015c840) Data frame received for 5\nI0608 11:38:28.904767 1557 log.go:172] (0xc00079c6e0) (5) Data frame handling\nI0608 11:38:28.904812 1557 log.go:172] (0xc00069ad20) (3) Data frame handling\nI0608 11:38:28.906507 1557 log.go:172] (0xc00015c840) Data frame received for 1\nI0608 11:38:28.906534 1557 log.go:172] (0xc00079c640) (1) Data frame handling\nI0608 11:38:28.906551 1557 log.go:172] (0xc00079c640) (1) Data frame sent\nI0608 11:38:28.906576 1557 log.go:172] (0xc00015c840) (0xc00079c640) Stream removed, broadcasting: 1\nI0608 11:38:28.906610 1557 log.go:172] (0xc00015c840) Go away received\nI0608 11:38:28.906951 1557 log.go:172] (0xc00015c840) (0xc00079c640) Stream removed, broadcasting: 1\nI0608 11:38:28.906980 1557 log.go:172] (0xc00015c840) (0xc00069ad20) Stream removed, broadcasting: 3\nI0608 11:38:28.906995 1557 log.go:172] (0xc00015c840) (0xc00079c6e0) Stream removed, broadcasting: 5\n" Jun 8 11:38:28.913: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 8 11:38:28.913: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 8 11:38:28.913: INFO: Waiting for statefulset status.replicas updated to 0 Jun 8 11:38:28.916: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Jun 8 11:38:38.984: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 8 11:38:38.984: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 8 11:38:38.984: INFO: Waiting for 
pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 8 11:38:39.000: INFO: POD NODE PHASE GRACE CONDITIONS Jun 8 11:38:39.000: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:37:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:37:46 +0000 UTC }] Jun 8 11:38:39.000: INFO: ss-1 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:06 +0000 UTC }] Jun 8 11:38:39.000: INFO: ss-2 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:07 +0000 UTC }] Jun 8 11:38:39.000: INFO: Jun 8 11:38:39.000: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 8 11:38:40.005: INFO: POD NODE PHASE GRACE CONDITIONS Jun 8 11:38:40.005: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:37:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady 
containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:37:46 +0000 UTC }] Jun 8 11:38:40.005: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:06 +0000 UTC }] Jun 8 11:38:40.005: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:07 +0000 UTC }] Jun 8 11:38:40.005: INFO: Jun 8 11:38:40.005: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 8 11:38:41.010: INFO: POD NODE PHASE GRACE CONDITIONS Jun 8 11:38:41.010: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:37:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:37:46 +0000 UTC }] Jun 8 11:38:41.010: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 
00:00:00 +0000 UTC 2020-06-08 11:38:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:06 +0000 UTC }] Jun 8 11:38:41.010: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:07 +0000 UTC }] Jun 8 11:38:41.010: INFO: Jun 8 11:38:41.010: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 8 11:38:42.015: INFO: POD NODE PHASE GRACE CONDITIONS Jun 8 11:38:42.015: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:37:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:37:46 +0000 UTC }] Jun 8 11:38:42.015: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:06 +0000 UTC }] Jun 8 11:38:42.015: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:07 +0000 UTC }] Jun 8 11:38:42.015: INFO: Jun 8 11:38:42.015: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 8 11:38:43.019: INFO: POD NODE PHASE GRACE CONDITIONS Jun 8 11:38:43.019: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:37:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:37:46 +0000 UTC }] Jun 8 11:38:43.020: INFO: Jun 8 11:38:43.020: INFO: StatefulSet ss has not reached scale 0, at 1 Jun 8 11:38:44.024: INFO: POD NODE PHASE GRACE CONDITIONS Jun 8 11:38:44.024: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:37:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:37:46 +0000 UTC }] Jun 8 11:38:44.024: INFO: Jun 8 11:38:44.024: INFO: StatefulSet ss has not reached scale 0, at 1 Jun 8 11:38:45.028: INFO: POD NODE PHASE 
GRACE CONDITIONS Jun 8 11:38:45.028: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:37:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:37:46 +0000 UTC }] Jun 8 11:38:45.028: INFO: Jun 8 11:38:45.028: INFO: StatefulSet ss has not reached scale 0, at 1 Jun 8 11:38:46.033: INFO: POD NODE PHASE GRACE CONDITIONS Jun 8 11:38:46.033: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:37:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:37:46 +0000 UTC }] Jun 8 11:38:46.033: INFO: Jun 8 11:38:46.033: INFO: StatefulSet ss has not reached scale 0, at 1 Jun 8 11:38:47.038: INFO: POD NODE PHASE GRACE CONDITIONS Jun 8 11:38:47.038: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:37:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:37:46 +0000 UTC }] Jun 8 11:38:47.038: INFO: Jun 8 11:38:47.038: INFO: StatefulSet ss has not reached scale 0, at 1 Jun 8 11:38:48.042: INFO: POD NODE PHASE GRACE CONDITIONS Jun 8 11:38:48.042: INFO: 
ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:37:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:38:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:37:46 +0000 UTC }] Jun 8 11:38:48.042: INFO: Jun 8 11:38:48.042: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-79mxs Jun 8 11:38:49.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:38:49.176: INFO: rc: 1 Jun 8 11:38:49.176: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc001b62330 exit status 1 true [0xc001962cf8 0xc001962d10 0xc001962d28] [0xc001962cf8 0xc001962d10 0xc001962d28] [0xc001962d08 0xc001962d20] [0x935700 0x935700] 0xc001ec2c00 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jun 8 11:38:59.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:38:59.264: INFO: rc: 1 Jun 8 11:38:59.264: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 
-- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000d26bd0 exit status 1 true [0xc0016f86c0 0xc0016f86d8 0xc0016f86f0] [0xc0016f86c0 0xc0016f86d8 0xc0016f86f0] [0xc0016f86d0 0xc0016f86e8] [0x935700 0x935700] 0xc001d83b60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 8 11:39:09.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:39:09.359: INFO: rc: 1 Jun 8 11:39:09.359: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018d2120 exit status 1 true [0xc00000e010 0xc000438d60 0xc000438da0] [0xc00000e010 0xc000438d60 0xc000438da0] [0xc000438ca8 0xc000438d78] [0x935700 0x935700] 0xc0022b3680 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 8 11:39:19.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:39:19.452: INFO: rc: 1 Jun 8 11:39:19.452: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018d22d0 exit status 1 true [0xc000438e70 0xc000438f50 0xc000439080] [0xc000438e70 0xc000438f50 0xc000439080] [0xc000438f40 0xc000439070] [0x935700 0x935700] 0xc0022b3980 }: Command stdout: 
stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 8 11:39:29.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:39:29.545: INFO: rc: 1 Jun 8 11:39:29.545: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0023f0120 exit status 1 true [0xc0000e80f0 0xc0000e8298 0xc00070c040] [0xc0000e80f0 0xc0000e8298 0xc00070c040] [0xc0000e8278 0xc00070c020] [0x935700 0x935700] 0xc001e642a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 8 11:39:39.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:39:39.643: INFO: rc: 1 Jun 8 11:39:39.643: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0023f0270 exit status 1 true [0xc00070c058 0xc00070c078 0xc00070c0c0] [0xc00070c058 0xc00070c078 0xc00070c0c0] [0xc00070c068 0xc00070c0b0] [0x935700 0x935700] 0xc001e64600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 8 11:39:49.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:39:49.734: INFO: rc: 1 Jun 
8 11:39:49.734: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002334120 exit status 1 true [0xc0018dc010 0xc0018dc028 0xc0018dc060] [0xc0018dc010 0xc0018dc028 0xc0018dc060] [0xc0018dc020 0xc0018dc058] [0x935700 0x935700] 0xc001c782a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 8 11:39:59.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:39:59.834: INFO: rc: 1 Jun 8 11:39:59.834: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002334240 exit status 1 true [0xc0018dc068 0xc0018dc0a8 0xc0018dc0c0] [0xc0018dc068 0xc0018dc0a8 0xc0018dc0c0] [0xc0018dc090 0xc0018dc0b8] [0x935700 0x935700] 0xc001c78600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 8 11:40:09.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:40:09.931: INFO: rc: 1 Jun 8 11:40:09.931: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 
0xc0018d2450 exit status 1 true [0xc000439088 0xc000439160 0xc0004392a0] [0xc000439088 0xc000439160 0xc0004392a0] [0xc0004390c0 0xc000439278] [0x935700 0x935700] 0xc0022b3c20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 8 11:40:19.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:40:20.031: INFO: rc: 1 Jun 8 11:40:20.031: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018d25a0 exit status 1 true [0xc000439318 0xc000439400 0xc000439468] [0xc000439318 0xc000439400 0xc000439468] [0xc000439370 0xc000439430] [0x935700 0x935700] 0xc0022b3f80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 8 11:40:30.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:40:30.128: INFO: rc: 1 Jun 8 11:40:30.128: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002334360 exit status 1 true [0xc0018dc0d0 0xc0018dc128 0xc0018dc168] [0xc0018dc0d0 0xc0018dc128 0xc0018dc168] [0xc0018dc108 0xc0018dc150] [0x935700 0x935700] 0xc001c789c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 8 11:40:40.129: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:40:40.223: INFO: rc: 1 Jun 8 11:40:40.223: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018d26c0 exit status 1 true [0xc0004394d8 0xc0004395d0 0xc000439650] [0xc0004394d8 0xc0004395d0 0xc000439650] [0xc000439558 0xc000439618] [0x935700 0x935700] 0xc001c8e300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 8 11:40:50.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:40:50.312: INFO: rc: 1 Jun 8 11:40:50.312: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018d27e0 exit status 1 true [0xc000439658 0xc000439790 0xc000439960] [0xc000439658 0xc000439790 0xc000439960] [0xc000439700 0xc000439918] [0x935700 0x935700] 0xc001c8e5a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 8 11:41:00.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:41:00.403: INFO: rc: 1 Jun 8 11:41:00.403: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018d2960 exit status 1 true [0xc0004399e8 0xc000439a18 0xc000439a40] [0xc0004399e8 0xc000439a18 0xc000439a40] [0xc000439a10 0xc000439a38] [0x935700 0x935700] 0xc001c8e840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 8 11:41:10.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:41:10.497: INFO: rc: 1 Jun 8 11:41:10.497: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002604120 exit status 1 true [0xc0000e80f0 0xc0000e8298 0xc000438ba8] [0xc0000e80f0 0xc0000e8298 0xc000438ba8] [0xc0000e8278 0xc00000e010] [0x935700 0x935700] 0xc0022b3680 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 8 11:41:20.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:41:20.577: INFO: rc: 1 Jun 8 11:41:20.578: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018d2180 exit status 1 true [0xc0018dc010 0xc0018dc028 0xc0018dc060] [0xc0018dc010 0xc0018dc028 0xc0018dc060] 
[0xc0018dc020 0xc0018dc058] [0x935700 0x935700] 0xc001c8e2a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 8 11:41:30.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:41:30.666: INFO: rc: 1 Jun 8 11:41:30.667: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018d2330 exit status 1 true [0xc0018dc068 0xc0018dc0a8 0xc0018dc0c0] [0xc0018dc068 0xc0018dc0a8 0xc0018dc0c0] [0xc0018dc090 0xc0018dc0b8] [0x935700 0x935700] 0xc001c8e540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 8 11:41:40.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:41:40.755: INFO: rc: 1 Jun 8 11:41:40.755: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018d2480 exit status 1 true [0xc0018dc0d0 0xc0018dc128 0xc0018dc168] [0xc0018dc0d0 0xc0018dc128 0xc0018dc168] [0xc0018dc108 0xc0018dc150] [0x935700 0x935700] 0xc001c8ea80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 8 11:41:50.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:41:50.861: INFO: rc: 1 Jun 8 11:41:50.862: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018d2600 exit status 1 true [0xc0018dc180 0xc0018dc1c0 0xc0018dc1d8] [0xc0018dc180 0xc0018dc1c0 0xc0018dc1d8] [0xc0018dc1b8 0xc0018dc1d0] [0x935700 0x935700] 0xc001c8ed20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 8 11:42:00.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:42:00.960: INFO: rc: 1 Jun 8 11:42:00.960: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018d2780 exit status 1 true [0xc0018dc1e0 0xc0018dc230 0xc0018dc280] [0xc0018dc1e0 0xc0018dc230 0xc0018dc280] [0xc0018dc210 0xc0018dc268] [0x935700 0x935700] 0xc001c8f020 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 8 11:42:10.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:42:11.051: INFO: rc: 1 Jun 8 11:42:11.051: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0023f0180 exit status 1 true [0xc00070c020 0xc00070c060 0xc00070c098] [0xc00070c020 0xc00070c060 0xc00070c098] [0xc00070c058 0xc00070c078] [0x935700 0x935700] 0xc001c782a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 8 11:42:21.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:42:21.136: INFO: rc: 1 Jun 8 11:42:21.136: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018d2a20 exit status 1 true [0xc0018dc2a0 0xc0018dc2f0 0xc0018dc340] [0xc0018dc2a0 0xc0018dc2f0 0xc0018dc340] [0xc0018dc2d0 0xc0018dc320] [0x935700 0x935700] 0xc001c8f920 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 8 11:42:31.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:42:31.230: INFO: rc: 1 Jun 8 11:42:31.230: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0023f0300 exit status 1 true [0xc00070c0b0 0xc00070c0d8 0xc00070c0f0] [0xc00070c0b0 0xc00070c0d8 0xc00070c0f0] [0xc00070c0c8 0xc00070c0e8] [0x935700 0x935700] 0xc001c78600 }: Command stdout: stderr: Error from server (NotFound): 
pods "ss-0" not found error: exit status 1 Jun 8 11:42:41.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:42:41.330: INFO: rc: 1 Jun 8 11:42:41.331: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002334180 exit status 1 true [0xc0005820a8 0xc0005821f8 0xc000582340] [0xc0005820a8 0xc0005821f8 0xc000582340] [0xc0005821c8 0xc0005822e8] [0x935700 0x935700] 0xc001e642a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 8 11:42:51.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:42:51.440: INFO: rc: 1 Jun 8 11:42:51.440: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0023f0570 exit status 1 true [0xc00070c0f8 0xc00070c118 0xc00070c168] [0xc00070c0f8 0xc00070c118 0xc00070c168] [0xc00070c108 0xc00070c150] [0x935700 0x935700] 0xc001c789c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 8 11:43:01.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:43:01.538: INFO: rc: 1 Jun 8 11:43:01.538: INFO: Waiting 10s to 
retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002604330 exit status 1 true [0xc000438ca8 0xc000438d78 0xc000438e78] [0xc000438ca8 0xc000438d78 0xc000438e78] [0xc000438d70 0xc000438e70] [0x935700 0x935700] 0xc0022b3980 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 8 11:43:11.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:43:11.692: INFO: rc: 1 Jun 8 11:43:11.692: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002334120 exit status 1 true [0xc0000e80f0 0xc0000e8298 0xc00070c040] [0xc0000e80f0 0xc0000e8298 0xc00070c040] [0xc0000e8278 0xc00070c020] [0x935700 0x935700] 0xc001c782a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 8 11:43:21.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:43:21.785: INFO: rc: 1 Jun 8 11:43:21.786: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002334270 exit status 1 true 
[0xc00070c058 0xc00070c078 0xc00070c0c0] [0xc00070c058 0xc00070c078 0xc00070c0c0] [0xc00070c068 0xc00070c0b0] [0x935700 0x935700] 0xc001c78600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 8 11:43:31.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:43:31.884: INFO: rc: 1 Jun 8 11:43:31.884: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002334390 exit status 1 true [0xc00070c0c8 0xc00070c0e8 0xc00070c100] [0xc00070c0c8 0xc00070c0e8 0xc00070c100] [0xc00070c0e0 0xc00070c0f8] [0x935700 0x935700] 0xc001c789c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 8 11:43:41.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:43:41.978: INFO: rc: 1 Jun 8 11:43:41.978: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002604150 exit status 1 true [0xc0005820a8 0xc0005821f8 0xc000582340] [0xc0005820a8 0xc0005821f8 0xc000582340] [0xc0005821c8 0xc0005822e8] [0x935700 0x935700] 0xc001e642a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 8 11:43:51.978: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-79mxs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 11:43:52.080: INFO: rc: 1 Jun 8 11:43:52.081: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Jun 8 11:43:52.081: INFO: Scaling statefulset ss to 0 Jun 8 11:43:52.089: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jun 8 11:43:52.092: INFO: Deleting all statefulset in ns e2e-tests-statefulset-79mxs Jun 8 11:43:52.094: INFO: Scaling statefulset ss to 0 Jun 8 11:43:52.103: INFO: Waiting for statefulset status.replicas updated to 0 Jun 8 11:43:52.105: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:43:52.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-79mxs" for this suite. 
Jun 8 11:43:58.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:43:58.205: INFO: namespace: e2e-tests-statefulset-79mxs, resource: bindings, ignored listing per whitelist Jun 8 11:43:58.253: INFO: namespace e2e-tests-statefulset-79mxs deletion completed in 6.093481911s • [SLOW TEST:371.701 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:43:58.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-56f85a6e-a97d-11ea-978f-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 8 11:43:58.354: INFO: Waiting up to 5m0s for pod "pod-configmaps-56fa33b5-a97d-11ea-978f-0242ac110018" in namespace "e2e-tests-configmap-4j9xt" to be "success or failure" Jun 8 
11:43:58.358: INFO: Pod "pod-configmaps-56fa33b5-a97d-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120169ms Jun 8 11:44:00.471: INFO: Pod "pod-configmaps-56fa33b5-a97d-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117308115s Jun 8 11:44:02.476: INFO: Pod "pod-configmaps-56fa33b5-a97d-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.121693496s STEP: Saw pod success Jun 8 11:44:02.476: INFO: Pod "pod-configmaps-56fa33b5-a97d-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 11:44:02.478: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-56fa33b5-a97d-11ea-978f-0242ac110018 container configmap-volume-test: STEP: delete the pod Jun 8 11:44:02.515: INFO: Waiting for pod pod-configmaps-56fa33b5-a97d-11ea-978f-0242ac110018 to disappear Jun 8 11:44:02.548: INFO: Pod pod-configmaps-56fa33b5-a97d-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:44:02.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-4j9xt" for this suite. 
Jun 8 11:44:08.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:44:08.598: INFO: namespace: e2e-tests-configmap-4j9xt, resource: bindings, ignored listing per whitelist Jun 8 11:44:08.691: INFO: namespace e2e-tests-configmap-4j9xt deletion completed in 6.137837574s • [SLOW TEST:10.437 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:44:08.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-957kk STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 8 11:44:08.826: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 8 11:44:28.943: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.223:8080/dial?request=hostName&protocol=http&host=10.244.2.124&port=8080&tries=1'] 
Namespace:e2e-tests-pod-network-test-957kk PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 8 11:44:28.943: INFO: >>> kubeConfig: /root/.kube/config I0608 11:44:28.982720 6 log.go:172] (0xc0012362c0) (0xc00144dea0) Create stream I0608 11:44:28.982756 6 log.go:172] (0xc0012362c0) (0xc00144dea0) Stream added, broadcasting: 1 I0608 11:44:28.986844 6 log.go:172] (0xc0012362c0) Reply frame received for 1 I0608 11:44:28.986888 6 log.go:172] (0xc0012362c0) (0xc001bb4dc0) Create stream I0608 11:44:28.986901 6 log.go:172] (0xc0012362c0) (0xc001bb4dc0) Stream added, broadcasting: 3 I0608 11:44:28.987985 6 log.go:172] (0xc0012362c0) Reply frame received for 3 I0608 11:44:28.988026 6 log.go:172] (0xc0012362c0) (0xc001ac59a0) Create stream I0608 11:44:28.988042 6 log.go:172] (0xc0012362c0) (0xc001ac59a0) Stream added, broadcasting: 5 I0608 11:44:28.989028 6 log.go:172] (0xc0012362c0) Reply frame received for 5 I0608 11:44:29.049724 6 log.go:172] (0xc0012362c0) Data frame received for 3 I0608 11:44:29.049748 6 log.go:172] (0xc001bb4dc0) (3) Data frame handling I0608 11:44:29.049764 6 log.go:172] (0xc001bb4dc0) (3) Data frame sent I0608 11:44:29.050288 6 log.go:172] (0xc0012362c0) Data frame received for 5 I0608 11:44:29.050310 6 log.go:172] (0xc001ac59a0) (5) Data frame handling I0608 11:44:29.050404 6 log.go:172] (0xc0012362c0) Data frame received for 3 I0608 11:44:29.050421 6 log.go:172] (0xc001bb4dc0) (3) Data frame handling I0608 11:44:29.052760 6 log.go:172] (0xc0012362c0) Data frame received for 1 I0608 11:44:29.052787 6 log.go:172] (0xc00144dea0) (1) Data frame handling I0608 11:44:29.052808 6 log.go:172] (0xc00144dea0) (1) Data frame sent I0608 11:44:29.052825 6 log.go:172] (0xc0012362c0) (0xc00144dea0) Stream removed, broadcasting: 1 I0608 11:44:29.052841 6 log.go:172] (0xc0012362c0) Go away received I0608 11:44:29.052991 6 log.go:172] (0xc0012362c0) (0xc00144dea0) Stream removed, 
broadcasting: 1 I0608 11:44:29.053015 6 log.go:172] (0xc0012362c0) (0xc001bb4dc0) Stream removed, broadcasting: 3 I0608 11:44:29.053028 6 log.go:172] (0xc0012362c0) (0xc001ac59a0) Stream removed, broadcasting: 5 Jun 8 11:44:29.053: INFO: Waiting for endpoints: map[] Jun 8 11:44:29.056: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.223:8080/dial?request=hostName&protocol=http&host=10.244.1.222&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-957kk PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 8 11:44:29.056: INFO: >>> kubeConfig: /root/.kube/config I0608 11:44:29.089689 6 log.go:172] (0xc001236790) (0xc001246280) Create stream I0608 11:44:29.089723 6 log.go:172] (0xc001236790) (0xc001246280) Stream added, broadcasting: 1 I0608 11:44:29.099305 6 log.go:172] (0xc001236790) Reply frame received for 1 I0608 11:44:29.099362 6 log.go:172] (0xc001236790) (0xc0012463c0) Create stream I0608 11:44:29.099377 6 log.go:172] (0xc001236790) (0xc0012463c0) Stream added, broadcasting: 3 I0608 11:44:29.100946 6 log.go:172] (0xc001236790) Reply frame received for 3 I0608 11:44:29.100975 6 log.go:172] (0xc001236790) (0xc000f0f4a0) Create stream I0608 11:44:29.100990 6 log.go:172] (0xc001236790) (0xc000f0f4a0) Stream added, broadcasting: 5 I0608 11:44:29.102568 6 log.go:172] (0xc001236790) Reply frame received for 5 I0608 11:44:29.173515 6 log.go:172] (0xc001236790) Data frame received for 3 I0608 11:44:29.173566 6 log.go:172] (0xc0012463c0) (3) Data frame handling I0608 11:44:29.173590 6 log.go:172] (0xc0012463c0) (3) Data frame sent I0608 11:44:29.173825 6 log.go:172] (0xc001236790) Data frame received for 3 I0608 11:44:29.173844 6 log.go:172] (0xc0012463c0) (3) Data frame handling I0608 11:44:29.173934 6 log.go:172] (0xc001236790) Data frame received for 5 I0608 11:44:29.173951 6 log.go:172] (0xc000f0f4a0) (5) Data frame handling I0608 11:44:29.175593 6 log.go:172] 
(0xc001236790) Data frame received for 1 I0608 11:44:29.175612 6 log.go:172] (0xc001246280) (1) Data frame handling I0608 11:44:29.175622 6 log.go:172] (0xc001246280) (1) Data frame sent I0608 11:44:29.175633 6 log.go:172] (0xc001236790) (0xc001246280) Stream removed, broadcasting: 1 I0608 11:44:29.175645 6 log.go:172] (0xc001236790) Go away received I0608 11:44:29.175809 6 log.go:172] (0xc001236790) (0xc001246280) Stream removed, broadcasting: 1 I0608 11:44:29.175861 6 log.go:172] (0xc001236790) (0xc0012463c0) Stream removed, broadcasting: 3 I0608 11:44:29.175912 6 log.go:172] (0xc001236790) (0xc000f0f4a0) Stream removed, broadcasting: 5 Jun 8 11:44:29.175: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:44:29.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-957kk" for this suite. Jun 8 11:44:53.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:44:53.275: INFO: namespace: e2e-tests-pod-network-test-957kk, resource: bindings, ignored listing per whitelist Jun 8 11:44:53.279: INFO: namespace e2e-tests-pod-network-test-957kk deletion completed in 24.099149571s • [SLOW TEST:44.588 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:44:53.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-77cd9939-a97d-11ea-978f-0242ac110018 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-77cd9939-a97d-11ea-978f-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:46:21.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-6k85l" for this suite. 
Jun 8 11:46:43.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:46:43.947: INFO: namespace: e2e-tests-projected-6k85l, resource: bindings, ignored listing per whitelist Jun 8 11:46:43.984: INFO: namespace e2e-tests-projected-6k85l deletion completed in 22.116918685s • [SLOW TEST:110.705 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:46:43.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium Jun 8 11:46:44.153: INFO: Waiting up to 5m0s for pod "pod-b9cd7409-a97d-11ea-978f-0242ac110018" in namespace "e2e-tests-emptydir-6mt4q" to be "success or failure" Jun 8 11:46:44.164: INFO: Pod "pod-b9cd7409-a97d-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.822839ms Jun 8 11:46:46.168: INFO: Pod "pod-b9cd7409-a97d-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.014875777s Jun 8 11:46:48.173: INFO: Pod "pod-b9cd7409-a97d-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019649871s STEP: Saw pod success Jun 8 11:46:48.173: INFO: Pod "pod-b9cd7409-a97d-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 11:46:48.176: INFO: Trying to get logs from node hunter-worker2 pod pod-b9cd7409-a97d-11ea-978f-0242ac110018 container test-container: STEP: delete the pod Jun 8 11:46:48.195: INFO: Waiting for pod pod-b9cd7409-a97d-11ea-978f-0242ac110018 to disappear Jun 8 11:46:48.200: INFO: Pod pod-b9cd7409-a97d-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:46:48.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-6mt4q" for this suite. Jun 8 11:46:54.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:46:54.282: INFO: namespace: e2e-tests-emptydir-6mt4q, resource: bindings, ignored listing per whitelist Jun 8 11:46:54.294: INFO: namespace e2e-tests-emptydir-6mt4q deletion completed in 6.090023858s • [SLOW TEST:10.310 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:46:54.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Jun 8 11:46:54.380: INFO: Waiting up to 5m0s for pod "client-containers-bfe49eb8-a97d-11ea-978f-0242ac110018" in namespace "e2e-tests-containers-sfcn2" to be "success or failure" Jun 8 11:46:54.398: INFO: Pod "client-containers-bfe49eb8-a97d-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 17.823899ms Jun 8 11:46:56.402: INFO: Pod "client-containers-bfe49eb8-a97d-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021988023s Jun 8 11:46:58.406: INFO: Pod "client-containers-bfe49eb8-a97d-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026416186s STEP: Saw pod success Jun 8 11:46:58.406: INFO: Pod "client-containers-bfe49eb8-a97d-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 11:46:58.409: INFO: Trying to get logs from node hunter-worker pod client-containers-bfe49eb8-a97d-11ea-978f-0242ac110018 container test-container: STEP: delete the pod Jun 8 11:46:58.601: INFO: Waiting for pod client-containers-bfe49eb8-a97d-11ea-978f-0242ac110018 to disappear Jun 8 11:46:58.625: INFO: Pod client-containers-bfe49eb8-a97d-11ea-978f-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:46:58.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-sfcn2" for this suite. Jun 8 11:47:04.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:47:04.681: INFO: namespace: e2e-tests-containers-sfcn2, resource: bindings, ignored listing per whitelist Jun 8 11:47:04.739: INFO: namespace e2e-tests-containers-sfcn2 deletion completed in 6.109094961s • [SLOW TEST:10.445 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:47:04.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 8 11:47:04.813: INFO: Creating deployment "test-recreate-deployment" Jun 8 11:47:04.822: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jun 8 11:47:04.831: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Jun 8 11:47:06.838: INFO: Waiting deployment "test-recreate-deployment" to complete Jun 8 11:47:06.840: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727213624, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727213624, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727213624, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727213624, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 8 11:47:08.844: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jun 8 11:47:08.850: INFO: Updating deployment test-recreate-deployment Jun 8 11:47:08.850: INFO: Watching deployment "test-recreate-deployment" to 
verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jun 8 11:47:09.549: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-x44t5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x44t5/deployments/test-recreate-deployment,UID:c61f0195-a97d-11ea-99e8-0242ac110002,ResourceVersion:14866345,Generation:2,CreationTimestamp:2020-06-08 11:47:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-06-08 11:47:09 +0000 UTC 2020-06-08 11:47:09 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-06-08 11:47:09 +0000 UTC 2020-06-08 11:47:04 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Jun 8 11:47:09.553: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-x44t5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x44t5/replicasets/test-recreate-deployment-589c4bfd,UID:c89b9ff3-a97d-11ea-99e8-0242ac110002,ResourceVersion:14866339,Generation:1,CreationTimestamp:2020-06-08 11:47:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment c61f0195-a97d-11ea-99e8-0242ac110002 0xc001fb207f 0xc001fb2090}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 8 11:47:09.553: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jun 8 11:47:09.553: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-x44t5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x44t5/replicasets/test-recreate-deployment-5bf7f65dc,UID:c6217f18-a97d-11ea-99e8-0242ac110002,ResourceVersion:14866331,Generation:2,CreationTimestamp:2020-06-08 11:47:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment c61f0195-a97d-11ea-99e8-0242ac110002 0xc001fb2150 0xc001fb2151}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 8 11:47:09.556: INFO: Pod "test-recreate-deployment-589c4bfd-7bxhp" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-7bxhp,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-x44t5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x44t5/pods/test-recreate-deployment-589c4bfd-7bxhp,UID:c89f4e88-a97d-11ea-99e8-0242ac110002,ResourceVersion:14866346,Generation:0,CreationTimestamp:2020-06-08 11:47:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd c89b9ff3-a97d-11ea-99e8-0242ac110002 0xc001f0922f 0xc001f09240}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-7swqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7swqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7swqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f092b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f092d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:47:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:47:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:47:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:47:09 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-08 11:47:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:47:09.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-x44t5" for this suite. 
Jun 8 11:47:17.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:47:17.642: INFO: namespace: e2e-tests-deployment-x44t5, resource: bindings, ignored listing per whitelist Jun 8 11:47:17.660: INFO: namespace e2e-tests-deployment-x44t5 deletion completed in 8.099953873s • [SLOW TEST:12.921 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:47:17.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 8 11:47:17.795: INFO: Waiting up to 5m0s for pod "pod-cdd90675-a97d-11ea-978f-0242ac110018" in namespace "e2e-tests-emptydir-jfs7k" to be "success or failure" Jun 8 11:47:17.798: INFO: Pod "pod-cdd90675-a97d-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.959644ms Jun 8 11:47:19.815: INFO: Pod "pod-cdd90675-a97d-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.019366932s Jun 8 11:47:21.819: INFO: Pod "pod-cdd90675-a97d-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023296667s STEP: Saw pod success Jun 8 11:47:21.819: INFO: Pod "pod-cdd90675-a97d-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 11:47:21.822: INFO: Trying to get logs from node hunter-worker2 pod pod-cdd90675-a97d-11ea-978f-0242ac110018 container test-container: STEP: delete the pod Jun 8 11:47:21.879: INFO: Waiting for pod pod-cdd90675-a97d-11ea-978f-0242ac110018 to disappear Jun 8 11:47:21.910: INFO: Pod pod-cdd90675-a97d-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:47:21.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-jfs7k" for this suite. Jun 8 11:47:27.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:47:27.994: INFO: namespace: e2e-tests-emptydir-jfs7k, resource: bindings, ignored listing per whitelist Jun 8 11:47:27.998: INFO: namespace e2e-tests-emptydir-jfs7k deletion completed in 6.083849248s • [SLOW TEST:10.338 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:47:27.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jun 8 11:47:28.063: INFO: PodSpec: initContainers in spec.initContainers Jun 8 11:48:18.436: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-d3fa806b-a97d-11ea-978f-0242ac110018", GenerateName:"", Namespace:"e2e-tests-init-container-pn48v", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-pn48v/pods/pod-init-d3fa806b-a97d-11ea-978f-0242ac110018", UID:"d3fb31c3-a97d-11ea-99e8-0242ac110002", ResourceVersion:"14866557", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63727213648, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"63709532"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-b99zm", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), 
Secret:(*v1.SecretVolumeSource)(0xc00235d300), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-b99zm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", 
Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-b99zm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-b99zm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002228f68), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002120cc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002228ff0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002229010)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002229018), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00222901c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727213648, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727213648, loc:(*time.Location)(0x7950ac0)}}, 
Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727213648, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727213648, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.226", StartTime:(*v1.Time)(0xc001a3eda0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001a5c930)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001a5c9a0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://f6ec830d87350efebe31fce19dea04a888b0376820bbda037ec14f272f553095"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001a3ede0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, 
ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001a3edc0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:48:18.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-pn48v" for this suite. Jun 8 11:48:40.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:48:40.522: INFO: namespace: e2e-tests-init-container-pn48v, resource: bindings, ignored listing per whitelist Jun 8 11:48:40.591: INFO: namespace e2e-tests-init-container-pn48v deletion completed in 22.111818542s • [SLOW TEST:72.593 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client 
Jun 8 11:48:40.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-ff4bf4aa-a97d-11ea-978f-0242ac110018 STEP: Creating configMap with name cm-test-opt-upd-ff4bf514-a97d-11ea-978f-0242ac110018 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-ff4bf4aa-a97d-11ea-978f-0242ac110018 STEP: Updating configmap cm-test-opt-upd-ff4bf514-a97d-11ea-978f-0242ac110018 STEP: Creating configMap with name cm-test-opt-create-ff4bf541-a97d-11ea-978f-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:48:51.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-czzwm" for this suite. 
Jun 8 11:49:13.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:49:13.312: INFO: namespace: e2e-tests-projected-czzwm, resource: bindings, ignored listing per whitelist Jun 8 11:49:13.350: INFO: namespace e2e-tests-projected-czzwm deletion completed in 22.157938577s • [SLOW TEST:32.758 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:49:13.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:49:19.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-qstvr" for this suite. 
Jun 8 11:49:59.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:49:59.718: INFO: namespace: e2e-tests-kubelet-test-qstvr, resource: bindings, ignored listing per whitelist Jun 8 11:49:59.768: INFO: namespace e2e-tests-kubelet-test-qstvr deletion completed in 40.178042665s • [SLOW TEST:46.418 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:49:59.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments Jun 8 11:49:59.914: INFO: Waiting up to 5m0s for pod "client-containers-2e7a0fbc-a97e-11ea-978f-0242ac110018" in namespace "e2e-tests-containers-58btk" to be "success or failure" Jun 8 11:49:59.917: INFO: Pod "client-containers-2e7a0fbc-a97e-11ea-978f-0242ac110018": Phase="Pending", Reason="", 
readiness=false. Elapsed: 3.65741ms Jun 8 11:50:01.922: INFO: Pod "client-containers-2e7a0fbc-a97e-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00798336s Jun 8 11:50:04.111: INFO: Pod "client-containers-2e7a0fbc-a97e-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.197226858s STEP: Saw pod success Jun 8 11:50:04.111: INFO: Pod "client-containers-2e7a0fbc-a97e-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 11:50:04.114: INFO: Trying to get logs from node hunter-worker pod client-containers-2e7a0fbc-a97e-11ea-978f-0242ac110018 container test-container: STEP: delete the pod Jun 8 11:50:04.146: INFO: Waiting for pod client-containers-2e7a0fbc-a97e-11ea-978f-0242ac110018 to disappear Jun 8 11:50:04.248: INFO: Pod client-containers-2e7a0fbc-a97e-11ea-978f-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:50:04.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-58btk" for this suite. 
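The Docker Containers test above overrides the image's default arguments (the Docker CMD) via the pod spec's `args` field. The documented ENTRYPOINT/CMD override rules can be summarized as follows; this is a sketch of those semantics, not code from the suite:

```python
def effective_command(entrypoint, cmd, command=None, args=None):
    """Combine image defaults with pod-spec overrides.

    Kubernetes semantics: `command` replaces the image ENTRYPOINT
    and `args` replaces the image CMD. If `command` is set but
    `args` is not, the image CMD is ignored entirely.
    """
    if command is None and args is None:
        return list(entrypoint) + list(cmd)      # image defaults
    if command is None:
        return list(entrypoint) + list(args)     # args override CMD only
    if args is None:
        return list(command)                     # command alone; CMD dropped
    return list(command) + list(args)
```

This test exercises the second branch: only `args` is set, so the image ENTRYPOINT runs with the overridden arguments.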
Jun 8 11:50:10.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:50:10.295: INFO: namespace: e2e-tests-containers-58btk, resource: bindings, ignored listing per whitelist Jun 8 11:50:10.343: INFO: namespace e2e-tests-containers-58btk deletion completed in 6.091394342s • [SLOW TEST:10.574 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:50:10.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace Jun 8 11:50:14.525: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:50:32.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-p7x8p" for this suite. Jun 8 11:50:38.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:50:38.721: INFO: namespace: e2e-tests-namespaces-p7x8p, resource: bindings, ignored listing per whitelist Jun 8 11:50:38.752: INFO: namespace e2e-tests-namespaces-p7x8p deletion completed in 6.103952906s STEP: Destroying namespace "e2e-tests-nsdeletetest-5gtz8" for this suite. Jun 8 11:50:38.755: INFO: Namespace e2e-tests-nsdeletetest-5gtz8 was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-9v7cs" for this suite. Jun 8 11:50:44.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:50:44.845: INFO: namespace: e2e-tests-nsdeletetest-9v7cs, resource: bindings, ignored listing per whitelist Jun 8 11:50:44.847: INFO: namespace e2e-tests-nsdeletetest-9v7cs deletion completed in 6.09148961s • [SLOW TEST:34.503 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:50:44.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 8 11:50:44.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-ddx42' Jun 8 11:50:47.686: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 8 11:50:47.686: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Jun 8 11:50:51.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-ddx42' Jun 8 11:50:52.072: INFO: stderr: "" Jun 8 11:50:52.072: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:50:52.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ddx42" for this suite. 
Jun 8 11:51:16.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:51:16.118: INFO: namespace: e2e-tests-kubectl-ddx42, resource: bindings, ignored listing per whitelist Jun 8 11:51:16.160: INFO: namespace e2e-tests-kubectl-ddx42 deletion completed in 24.081686045s • [SLOW TEST:31.314 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:51:16.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 8 11:51:16.274: INFO: Creating ReplicaSet my-hostname-basic-5c00bf5b-a97e-11ea-978f-0242ac110018 Jun 8 11:51:16.291: INFO: Pod name my-hostname-basic-5c00bf5b-a97e-11ea-978f-0242ac110018: Found 0 pods out of 1 Jun 8 11:51:21.295: INFO: Pod name my-hostname-basic-5c00bf5b-a97e-11ea-978f-0242ac110018: Found 1 pods out of 1 Jun 8 11:51:21.295: INFO: Ensuring a pod for ReplicaSet 
"my-hostname-basic-5c00bf5b-a97e-11ea-978f-0242ac110018" is running Jun 8 11:51:21.297: INFO: Pod "my-hostname-basic-5c00bf5b-a97e-11ea-978f-0242ac110018-hdkvm" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-08 11:51:16 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-08 11:51:19 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-08 11:51:19 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-08 11:51:16 +0000 UTC Reason: Message:}]) Jun 8 11:51:21.297: INFO: Trying to dial the pod Jun 8 11:51:26.308: INFO: Controller my-hostname-basic-5c00bf5b-a97e-11ea-978f-0242ac110018: Got expected result from replica 1 [my-hostname-basic-5c00bf5b-a97e-11ea-978f-0242ac110018-hdkvm]: "my-hostname-basic-5c00bf5b-a97e-11ea-978f-0242ac110018-hdkvm", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:51:26.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-9mrnn" for this suite. 
Jun 8 11:51:32.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:51:32.553: INFO: namespace: e2e-tests-replicaset-9mrnn, resource: bindings, ignored listing per whitelist Jun 8 11:51:32.578: INFO: namespace e2e-tests-replicaset-9mrnn deletion completed in 6.266184935s • [SLOW TEST:16.417 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:51:32.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-65c71e95-a97e-11ea-978f-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 8 11:51:32.700: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-65c7c6e1-a97e-11ea-978f-0242ac110018" in namespace "e2e-tests-projected-vdmtf" to be "success or failure" Jun 8 11:51:32.722: INFO: Pod "pod-projected-configmaps-65c7c6e1-a97e-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 21.743715ms Jun 8 11:51:35.099: INFO: Pod "pod-projected-configmaps-65c7c6e1-a97e-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.398571932s Jun 8 11:51:37.103: INFO: Pod "pod-projected-configmaps-65c7c6e1-a97e-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.402930896s Jun 8 11:51:39.107: INFO: Pod "pod-projected-configmaps-65c7c6e1-a97e-11ea-978f-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 6.406894427s Jun 8 11:51:41.111: INFO: Pod "pod-projected-configmaps-65c7c6e1-a97e-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.410711504s STEP: Saw pod success Jun 8 11:51:41.111: INFO: Pod "pod-projected-configmaps-65c7c6e1-a97e-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 11:51:41.114: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-65c7c6e1-a97e-11ea-978f-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod Jun 8 11:51:41.432: INFO: Waiting for pod pod-projected-configmaps-65c7c6e1-a97e-11ea-978f-0242ac110018 to disappear Jun 8 11:51:41.483: INFO: Pod pod-projected-configmaps-65c7c6e1-a97e-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:51:41.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vdmtf" for this suite. 
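"Consumable with mappings" above means the configMap volume uses the `items` field to remap specific keys onto chosen file paths instead of projecting every key at its own name. The remapping itself is simple to sketch (illustrative helper, not from the suite):

```python
def project_items(data, items):
    """Map configMap keys to file paths, as the volume's `items` field does.

    `data` is the configMap's key/value data; `items` is a list of
    {"key": ..., "path": ...} mappings. Keys not listed are omitted.
    """
    return {it["path"]: data[it["key"]] for it in items}
```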
Jun 8 11:51:47.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:51:47.597: INFO: namespace: e2e-tests-projected-vdmtf, resource: bindings, ignored listing per whitelist Jun 8 11:51:47.642: INFO: namespace e2e-tests-projected-vdmtf deletion completed in 6.154923781s • [SLOW TEST:15.064 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:51:47.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-6ec51bb2-a97e-11ea-978f-0242ac110018 STEP: Creating a pod to test consume secrets Jun 8 11:51:47.773: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6ec59ef7-a97e-11ea-978f-0242ac110018" in namespace "e2e-tests-projected-rrd2l" to be "success or failure" Jun 8 11:51:47.777: INFO: Pod "pod-projected-secrets-6ec59ef7-a97e-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.616587ms Jun 8 11:51:49.780: INFO: Pod "pod-projected-secrets-6ec59ef7-a97e-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006808295s Jun 8 11:51:51.784: INFO: Pod "pod-projected-secrets-6ec59ef7-a97e-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011002986s STEP: Saw pod success Jun 8 11:51:51.784: INFO: Pod "pod-projected-secrets-6ec59ef7-a97e-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 11:51:51.787: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-6ec59ef7-a97e-11ea-978f-0242ac110018 container secret-volume-test: STEP: delete the pod Jun 8 11:51:51.805: INFO: Waiting for pod pod-projected-secrets-6ec59ef7-a97e-11ea-978f-0242ac110018 to disappear Jun 8 11:51:51.810: INFO: Pod pod-projected-secrets-6ec59ef7-a97e-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:51:51.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-rrd2l" for this suite. 
Jun 8 11:51:57.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:51:57.892: INFO: namespace: e2e-tests-projected-rrd2l, resource: bindings, ignored listing per whitelist Jun 8 11:51:58.009: INFO: namespace e2e-tests-projected-rrd2l deletion completed in 6.195978508s • [SLOW TEST:10.367 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:51:58.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 8 11:51:58.130: INFO: Waiting up to 5m0s for pod "downwardapi-volume-74f15638-a97e-11ea-978f-0242ac110018" in namespace "e2e-tests-downward-api-l52dv" to be "success or failure" Jun 8 11:51:58.134: INFO: Pod "downwardapi-volume-74f15638-a97e-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.666322ms Jun 8 11:52:00.138: INFO: Pod "downwardapi-volume-74f15638-a97e-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008072469s Jun 8 11:52:02.141: INFO: Pod "downwardapi-volume-74f15638-a97e-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010862111s STEP: Saw pod success Jun 8 11:52:02.141: INFO: Pod "downwardapi-volume-74f15638-a97e-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 11:52:02.142: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-74f15638-a97e-11ea-978f-0242ac110018 container client-container: STEP: delete the pod Jun 8 11:52:02.281: INFO: Waiting for pod downwardapi-volume-74f15638-a97e-11ea-978f-0242ac110018 to disappear Jun 8 11:52:02.367: INFO: Pod downwardapi-volume-74f15638-a97e-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:52:02.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-l52dv" for this suite. 
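The "podname only" test above mounts a downward API volume whose file is populated from the `metadata.name` fieldRef. A minimal sketch of that fieldRef resolution over a pod object; the helper and the supported paths are hypothetical and cover only the metadata fields this kind of test uses:

```python
def resolve_field_ref(pod, field_path):
    """Resolve a small subset of downward-API fieldRef paths.

    Illustrative helper: `pod` is a dict-shaped pod object; only
    the metadata paths used by the podname test are handled.
    """
    fields = {
        "metadata.name": pod["metadata"]["name"],
        "metadata.namespace": pod["metadata"]["namespace"],
    }
    return fields[field_path]
```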
Jun 8 11:52:08.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:52:08.472: INFO: namespace: e2e-tests-downward-api-l52dv, resource: bindings, ignored listing per whitelist Jun 8 11:52:08.522: INFO: namespace e2e-tests-downward-api-l52dv deletion completed in 6.151245746s • [SLOW TEST:10.513 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:52:08.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-7b366f79-a97e-11ea-978f-0242ac110018 STEP: Creating a pod to test consume secrets Jun 8 11:52:08.666: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7b39e19b-a97e-11ea-978f-0242ac110018" in namespace "e2e-tests-projected-tv6v4" to be "success or failure" Jun 8 11:52:08.670: INFO: Pod "pod-projected-secrets-7b39e19b-a97e-11ea-978f-0242ac110018": 
Phase="Pending", Reason="", readiness=false. Elapsed: 3.469448ms Jun 8 11:52:10.673: INFO: Pod "pod-projected-secrets-7b39e19b-a97e-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006418567s Jun 8 11:52:12.676: INFO: Pod "pod-projected-secrets-7b39e19b-a97e-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009408183s STEP: Saw pod success Jun 8 11:52:12.676: INFO: Pod "pod-projected-secrets-7b39e19b-a97e-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 11:52:12.678: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-7b39e19b-a97e-11ea-978f-0242ac110018 container projected-secret-volume-test: STEP: delete the pod Jun 8 11:52:12.714: INFO: Waiting for pod pod-projected-secrets-7b39e19b-a97e-11ea-978f-0242ac110018 to disappear Jun 8 11:52:12.765: INFO: Pod pod-projected-secrets-7b39e19b-a97e-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:52:12.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-tv6v4" for this suite. 
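"Item Mode set" in the secret test above refers to the per-item `mode` field on the projected volume, which overrides the volume-wide `defaultMode` for that one file. The precedence can be sketched as follows (assuming the documented 0644 fallback when neither is set; helper name is illustrative):

```python
def effective_mode(default_mode, item_mode=None):
    """File mode for a projected item: per-item `mode` wins over the
    volume's `defaultMode`, which itself falls back to 0o644 if unset."""
    if item_mode is not None:
        return item_mode
    return default_mode if default_mode is not None else 0o644
```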
Jun 8 11:52:18.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:52:18.848: INFO: namespace: e2e-tests-projected-tv6v4, resource: bindings, ignored listing per whitelist Jun 8 11:52:18.883: INFO: namespace e2e-tests-projected-tv6v4 deletion completed in 6.113709582s • [SLOW TEST:10.361 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:52:18.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 8 11:52:19.016: INFO: Waiting up to 5m0s for pod "downwardapi-volume-81656e3d-a97e-11ea-978f-0242ac110018" in namespace "e2e-tests-projected-qb6nm" to be "success or failure" Jun 8 11:52:19.027: INFO: Pod "downwardapi-volume-81656e3d-a97e-11ea-978f-0242ac110018": 
Phase="Pending", Reason="", readiness=false. Elapsed: 10.816792ms Jun 8 11:52:21.032: INFO: Pod "downwardapi-volume-81656e3d-a97e-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015388344s Jun 8 11:52:23.039: INFO: Pod "downwardapi-volume-81656e3d-a97e-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022168234s STEP: Saw pod success Jun 8 11:52:23.039: INFO: Pod "downwardapi-volume-81656e3d-a97e-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 11:52:23.044: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-81656e3d-a97e-11ea-978f-0242ac110018 container client-container: STEP: delete the pod Jun 8 11:52:23.057: INFO: Waiting for pod downwardapi-volume-81656e3d-a97e-11ea-978f-0242ac110018 to disappear Jun 8 11:52:23.062: INFO: Pod downwardapi-volume-81656e3d-a97e-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:52:23.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qb6nm" for this suite. 
Jun 8 11:52:29.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:52:29.089: INFO: namespace: e2e-tests-projected-qb6nm, resource: bindings, ignored listing per whitelist Jun 8 11:52:29.168: INFO: namespace e2e-tests-projected-qb6nm deletion completed in 6.103136134s • [SLOW TEST:10.285 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:52:29.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-n24h5 Jun 8 11:52:33.300: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-n24h5 STEP: checking the pod's current state and verifying that restartCount is present Jun 8 11:52:33.302: INFO: Initial restart count of pod liveness-http is 0 Jun 8 
11:52:55.441: INFO: Restart count of pod e2e-tests-container-probe-n24h5/liveness-http is now 1 (22.138391285s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:52:55.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-n24h5" for this suite. Jun 8 11:53:01.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:53:01.545: INFO: namespace: e2e-tests-container-probe-n24h5, resource: bindings, ignored listing per whitelist Jun 8 11:53:01.566: INFO: namespace e2e-tests-container-probe-n24h5 deletion completed in 6.106755119s • [SLOW TEST:32.397 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:53:01.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: 
creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Jun 8 11:53:05.798: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-9ad60243-a97e-11ea-978f-0242ac110018", GenerateName:"", Namespace:"e2e-tests-pods-w5d4v", SelfLink:"/api/v1/namespaces/e2e-tests-pods-w5d4v/pods/pod-submit-remove-9ad60243-a97e-11ea-978f-0242ac110018", UID:"9ada7779-a97e-11ea-99e8-0242ac110002", ResourceVersion:"14867505", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63727213981, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"691150261"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-x76mf", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001d90200), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), 
Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-x76mf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0022a3d58), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0011bde00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc0022a3dd0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0022a3df0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0022a3df8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0022a3dfc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727213981, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727213985, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727213985, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727213981, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"10.244.2.135", StartTime:(*v1.Time)(0xc00191a280), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00191a2a0), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://f8d8ea6545198e774dfa82396849b952f03d7211a85d7f163d2f63359f1a45fa"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:53:11.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-w5d4v" for this suite. Jun 8 11:53:17.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:53:17.845: INFO: namespace: e2e-tests-pods-w5d4v, resource: bindings, ignored listing per whitelist Jun 8 11:53:17.846: INFO: namespace e2e-tests-pods-w5d4v deletion completed in 6.111078449s • [SLOW TEST:16.280 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:53:17.846: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 8 11:53:18.090: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"a495534d-a97e-11ea-99e8-0242ac110002", Controller:(*bool)(0xc002444032), BlockOwnerDeletion:(*bool)(0xc002444033)}} Jun 8 11:53:18.124: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"a48673e4-a97e-11ea-99e8-0242ac110002", Controller:(*bool)(0xc002106d7a), BlockOwnerDeletion:(*bool)(0xc002106d7b)}} Jun 8 11:53:18.133: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"a486fb88-a97e-11ea-99e8-0242ac110002", Controller:(*bool)(0xc00215a812), BlockOwnerDeletion:(*bool)(0xc00215a813)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:53:23.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-2jmqv" for this suite. 
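The three log records above show the garbage-collector test wiring pod1, pod2, and pod3 into an ownerReference circle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2) and verifying that collection is not blocked by the cycle. A minimal toy model of that behavior, assuming a simple fixed-point sweep rather than the real controller's graph algorithm (the `collect` helper and its loop are illustrative, not Kubernetes code):

```python
def collect(objects, owners):
    """Repeatedly delete objects none of whose listed owners still exist.

    objects: set of live object names
    owners:  mapping object -> set of owner names it depends on
    """
    changed = True
    while changed:
        changed = False
        for obj in list(objects):
            refs = owners.get(obj, set())
            # An object with owner references becomes garbage once
            # none of its owners are still live.
            if refs and not (refs & objects):
                objects.discard(obj)
                changed = True
    return objects

# pod1 owned by pod3, pod2 by pod1, pod3 by pod2 -- a dependency circle,
# mirroring the OwnerReferences logged above.
live = {"pod1", "pod2", "pod3"}
deps = {"pod1": {"pod3"}, "pod2": {"pod1"}, "pod3": {"pod2"}}

live.discard("pod1")          # deleting any one member breaks the circle
remaining = collect(live, deps)
print(remaining)              # set() -- the rest of the circle cascades
```

While the circle is intact every member still has a live owner, so nothing is collected; once one member is deleted the remaining dependents lose their only live owner and are removed in turn, which is why the circle does not deadlock the collector.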
Jun 8 11:53:29.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:53:29.375: INFO: namespace: e2e-tests-gc-2jmqv, resource: bindings, ignored listing per whitelist Jun 8 11:53:29.442: INFO: namespace e2e-tests-gc-2jmqv deletion completed in 6.259066896s • [SLOW TEST:11.596 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:53:29.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 8 11:53:29.615: INFO: Creating deployment "nginx-deployment" Jun 8 11:53:29.627: INFO: Waiting for observed generation 1 Jun 8 11:53:31.695: INFO: Waiting for all required pods to come up Jun 8 11:53:31.700: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Jun 8 11:53:45.712: INFO: Waiting for deployment "nginx-deployment" to complete Jun 8 11:53:45.719: INFO: Updating deployment "nginx-deployment" with a 
non-existent image Jun 8 11:53:45.726: INFO: Updating deployment nginx-deployment Jun 8 11:53:45.726: INFO: Waiting for observed generation 2 Jun 8 11:53:47.964: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jun 8 11:53:47.967: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jun 8 11:53:48.058: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jun 8 11:53:48.296: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jun 8 11:53:48.296: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jun 8 11:53:48.875: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jun 8 11:53:48.918: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Jun 8 11:53:48.918: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Jun 8 11:53:48.924: INFO: Updating deployment nginx-deployment Jun 8 11:53:48.924: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Jun 8 11:53:49.211: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jun 8 11:53:49.221: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jun 8 11:53:49.490: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x9jjm/deployments/nginx-deployment,UID:ab7b2311-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867836,Generation:3,CreationTimestamp:2020-06-08 11:53:29 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-06-08 11:53:46 +0000 UTC 2020-06-08 11:53:29 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-06-08 11:53:49 +0000 UTC 2020-06-08 11:53:49 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Jun 8 11:53:49.640: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x9jjm/replicasets/nginx-deployment-5c98f8fb5,UID:b5156493-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867878,Generation:3,CreationTimestamp:2020-06-08 11:53:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment ab7b2311-a97e-11ea-99e8-0242ac110002 0xc0026ed117 0xc0026ed118}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 8 11:53:49.640: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jun 8 11:53:49.641: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x9jjm/replicasets/nginx-deployment-85ddf47c5d,UID:ab7ed1f2-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867879,Generation:3,CreationTimestamp:2020-06-08 11:53:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment ab7b2311-a97e-11ea-99e8-0242ac110002 0xc0026ed1d7 0xc0026ed1d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jun 8 11:53:49.805: INFO: Pod "nginx-deployment-5c98f8fb5-2g58j" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-2g58j,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-5c98f8fb5-2g58j,UID:b5431e54-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867812,Generation:0,CreationTimestamp:2020-06-08 11:53:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 b5156493-a97e-11ea-99e8-0242ac110002 0xc0026ede20 0xc0026ede21}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026edea0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0026edec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-08 11:53:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.805: INFO: Pod "nginx-deployment-5c98f8fb5-4wznq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-4wznq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-5c98f8fb5-4wznq,UID:b518c2fe-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867796,Generation:0,CreationTimestamp:2020-06-08 11:53:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 b5156493-a97e-11ea-99e8-0242ac110002 0xc0026edf87 0xc0026edf88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025126e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025128f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-06-08 11:53:45 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.805: INFO: Pod "nginx-deployment-5c98f8fb5-6jgw8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-6jgw8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-5c98f8fb5-6jgw8,UID:b74719e8-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867880,Generation:0,CreationTimestamp:2020-06-08 11:53:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 b5156493-a97e-11ea-99e8-0242ac110002 0xc002512a37 0xc002512a38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002512b30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002512b50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.806: INFO: Pod "nginx-deployment-5c98f8fb5-bvwbf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bvwbf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-5c98f8fb5-bvwbf,UID:b515f144-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867787,Generation:0,CreationTimestamp:2020-06-08 11:53:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 b5156493-a97e-11ea-99e8-0242ac110002 0xc002512c77 0xc002512c78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002512cf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002512d10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-08 11:53:45 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.806: INFO: Pod "nginx-deployment-5c98f8fb5-dt8xw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dt8xw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-5c98f8fb5-dt8xw,UID:b733d11a-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867864,Generation:0,CreationTimestamp:2020-06-08 11:53:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 b5156493-a97e-11ea-99e8-0242ac110002 0xc002512f97 0xc002512f98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002513020} {node.kubernetes.io/unreachable Exists NoExecute 0xc002513130}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.806: INFO: Pod "nginx-deployment-5c98f8fb5-gd2cv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gd2cv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-5c98f8fb5-gd2cv,UID:b72b5fa7-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867889,Generation:0,CreationTimestamp:2020-06-08 11:53:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 b5156493-a97e-11ea-99e8-0242ac110002 0xc002513467 0xc002513468}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025134f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002513510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-06-08 11:53:49 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.806: INFO: Pod "nginx-deployment-5c98f8fb5-knnjx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-knnjx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-5c98f8fb5-knnjx,UID:b518aeac-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867786,Generation:0,CreationTimestamp:2020-06-08 11:53:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 b5156493-a97e-11ea-99e8-0242ac110002 0xc002513627 0xc002513628}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002513800} {node.kubernetes.io/unreachable Exists NoExecute 0xc002513820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-06-08 11:53:45 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.806: INFO: Pod "nginx-deployment-5c98f8fb5-kvglt" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-kvglt,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-5c98f8fb5-kvglt,UID:b7340294-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867870,Generation:0,CreationTimestamp:2020-06-08 11:53:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 b5156493-a97e-11ea-99e8-0242ac110002 0xc0025138f7 0xc0025138f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025139d0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0025139f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.806: INFO: Pod "nginx-deployment-5c98f8fb5-l95bs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-l95bs,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-5c98f8fb5-l95bs,UID:b72b4c5c-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867862,Generation:0,CreationTimestamp:2020-06-08 11:53:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 b5156493-a97e-11ea-99e8-0242ac110002 0xc002513a77 0xc002513a78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002513af0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002513b50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.806: INFO: Pod "nginx-deployment-5c98f8fb5-vjtzr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vjtzr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-5c98f8fb5-vjtzr,UID:b733ee42-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867871,Generation:0,CreationTimestamp:2020-06-08 11:53:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 b5156493-a97e-11ea-99e8-0242ac110002 0xc002513bd7 0xc002513bd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002513c50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002513c70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.806: INFO: Pod "nginx-deployment-5c98f8fb5-vx86t" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vx86t,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-5c98f8fb5-vx86t,UID:b7297036-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867888,Generation:0,CreationTimestamp:2020-06-08 11:53:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 b5156493-a97e-11ea-99e8-0242ac110002 0xc002513d27 0xc002513d28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002513db0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc002513dd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-08 11:53:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.806: INFO: Pod "nginx-deployment-5c98f8fb5-xmqw9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xmqw9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-5c98f8fb5-xmqw9,UID:b733c500-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867873,Generation:0,CreationTimestamp:2020-06-08 11:53:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 b5156493-a97e-11ea-99e8-0242ac110002 0xc002513f17 0xc002513f18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002513f90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002513fb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.807: INFO: Pod "nginx-deployment-5c98f8fb5-z8726" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-z8726,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-5c98f8fb5-z8726,UID:b5445817-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867814,Generation:0,CreationTimestamp:2020-06-08 11:53:46 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 b5156493-a97e-11ea-99e8-0242ac110002 0xc0025cc027 0xc0025cc028}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025cc0b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cc0d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:46 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-06-08 11:53:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.807: INFO: Pod "nginx-deployment-85ddf47c5d-287k6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-287k6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-85ddf47c5d-287k6,UID:b72b9df0-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867859,Generation:0,CreationTimestamp:2020-06-08 11:53:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ab7ed1f2-a97e-11ea-99e8-0242ac110002 0xc0025cc197 0xc0025cc198}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025cc210} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cc230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.807: INFO: Pod "nginx-deployment-85ddf47c5d-4qwg4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4qwg4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-85ddf47c5d-4qwg4,UID:b733f2c3-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867866,Generation:0,CreationTimestamp:2020-06-08 11:53:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet 
nginx-deployment-85ddf47c5d ab7ed1f2-a97e-11ea-99e8-0242ac110002 0xc0025cc2a7 0xc0025cc2a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025cc320} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cc340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.807: INFO: Pod "nginx-deployment-85ddf47c5d-4v8wp" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4v8wp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-85ddf47c5d-4v8wp,UID:b72b5888-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867848,Generation:0,CreationTimestamp:2020-06-08 11:53:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ab7ed1f2-a97e-11ea-99e8-0242ac110002 0xc0025cc3b7 0xc0025cc3b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0025cc430} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cc450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.807: INFO: Pod "nginx-deployment-85ddf47c5d-5tnc6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5tnc6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-85ddf47c5d-5tnc6,UID:b733eacc-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867869,Generation:0,CreationTimestamp:2020-06-08 11:53:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ab7ed1f2-a97e-11ea-99e8-0242ac110002 0xc0025cc4c7 0xc0025cc4c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025cc540} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cc560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.807: INFO: Pod "nginx-deployment-85ddf47c5d-7gc6f" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7gc6f,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-85ddf47c5d-7gc6f,UID:b72b93b1-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867861,Generation:0,CreationTimestamp:2020-06-08 11:53:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ab7ed1f2-a97e-11ea-99e8-0242ac110002 0xc0025cc5d7 0xc0025cc5d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025cc650} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cc670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.807: INFO: Pod "nginx-deployment-85ddf47c5d-7j222" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7j222,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-85ddf47c5d-7j222,UID:b7298619-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867876,Generation:0,CreationTimestamp:2020-06-08 11:53:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ab7ed1f2-a97e-11ea-99e8-0242ac110002 0xc0025cc6e7 0xc0025cc6e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0025cc760} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cc780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-06-08 11:53:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.807: INFO: Pod "nginx-deployment-85ddf47c5d-8dvnh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8dvnh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-85ddf47c5d-8dvnh,UID:b733eeeb-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867872,Generation:0,CreationTimestamp:2020-06-08 11:53:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ab7ed1f2-a97e-11ea-99e8-0242ac110002 0xc0025cc837 0xc0025cc838}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025cc8b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cc8d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.807: INFO: Pod "nginx-deployment-85ddf47c5d-99hxz" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-99hxz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-85ddf47c5d-99hxz,UID:ab8721da-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867715,Generation:0,CreationTimestamp:2020-06-08 11:53:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ab7ed1f2-a97e-11ea-99e8-0242ac110002 0xc0025cc947 0xc0025cc948}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0025cc9c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cc9e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.138,StartTime:2020-06-08 11:53:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-08 11:53:37 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://d1825fd845556b2d5146d200a2a883d52e062868c4091e1f43b87535ae5a5d80}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.807: INFO: Pod "nginx-deployment-85ddf47c5d-f8572" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-f8572,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-85ddf47c5d-f8572,UID:ab87cb50-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867748,Generation:0,CreationTimestamp:2020-06-08 11:53:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ab7ed1f2-a97e-11ea-99e8-0242ac110002 0xc0025ccaa7 0xc0025ccaa8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025ccb20} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025ccb40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.238,StartTime:2020-06-08 11:53:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-08 11:53:42 +0000 UTC,} nil} {nil 
nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://579dfa02aeb19fd6cfecf9ba5943ae1e668c785630b76c96711b6bae6e652888}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.808: INFO: Pod "nginx-deployment-85ddf47c5d-gfnxm" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gfnxm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-85ddf47c5d-gfnxm,UID:ab8b1de3-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867752,Generation:0,CreationTimestamp:2020-06-08 11:53:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ab7ed1f2-a97e-11ea-99e8-0242ac110002 0xc0025ccc07 0xc0025ccc08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025ccc80} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025ccca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.239,StartTime:2020-06-08 11:53:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-08 11:53:42 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c8d20a9dc058ade339c99ff09469b3182fb924f543d4b151b419bee765367b2a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.808: INFO: Pod "nginx-deployment-85ddf47c5d-jtc2w" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jtc2w,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-85ddf47c5d-jtc2w,UID:b729715f-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867842,Generation:0,CreationTimestamp:2020-06-08 11:53:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ab7ed1f2-a97e-11ea-99e8-0242ac110002 0xc0025ccd67 0xc0025ccd68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0025ccde0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cce00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.808: INFO: Pod "nginx-deployment-85ddf47c5d-ljhkr" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ljhkr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-85ddf47c5d-ljhkr,UID:ab87c653-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867723,Generation:0,CreationTimestamp:2020-06-08 11:53:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ab7ed1f2-a97e-11ea-99e8-0242ac110002 0xc0025cce77 0xc0025cce78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025ccef0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025ccf10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.139,StartTime:2020-06-08 11:53:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-08 11:53:38 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://7b1b2eb03665901c02ef4f0b651c5b1fee8239d23b659f7d06d270eb05f4f8b5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.808: INFO: Pod "nginx-deployment-85ddf47c5d-mwc5g" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mwc5g,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-85ddf47c5d-mwc5g,UID:ab87c715-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867739,Generation:0,CreationTimestamp:2020-06-08 11:53:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ab7ed1f2-a97e-11ea-99e8-0242ac110002 0xc0025ccfd7 0xc0025ccfd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0025cd050} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cd070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.140,StartTime:2020-06-08 11:53:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-08 11:53:41 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0ba7bd7b4af6897b038e23c766459ae05ea9bf2ad2caa6833306caf36f60a531}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.808: INFO: Pod "nginx-deployment-85ddf47c5d-nq2b2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nq2b2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-85ddf47c5d-nq2b2,UID:b733e2e2-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867874,Generation:0,CreationTimestamp:2020-06-08 11:53:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ab7ed1f2-a97e-11ea-99e8-0242ac110002 0xc0025cd167 0xc0025cd168}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025cd200} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cd220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.808: INFO: Pod "nginx-deployment-85ddf47c5d-snk58" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-snk58,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-85ddf47c5d-snk58,UID:ab872d14-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867724,Generation:0,CreationTimestamp:2020-06-08 11:53:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ab7ed1f2-a97e-11ea-99e8-0242ac110002 0xc0025cd297 0xc0025cd298}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0025cd340} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cd390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.236,StartTime:2020-06-08 11:53:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-08 11:53:38 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://09b8ebb303abe0cb0526ac8d87a4a676c68b97a396c7d4d89305d0b07ff62474}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.808: INFO: Pod "nginx-deployment-85ddf47c5d-swb5l" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-swb5l,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-85ddf47c5d-swb5l,UID:b733e31f-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867863,Generation:0,CreationTimestamp:2020-06-08 11:53:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ab7ed1f2-a97e-11ea-99e8-0242ac110002 0xc0025cd467 0xc0025cd468}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025cd4e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cd500}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.808: INFO: Pod "nginx-deployment-85ddf47c5d-t6kqf" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-t6kqf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-85ddf47c5d-t6kqf,UID:ab87d7b4-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867738,Generation:0,CreationTimestamp:2020-06-08 11:53:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ab7ed1f2-a97e-11ea-99e8-0242ac110002 0xc0025cd5f7 0xc0025cd5f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0025cd6b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cd6d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.237,StartTime:2020-06-08 11:53:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-08 11:53:41 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://bd0a67236859ad6097087fe5b1c92210c4eee9d8207c3e3d09f9480bf7f6994a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.808: INFO: Pod "nginx-deployment-85ddf47c5d-tvj2w" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tvj2w,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-85ddf47c5d-tvj2w,UID:b72b4e91-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867860,Generation:0,CreationTimestamp:2020-06-08 11:53:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ab7ed1f2-a97e-11ea-99e8-0242ac110002 0xc0025cd947 0xc0025cd948}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025cd9c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cd9e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.808: INFO: Pod "nginx-deployment-85ddf47c5d-tvq27" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tvq27,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-85ddf47c5d-tvq27,UID:ab86ae53-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867716,Generation:0,CreationTimestamp:2020-06-08 11:53:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ab7ed1f2-a97e-11ea-99e8-0242ac110002 0xc0025cda97 0xc0025cda98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0025cdb60} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cdb80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.235,StartTime:2020-06-08 11:53:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-08 11:53:36 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://922c9354184c40b9f404e2c7870bf263319ca321d6a25b557850fbf14d1bcad3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 8 11:53:49.809: INFO: Pod "nginx-deployment-85ddf47c5d-xscdb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xscdb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x9jjm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x9jjm/pods/nginx-deployment-85ddf47c5d-xscdb,UID:b7012928-a97e-11ea-99e8-0242ac110002,ResourceVersion:14867877,Generation:0,CreationTimestamp:2020-06-08 11:53:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ab7ed1f2-a97e-11ea-99e8-0242ac110002 0xc0025cdcb7 0xc0025cdcb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkttq {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-fkttq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fkttq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c28010} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c28040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 11:53:49 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-08 11:53:49 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:53:49.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-x9jjm" for this suite.
Jun 8 11:54:16.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:54:16.151: INFO: namespace: e2e-tests-deployment-x9jjm, resource: bindings, ignored listing per whitelist
Jun 8 11:54:16.156: INFO: namespace e2e-tests-deployment-x9jjm deletion completed in 26.335994536s
• [SLOW TEST:46.714 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:54:16.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0608 11:54:29.068220 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 8 11:54:29.068: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:54:29.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-dxsmk" for this suite.
Jun 8 11:54:37.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:54:37.312: INFO: namespace: e2e-tests-gc-dxsmk, resource: bindings, ignored listing per whitelist
Jun 8 11:54:37.339: INFO: namespace e2e-tests-gc-dxsmk deletion completed in 8.098941452s
• [SLOW TEST:21.183 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:54:37.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jun 8 11:54:37.847: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:54:48.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-wtwnh" for this suite.
Jun 8 11:54:54.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:54:54.481: INFO: namespace: e2e-tests-init-container-wtwnh, resource: bindings, ignored listing per whitelist
Jun 8 11:54:54.505: INFO: namespace e2e-tests-init-container-wtwnh deletion completed in 6.096966158s
• [SLOW TEST:17.166 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:54:54.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jun 8 11:54:54.634: INFO: Waiting up to 5m0s for pod "downwardapi-volume-de26dc13-a97e-11ea-978f-0242ac110018" in namespace "e2e-tests-downward-api-q7z7l" to be "success or failure"
Jun 8 11:54:54.636: INFO: Pod "downwardapi-volume-de26dc13-a97e-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070548ms
Jun 8 11:54:56.660: INFO: Pod "downwardapi-volume-de26dc13-a97e-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026369362s
Jun 8 11:54:58.701: INFO: Pod "downwardapi-volume-de26dc13-a97e-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06774615s
STEP: Saw pod success
Jun 8 11:54:58.702: INFO: Pod "downwardapi-volume-de26dc13-a97e-11ea-978f-0242ac110018" satisfied condition "success or failure"
Jun 8 11:54:58.705: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-de26dc13-a97e-11ea-978f-0242ac110018 container client-container:
STEP: delete the pod
Jun 8 11:54:58.748: INFO: Waiting for pod downwardapi-volume-de26dc13-a97e-11ea-978f-0242ac110018 to disappear
Jun 8 11:54:58.761: INFO: Pod downwardapi-volume-de26dc13-a97e-11ea-978f-0242ac110018 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:54:58.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-q7z7l" for this suite.
Jun 8 11:55:04.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:55:04.854: INFO: namespace: e2e-tests-downward-api-q7z7l, resource: bindings, ignored listing per whitelist
Jun 8 11:55:04.854: INFO: namespace e2e-tests-downward-api-q7z7l deletion completed in 6.089010677s
• [SLOW TEST:10.348 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:55:04.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jun 8 11:55:04.971: INFO: Waiting up to 5m0s for pod "pod-e44ea312-a97e-11ea-978f-0242ac110018" in namespace "e2e-tests-emptydir-48vrq" to be "success or failure"
Jun 8 11:55:04.975: INFO: Pod "pod-e44ea312-a97e-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.85178ms
Jun 8 11:55:06.978: INFO: Pod "pod-e44ea312-a97e-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007563232s
Jun 8 11:55:08.983: INFO: Pod "pod-e44ea312-a97e-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011981648s
STEP: Saw pod success
Jun 8 11:55:08.983: INFO: Pod "pod-e44ea312-a97e-11ea-978f-0242ac110018" satisfied condition "success or failure"
Jun 8 11:55:08.986: INFO: Trying to get logs from node hunter-worker pod pod-e44ea312-a97e-11ea-978f-0242ac110018 container test-container:
STEP: delete the pod
Jun 8 11:55:09.006: INFO: Waiting for pod pod-e44ea312-a97e-11ea-978f-0242ac110018 to disappear
Jun 8 11:55:09.023: INFO: Pod pod-e44ea312-a97e-11ea-978f-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:55:09.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-48vrq" for this suite.
Jun 8 11:55:15.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:55:15.054: INFO: namespace: e2e-tests-emptydir-48vrq, resource: bindings, ignored listing per whitelist
Jun 8 11:55:15.116: INFO: namespace e2e-tests-emptydir-48vrq deletion completed in 6.090095153s
• [SLOW TEST:10.262 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:55:15.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun 8 11:55:15.269: INFO: Waiting up to 5m0s for pod "pod-ea73a082-a97e-11ea-978f-0242ac110018" in namespace "e2e-tests-emptydir-fhg4l" to be "success or failure"
Jun 8 11:55:15.286: INFO: Pod "pod-ea73a082-a97e-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.989054ms
Jun 8 11:55:17.290: INFO: Pod "pod-ea73a082-a97e-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021098679s
Jun 8 11:55:19.295: INFO: Pod "pod-ea73a082-a97e-11ea-978f-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.026153317s
Jun 8 11:55:21.299: INFO: Pod "pod-ea73a082-a97e-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030264281s
STEP: Saw pod success
Jun 8 11:55:21.299: INFO: Pod "pod-ea73a082-a97e-11ea-978f-0242ac110018" satisfied condition "success or failure"
Jun 8 11:55:21.303: INFO: Trying to get logs from node hunter-worker pod pod-ea73a082-a97e-11ea-978f-0242ac110018 container test-container:
STEP: delete the pod
Jun 8 11:55:21.339: INFO: Waiting for pod pod-ea73a082-a97e-11ea-978f-0242ac110018 to disappear
Jun 8 11:55:21.348: INFO: Pod pod-ea73a082-a97e-11ea-978f-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:55:21.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-fhg4l" for this suite.
Jun 8 11:55:27.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:55:27.434: INFO: namespace: e2e-tests-emptydir-fhg4l, resource: bindings, ignored listing per whitelist
Jun 8 11:55:27.444: INFO: namespace e2e-tests-emptydir-fhg4l deletion completed in 6.092249836s
• [SLOW TEST:12.327 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:55:27.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-f1cd457e-a97e-11ea-978f-0242ac110018
STEP: Creating a pod to test consume configMaps
Jun 8 11:55:27.607: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f1cdfe81-a97e-11ea-978f-0242ac110018" in namespace "e2e-tests-projected-bxlpm" to be "success or failure"
Jun 8 11:55:27.646: INFO: Pod "pod-projected-configmaps-f1cdfe81-a97e-11ea-978f-0242ac110018": Phase="Pending", Reason="",
readiness=false. Elapsed: 38.428135ms
Jun 8 11:55:29.708: INFO: Pod "pod-projected-configmaps-f1cdfe81-a97e-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10088582s
Jun 8 11:55:31.840: INFO: Pod "pod-projected-configmaps-f1cdfe81-a97e-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.232397576s
STEP: Saw pod success
Jun 8 11:55:31.840: INFO: Pod "pod-projected-configmaps-f1cdfe81-a97e-11ea-978f-0242ac110018" satisfied condition "success or failure"
Jun 8 11:55:31.842: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-f1cdfe81-a97e-11ea-978f-0242ac110018 container projected-configmap-volume-test:
STEP: delete the pod
Jun 8 11:55:31.996: INFO: Waiting for pod pod-projected-configmaps-f1cdfe81-a97e-11ea-978f-0242ac110018 to disappear
Jun 8 11:55:31.999: INFO: Pod pod-projected-configmaps-f1cdfe81-a97e-11ea-978f-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:55:31.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bxlpm" for this suite.
Jun 8 11:55:38.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:55:38.039: INFO: namespace: e2e-tests-projected-bxlpm, resource: bindings, ignored listing per whitelist
Jun 8 11:55:38.094: INFO: namespace e2e-tests-projected-bxlpm deletion completed in 6.090432286s
• [SLOW TEST:10.650 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:55:38.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jun 8 11:55:38.218: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f81fab69-a97e-11ea-978f-0242ac110018" in namespace "e2e-tests-downward-api-vv96w" to be "success or failure"
Jun 8 11:55:38.222: INFO: Pod "downwardapi-volume-f81fab69-a97e-11ea-978f-0242ac110018":
Phase="Pending", Reason="", readiness=false. Elapsed: 3.73446ms
Jun 8 11:55:40.248: INFO: Pod "downwardapi-volume-f81fab69-a97e-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029923657s
Jun 8 11:55:42.252: INFO: Pod "downwardapi-volume-f81fab69-a97e-11ea-978f-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.034614067s
Jun 8 11:55:44.257: INFO: Pod "downwardapi-volume-f81fab69-a97e-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.039200246s
STEP: Saw pod success
Jun 8 11:55:44.257: INFO: Pod "downwardapi-volume-f81fab69-a97e-11ea-978f-0242ac110018" satisfied condition "success or failure"
Jun 8 11:55:44.260: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-f81fab69-a97e-11ea-978f-0242ac110018 container client-container:
STEP: delete the pod
Jun 8 11:55:44.284: INFO: Waiting for pod downwardapi-volume-f81fab69-a97e-11ea-978f-0242ac110018 to disappear
Jun 8 11:55:44.295: INFO: Pod downwardapi-volume-f81fab69-a97e-11ea-978f-0242ac110018 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:55:44.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vv96w" for this suite.
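Each of these volume tests follows the same pattern visible in the log: the framework polls the pod's phase every couple of seconds until it reaches a terminal phase or the 5m0s timeout expires. A minimal sketch of that wait loop in shell, where `phase` is a hypothetical stand-in for `kubectl get pod "$POD" -o jsonpath='{.status.phase}'` so the sketch runs without a cluster:

```shell
# Hypothetical stand-in for: kubectl get pod "$POD" -o jsonpath='{.status.phase}'
# A real run would query the API server; here we just return a terminal phase.
phase() { echo "Succeeded"; }

for i in $(seq 1 150); do            # ~5 minutes at a 2-second poll interval
  p="$(phase)"
  case "$p" in
    Succeeded|Failed)
      echo "pod reached terminal phase: $p"   # the "success or failure" condition
      break ;;
  esac
  sleep 2
done
```

The real framework additionally logs the elapsed time on every poll, which is what produces the repeated `Phase="Pending" ... Elapsed:` lines above.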
Jun 8 11:55:50.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:55:50.356: INFO: namespace: e2e-tests-downward-api-vv96w, resource: bindings, ignored listing per whitelist
Jun 8 11:55:50.402: INFO: namespace e2e-tests-downward-api-vv96w deletion completed in 6.104193135s
• [SLOW TEST:12.308 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:55:50.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jun 8 11:55:50.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
--namespace=e2e-tests-kubectl-4n5fm'
Jun 8 11:55:50.716: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jun 8 11:55:50.716: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Jun 8 11:55:52.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-4n5fm'
Jun 8 11:55:53.137: INFO: stderr: ""
Jun 8 11:55:53.137: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:55:53.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4n5fm" for this suite.
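The stderr warning in this test comes from the framework still using the old `--generator=deployment/apps.v1` form of `kubectl run`. On later kubectl versions, where that generator was removed, the equivalent non-deprecated invocation would be roughly the following (a sketch only; it assumes a reachable cluster and reuses the image and namespace from this test run):

```
kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine \
  --namespace=e2e-tests-kubectl-4n5fm
```

This is the `kubectl create instead` path the deprecation message itself suggests; `kubectl run` on current versions only creates bare pods.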
Jun 8 11:55:59.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:55:59.337: INFO: namespace: e2e-tests-kubectl-4n5fm, resource: bindings, ignored listing per whitelist
Jun 8 11:55:59.428: INFO: namespace e2e-tests-kubectl-4n5fm deletion completed in 6.155689773s
• [SLOW TEST:9.025 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:55:59.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-04d53264-a97f-11ea-978f-0242ac110018
STEP: Creating a pod to test consume configMaps
Jun 8 11:55:59.555: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-04d5c43a-a97f-11ea-978f-0242ac110018" in namespace "e2e-tests-projected-54gsh" to be "success or failure"
Jun 8 11:55:59.564: INFO: Pod
"pod-projected-configmaps-04d5c43a-a97f-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 9.27847ms Jun 8 11:56:01.568: INFO: Pod "pod-projected-configmaps-04d5c43a-a97f-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013169037s Jun 8 11:56:03.572: INFO: Pod "pod-projected-configmaps-04d5c43a-a97f-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017036129s STEP: Saw pod success Jun 8 11:56:03.572: INFO: Pod "pod-projected-configmaps-04d5c43a-a97f-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 11:56:03.574: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-04d5c43a-a97f-11ea-978f-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod Jun 8 11:56:03.692: INFO: Waiting for pod pod-projected-configmaps-04d5c43a-a97f-11ea-978f-0242ac110018 to disappear Jun 8 11:56:03.702: INFO: Pod pod-projected-configmaps-04d5c43a-a97f-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:56:03.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-54gsh" for this suite. 
Jun 8 11:56:09.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:56:09.778: INFO: namespace: e2e-tests-projected-54gsh, resource: bindings, ignored listing per whitelist
Jun 8 11:56:09.818: INFO: namespace e2e-tests-projected-54gsh deletion completed in 6.111566573s
• [SLOW TEST:10.390 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:56:09.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:57:09.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-jmz9f" for this suite.
Jun 8 11:57:31.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:57:31.994: INFO: namespace: e2e-tests-container-probe-jmz9f, resource: bindings, ignored listing per whitelist
Jun 8 11:57:32.015: INFO: namespace e2e-tests-container-probe-jmz9f deletion completed in 22.085789486s
• [SLOW TEST:82.197 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:57:32.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Jun 8 11:57:32.181: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:57:32.258: INFO: Waiting up to
3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-b5xbs" for this suite.
Jun 8 11:57:38.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:57:38.302: INFO: namespace: e2e-tests-kubectl-b5xbs, resource: bindings, ignored listing per whitelist
Jun 8 11:57:38.374: INFO: namespace e2e-tests-kubectl-b5xbs deletion completed in 6.111811722s
• [SLOW TEST:6.359 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Proxy server
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:57:38.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jun 8 11:57:38.456: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:57:46.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-f6vmd" for this suite.
Jun 8 11:58:08.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:58:08.711: INFO: namespace: e2e-tests-init-container-f6vmd, resource: bindings, ignored listing per whitelist
Jun 8 11:58:08.739: INFO: namespace e2e-tests-init-container-f6vmd deletion completed in 22.124700152s
• [SLOW TEST:30.365 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:58:08.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-51e7490d-a97f-11ea-978f-0242ac110018
STEP: Creating a pod to test consume secrets
Jun 8
11:58:08.839: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-51e7e8f8-a97f-11ea-978f-0242ac110018" in namespace "e2e-tests-projected-dfxb8" to be "success or failure"
Jun 8 11:58:08.855: INFO: Pod "pod-projected-secrets-51e7e8f8-a97f-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 15.969854ms
Jun 8 11:58:10.859: INFO: Pod "pod-projected-secrets-51e7e8f8-a97f-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019617681s
Jun 8 11:58:12.879: INFO: Pod "pod-projected-secrets-51e7e8f8-a97f-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039475429s
STEP: Saw pod success
Jun 8 11:58:12.879: INFO: Pod "pod-projected-secrets-51e7e8f8-a97f-11ea-978f-0242ac110018" satisfied condition "success or failure"
Jun 8 11:58:12.882: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-51e7e8f8-a97f-11ea-978f-0242ac110018 container projected-secret-volume-test:
STEP: delete the pod
Jun 8 11:58:12.905: INFO: Waiting for pod pod-projected-secrets-51e7e8f8-a97f-11ea-978f-0242ac110018 to disappear
Jun 8 11:58:12.933: INFO: Pod pod-projected-secrets-51e7e8f8-a97f-11ea-978f-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:58:12.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dfxb8" for this suite.
Jun 8 11:58:18.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:58:18.982: INFO: namespace: e2e-tests-projected-dfxb8, resource: bindings, ignored listing per whitelist
Jun 8 11:58:19.030: INFO: namespace e2e-tests-projected-dfxb8 deletion completed in 6.093035976s
• [SLOW TEST:10.291 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:58:19.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-77gcf
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-77gcf to expose endpoints map[]
Jun 8 11:58:19.173: INFO: Get endpoints failed (13.3213ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jun 8 11:58:20.177: INFO: successfully validated that service
multi-endpoint-test in namespace e2e-tests-services-77gcf exposes endpoints map[] (1.017578704s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-77gcf
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-77gcf to expose endpoints map[pod1:[100]]
Jun 8 11:58:23.349: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-77gcf exposes endpoints map[pod1:[100]] (3.165324714s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-77gcf
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-77gcf to expose endpoints map[pod1:[100] pod2:[101]]
Jun 8 11:58:27.447: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-77gcf exposes endpoints map[pod1:[100] pod2:[101]] (4.094705429s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-77gcf
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-77gcf to expose endpoints map[pod2:[101]]
Jun 8 11:58:28.467: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-77gcf exposes endpoints map[pod2:[101]] (1.016726105s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-77gcf
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-77gcf to expose endpoints map[]
Jun 8 11:58:28.484: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-77gcf exposes endpoints map[] (12.06248ms elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:58:28.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-77gcf" for this suite.
Jun 8 11:58:50.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:58:50.622: INFO: namespace: e2e-tests-services-77gcf, resource: bindings, ignored listing per whitelist
Jun 8 11:58:50.660: INFO: namespace e2e-tests-services-77gcf deletion completed in 22.11561096s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:31.630 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] DNS should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:58:50.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp
+noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-v9mlc.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-v9mlc.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-v9mlc.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search
kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-v9mlc.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-v9mlc.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-v9mlc.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 8 11:58:58.895: INFO: DNS probes using e2e-tests-dns-v9mlc/dns-test-6ae7e353-a97f-11ea-978f-0242ac110018 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 11:58:58.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-v9mlc" for this suite.
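The long probe scripts injected into the wheezy and jessie pods boil down to one pattern per DNS record: run a lookup, and write `OK` to a `/results` file only if the answer section is non-empty (the doubled `$$` in the log is how a literal `$` is escaped in a container command). A self-contained sketch of that pattern, with `lookup` as a hypothetical stand-in for `dig +noall +answer +search <name> A` so it runs without a cluster or `dig`:

```shell
# Hypothetical stand-in for: dig +notcp +noall +answer +search kubernetes.default A
# (emits a fake answer record; a real probe would query the cluster DNS)
lookup() { echo "kubernetes.default.svc.cluster.local. 30 IN A 10.96.0.1"; }

mkdir -p /tmp/results
# Same shape as the injected script: non-empty answer => record the success marker.
check="$(lookup)" && test -n "$check" && echo OK > /tmp/results/udp@kubernetes.default
cat /tmp/results/udp@kubernetes.default
```

The test then reads the marker files back out of the probe pod, which is what the "looking for the results for each expected name from probers" step above is doing.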
Jun 8 11:59:04.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 11:59:04.994: INFO: namespace: e2e-tests-dns-v9mlc, resource: bindings, ignored listing per whitelist
Jun 8 11:59:05.012: INFO: namespace e2e-tests-dns-v9mlc deletion completed in 6.076568575s
• [SLOW TEST:14.351 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 11:59:05.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Jun 8 11:59:05.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-494g5'
Jun 8 11:59:05.355: INFO: stderr: ""
Jun 8 11:59:05.355: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Jun 8 11:59:06.358: INFO: Selector matched 1 pods for map[app:redis] Jun 8 11:59:06.359: INFO: Found 0 / 1 Jun 8 11:59:07.580: INFO: Selector matched 1 pods for map[app:redis] Jun 8 11:59:07.580: INFO: Found 0 / 1 Jun 8 11:59:08.360: INFO: Selector matched 1 pods for map[app:redis] Jun 8 11:59:08.360: INFO: Found 0 / 1 Jun 8 11:59:09.359: INFO: Selector matched 1 pods for map[app:redis] Jun 8 11:59:09.359: INFO: Found 0 / 1 Jun 8 11:59:10.359: INFO: Selector matched 1 pods for map[app:redis] Jun 8 11:59:10.359: INFO: Found 1 / 1 Jun 8 11:59:10.359: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 8 11:59:10.362: INFO: Selector matched 1 pods for map[app:redis] Jun 8 11:59:10.362: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Jun 8 11:59:10.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-pf8wk redis-master --namespace=e2e-tests-kubectl-494g5' Jun 8 11:59:10.478: INFO: stderr: "" Jun 8 11:59:10.478: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 08 Jun 11:59:08.936 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 08 Jun 11:59:08.936 # Server started, Redis version 3.2.12\n1:M 08 Jun 11:59:08.936 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 08 Jun 11:59:08.936 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Jun 8 11:59:10.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-pf8wk redis-master --namespace=e2e-tests-kubectl-494g5 --tail=1' Jun 8 11:59:10.859: INFO: stderr: "" Jun 8 11:59:10.859: INFO: stdout: "1:M 08 Jun 11:59:08.936 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Jun 8 11:59:10.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-pf8wk redis-master --namespace=e2e-tests-kubectl-494g5 --limit-bytes=1' Jun 8 11:59:10.994: INFO: stderr: "" Jun 8 11:59:10.994: INFO: stdout: " " STEP: exposing timestamps Jun 8 11:59:10.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-pf8wk redis-master --namespace=e2e-tests-kubectl-494g5 --tail=1 --timestamps' Jun 8 11:59:11.101: INFO: stderr: "" Jun 8 11:59:11.101: INFO: stdout: "2020-06-08T11:59:08.937334493Z 1:M 08 Jun 11:59:08.936 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Jun 8 11:59:13.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-pf8wk redis-master --namespace=e2e-tests-kubectl-494g5 --since=1s' Jun 8 11:59:13.719: INFO: stderr: "" Jun 8 11:59:13.719: INFO: stdout: "" Jun 8 11:59:13.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-pf8wk redis-master --namespace=e2e-tests-kubectl-494g5 --since=24h' Jun 8 11:59:13.827: INFO: stderr: "" Jun 8 11:59:13.827: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 08 Jun 11:59:08.936 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 08 Jun 11:59:08.936 # Server started, Redis version 3.2.12\n1:M 08 Jun 11:59:08.936 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 08 Jun 11:59:08.936 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Jun 8 11:59:13.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-494g5' Jun 8 11:59:14.119: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 8 11:59:14.120: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Jun 8 11:59:14.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-494g5' Jun 8 11:59:14.314: INFO: stderr: "No resources found.\n" Jun 8 11:59:14.314: INFO: stdout: "" Jun 8 11:59:14.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-494g5 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 8 11:59:14.468: INFO: stderr: "" Jun 8 11:59:14.468: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:59:14.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-494g5" for this suite. 
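The filtering flags exercised in this test behave like ordinary stream operations on the container's log: `--tail=N` keeps the last N lines, and `--limit-bytes=N` truncates after the first N bytes of the stream — which is why `--limit-bytes=1` above returned a single space, the first character of the Redis ASCII-art banner. A local sketch of the same semantics, using a throwaway file (hypothetical path) in place of a pod's log stream:

```shell
# Stand-in for a container log stream (hypothetical file).
printf 'first line\nsecond line\nready on port 6379\n' > /tmp/fake-pod.log

# kubectl logs --tail=1 keeps only the last line:
tail -n 1 /tmp/fake-pod.log        # -> ready on port 6379

# kubectl logs --limit-bytes=5 keeps only the first 5 bytes:
head -c 5 /tmp/fake-pod.log        # -> first
```

`--timestamps` and `--since=DURATION` have no single-command file analogue: they rely on the RFC 3339 timestamps the runtime records per log line, which is why `--since=1s` above returned nothing (no lines written in the previous second) while `--since=24h` returned the full banner.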
Jun 8 11:59:36.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:59:36.745: INFO: namespace: e2e-tests-kubectl-494g5, resource: bindings, ignored listing per whitelist Jun 8 11:59:36.787: INFO: namespace e2e-tests-kubectl-494g5 deletion completed in 22.315998682s • [SLOW TEST:31.775 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:59:36.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-866ab6d5-a97f-11ea-978f-0242ac110018 STEP: Creating a pod to test consume secrets Jun 8 11:59:36.949: INFO: Waiting up to 5m0s for pod "pod-secrets-866b3933-a97f-11ea-978f-0242ac110018" in namespace "e2e-tests-secrets-jgxdw" to be "success or failure" Jun 8 11:59:36.967: INFO: Pod "pod-secrets-866b3933-a97f-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.250048ms Jun 8 11:59:38.972: INFO: Pod "pod-secrets-866b3933-a97f-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023017632s Jun 8 11:59:40.976: INFO: Pod "pod-secrets-866b3933-a97f-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026917194s Jun 8 11:59:42.980: INFO: Pod "pod-secrets-866b3933-a97f-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030961242s STEP: Saw pod success Jun 8 11:59:42.980: INFO: Pod "pod-secrets-866b3933-a97f-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 11:59:42.983: INFO: Trying to get logs from node hunter-worker pod pod-secrets-866b3933-a97f-11ea-978f-0242ac110018 container secret-volume-test: STEP: delete the pod Jun 8 11:59:43.023: INFO: Waiting for pod pod-secrets-866b3933-a97f-11ea-978f-0242ac110018 to disappear Jun 8 11:59:43.043: INFO: Pod pod-secrets-866b3933-a97f-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 11:59:43.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-jgxdw" for this suite. 
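This Secrets test mounts the same Secret into one pod through more than one volume. A minimal sketch of that shape — the names and image are illustrative, not the generated e2e spec:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: multi-volume-secret         # illustrative name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-sketch          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                  # assumption: any image that can cat a file
    command: ["cat", "/etc/secret-volume-1/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  volumes:
  - name: secret-volume-1
    secret:
      secretName: multi-volume-secret
  - name: secret-volume-2
    secret:
      secretName: multi-volume-secret
```

The pod exits after printing the secret value, which is why the test waits for the "success or failure" condition and then reads the container logs.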
Jun 8 11:59:49.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 11:59:49.262: INFO: namespace: e2e-tests-secrets-jgxdw, resource: bindings, ignored listing per whitelist Jun 8 11:59:49.270: INFO: namespace e2e-tests-secrets-jgxdw deletion completed in 6.22342253s • [SLOW TEST:12.483 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 11:59:49.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-8e056d3b-a97f-11ea-978f-0242ac110018 STEP: Creating configMap with name cm-test-opt-upd-8e056d99-a97f-11ea-978f-0242ac110018 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-8e056d3b-a97f-11ea-978f-0242ac110018 STEP: Updating configmap cm-test-opt-upd-8e056d99-a97f-11ea-978f-0242ac110018 STEP: Creating configMap with name cm-test-opt-create-8e056db8-a97f-11ea-978f-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:01:16.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-x7s8d" for this suite. Jun 8 12:01:38.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:01:38.771: INFO: namespace: e2e-tests-configmap-x7s8d, resource: bindings, ignored listing per whitelist Jun 8 12:01:38.820: INFO: namespace e2e-tests-configmap-x7s8d deletion completed in 22.090976588s • [SLOW TEST:109.550 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:01:38.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-42lnf Jun 8 
12:01:42.979: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-42lnf STEP: checking the pod's current state and verifying that restartCount is present Jun 8 12:01:42.981: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:05:44.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-42lnf" for this suite. Jun 8 12:05:50.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:05:50.857: INFO: namespace: e2e-tests-container-probe-42lnf, resource: bindings, ignored listing per whitelist Jun 8 12:05:50.865: INFO: namespace e2e-tests-container-probe-42lnf deletion completed in 6.086174931s • [SLOW TEST:252.045 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:05:50.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:06:24.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-9g4v8" for this suite. 
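The three container names in this runtime test encode the pod restart policy under test — `terminate-cmd-rpa` (restartPolicy: Always), `terminate-cmd-rpof` (OnFailure), `terminate-cmd-rpn` (Never) — and for each the test checks RestartCount, Phase, the Ready condition, and State after the container exits. A sketch of the Never case, where a single exit leaves the pod in a terminal phase instead of triggering a restart (name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-sketch        # illustrative name
spec:
  restartPolicy: Never              # rpa -> Always, rpof -> OnFailure, rpn -> Never
  containers:
  - name: terminate-cmd
    image: busybox                  # assumption
    command: ["sh", "-c", "exit 0"] # exits once; with Never, no restart follows
```

With `restartPolicy: Never` and exit code 0, the expected observations are RestartCount 0, Phase Succeeded, Ready false, and a Terminated state.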
Jun 8 12:06:30.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:06:30.741: INFO: namespace: e2e-tests-container-runtime-9g4v8, resource: bindings, ignored listing per whitelist Jun 8 12:06:30.773: INFO: namespace e2e-tests-container-runtime-9g4v8 deletion completed in 6.095248077s • [SLOW TEST:39.908 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:06:30.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the 
notification Jun 8 12:06:30.901: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-4l78z,SelfLink:/api/v1/namespaces/e2e-tests-watch-4l78z/configmaps/e2e-watch-test-configmap-a,UID:7d282b40-a980-11ea-99e8-0242ac110002,ResourceVersion:14870327,Generation:0,CreationTimestamp:2020-06-08 12:06:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 8 12:06:30.901: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-4l78z,SelfLink:/api/v1/namespaces/e2e-tests-watch-4l78z/configmaps/e2e-watch-test-configmap-a,UID:7d282b40-a980-11ea-99e8-0242ac110002,ResourceVersion:14870327,Generation:0,CreationTimestamp:2020-06-08 12:06:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jun 8 12:06:40.909: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-4l78z,SelfLink:/api/v1/namespaces/e2e-tests-watch-4l78z/configmaps/e2e-watch-test-configmap-a,UID:7d282b40-a980-11ea-99e8-0242ac110002,ResourceVersion:14870347,Generation:0,CreationTimestamp:2020-06-08 12:06:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jun 8 12:06:40.910: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-4l78z,SelfLink:/api/v1/namespaces/e2e-tests-watch-4l78z/configmaps/e2e-watch-test-configmap-a,UID:7d282b40-a980-11ea-99e8-0242ac110002,ResourceVersion:14870347,Generation:0,CreationTimestamp:2020-06-08 12:06:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jun 8 12:06:50.916: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-4l78z,SelfLink:/api/v1/namespaces/e2e-tests-watch-4l78z/configmaps/e2e-watch-test-configmap-a,UID:7d282b40-a980-11ea-99e8-0242ac110002,ResourceVersion:14870367,Generation:0,CreationTimestamp:2020-06-08 12:06:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 8 12:06:50.916: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-4l78z,SelfLink:/api/v1/namespaces/e2e-tests-watch-4l78z/configmaps/e2e-watch-test-configmap-a,UID:7d282b40-a980-11ea-99e8-0242ac110002,ResourceVersion:14870367,Generation:0,CreationTimestamp:2020-06-08 12:06:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jun 8 12:07:00.925: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-4l78z,SelfLink:/api/v1/namespaces/e2e-tests-watch-4l78z/configmaps/e2e-watch-test-configmap-a,UID:7d282b40-a980-11ea-99e8-0242ac110002,ResourceVersion:14870387,Generation:0,CreationTimestamp:2020-06-08 12:06:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 8 12:07:00.925: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-4l78z,SelfLink:/api/v1/namespaces/e2e-tests-watch-4l78z/configmaps/e2e-watch-test-configmap-a,UID:7d282b40-a980-11ea-99e8-0242ac110002,ResourceVersion:14870387,Generation:0,CreationTimestamp:2020-06-08 12:06:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jun 8 12:07:10.986: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-4l78z,SelfLink:/api/v1/namespaces/e2e-tests-watch-4l78z/configmaps/e2e-watch-test-configmap-b,UID:9505358f-a980-11ea-99e8-0242ac110002,ResourceVersion:14870407,Generation:0,CreationTimestamp:2020-06-08 12:07:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 8 12:07:10.986: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-4l78z,SelfLink:/api/v1/namespaces/e2e-tests-watch-4l78z/configmaps/e2e-watch-test-configmap-b,UID:9505358f-a980-11ea-99e8-0242ac110002,ResourceVersion:14870407,Generation:0,CreationTimestamp:2020-06-08 12:07:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jun 8 12:07:20.992: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-4l78z,SelfLink:/api/v1/namespaces/e2e-tests-watch-4l78z/configmaps/e2e-watch-test-configmap-b,UID:9505358f-a980-11ea-99e8-0242ac110002,ResourceVersion:14870427,Generation:0,CreationTimestamp:2020-06-08 12:07:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 8 12:07:20.992: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-4l78z,SelfLink:/api/v1/namespaces/e2e-tests-watch-4l78z/configmaps/e2e-watch-test-configmap-b,UID:9505358f-a980-11ea-99e8-0242ac110002,ResourceVersion:14870427,Generation:0,CreationTimestamp:2020-06-08 12:07:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:07:30.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-4l78z" for this suite. 
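Each watcher in this test is registered with a label selector (`watch-this-configmap=multiple-watchers-A`, `-B`, or both), so it receives ADDED/MODIFIED/DELETED events only for ConfigMaps carrying a matching label — which is why every event above arrives twice, once on the per-label watch and once on the A-or-B watch. A sketch of the watched object; only the label determines event delivery, and `data.mutation` is the counter the log shows being bumped on each modification:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  labels:
    watch-this-configmap: multiple-watchers-A
data:
  mutation: "1"                     # incremented on each MODIFIED event in the log
```

An equivalent selector-scoped watch can be opened from the CLI with `kubectl get configmap -l watch-this-configmap=multiple-watchers-A --watch`.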
Jun 8 12:07:37.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:07:37.117: INFO: namespace: e2e-tests-watch-4l78z, resource: bindings, ignored listing per whitelist Jun 8 12:07:37.127: INFO: namespace e2e-tests-watch-4l78z deletion completed in 6.12969649s • [SLOW TEST:66.353 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:07:37.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-dtm54 I0608 12:07:37.288687 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-dtm54, replica count: 1 I0608 12:07:38.339131 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0608 12:07:39.339351 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 0 runningButNotReady
I0608 12:07:40.339534 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jun 8 12:07:40.532: INFO: Created: latency-svc-lmkdg
Jun 8 12:07:40.567: INFO: Got endpoints: latency-svc-lmkdg [127.738161ms]
Jun 8 12:07:40.702: INFO: Created: latency-svc-svn7t
Jun 8 12:07:40.705: INFO: Got endpoints: latency-svc-svn7t [137.447708ms]
Jun 8 12:07:40.758: INFO: Created: latency-svc-lkct9
Jun 8 12:07:40.783: INFO: Got endpoints: latency-svc-lkct9 [215.727772ms]
Jun 8 12:07:40.854: INFO: Created: latency-svc-tnbwq
Jun 8 12:07:40.905: INFO: Created: latency-svc-6x7l8
Jun 8 12:07:40.907: INFO: Got endpoints: latency-svc-tnbwq [339.333823ms]
Jun 8 12:07:40.918: INFO: Got endpoints: latency-svc-6x7l8 [351.07874ms]
Jun 8 12:07:40.949: INFO: Created: latency-svc-2vmtg
Jun 8 12:07:41.014: INFO: Got endpoints: latency-svc-2vmtg [446.635878ms]
Jun 8 12:07:41.047: INFO: Created: latency-svc-wbjcv
Jun 8 12:07:41.057: INFO: Got endpoints: latency-svc-wbjcv [489.344959ms]
Jun 8 12:07:41.075: INFO: Created: latency-svc-pqthl
Jun 8 12:07:41.093: INFO: Got endpoints: latency-svc-pqthl [525.754172ms]
Jun 8 12:07:41.160: INFO: Created: latency-svc-tzm77
Jun 8 12:07:41.175: INFO: Got endpoints: latency-svc-tzm77 [607.225337ms]
Jun 8 12:07:41.217: INFO: Created: latency-svc-mp9vx
Jun 8 12:07:41.238: INFO: Got endpoints: latency-svc-mp9vx [670.734731ms]
Jun 8 12:07:41.295: INFO: Created: latency-svc-5v9rc
Jun 8 12:07:41.301: INFO: Got endpoints: latency-svc-5v9rc [733.695123ms]
Jun 8 12:07:41.335: INFO: Created: latency-svc-9p4x9
Jun 8 12:07:41.350: INFO: Got endpoints: latency-svc-9p4x9 [782.491578ms]
Jun 8 12:07:41.377: INFO: Created: latency-svc-sdwwf
Jun 8 12:07:41.463: INFO: Got endpoints: latency-svc-sdwwf [895.089668ms]
Jun 8 12:07:41.467: INFO: Created: latency-svc-csq5v
Jun 8 12:07:41.488: INFO: Got endpoints: latency-svc-csq5v [920.235858ms]
Jun 8 12:07:41.540: INFO: Created: latency-svc-n6rf9
Jun 8 12:07:41.548: INFO: Got endpoints: latency-svc-n6rf9 [981.011624ms]
Jun 8 12:07:41.593: INFO: Created: latency-svc-8g2rq
Jun 8 12:07:41.602: INFO: Got endpoints: latency-svc-8g2rq [1.034599082s]
Jun 8 12:07:41.628: INFO: Created: latency-svc-6lkx8
Jun 8 12:07:41.639: INFO: Got endpoints: latency-svc-6lkx8 [933.807845ms]
Jun 8 12:07:41.654: INFO: Created: latency-svc-8m928
Jun 8 12:07:41.663: INFO: Got endpoints: latency-svc-8m928 [879.75966ms]
Jun 8 12:07:41.712: INFO: Created: latency-svc-wmbc2
Jun 8 12:07:41.720: INFO: Got endpoints: latency-svc-wmbc2 [812.676365ms]
Jun 8 12:07:41.750: INFO: Created: latency-svc-hqmrs
Jun 8 12:07:41.756: INFO: Got endpoints: latency-svc-hqmrs [838.002394ms]
Jun 8 12:07:41.779: INFO: Created: latency-svc-gggpv
Jun 8 12:07:41.793: INFO: Got endpoints: latency-svc-gggpv [779.158155ms]
Jun 8 12:07:41.857: INFO: Created: latency-svc-6jl88
Jun 8 12:07:41.870: INFO: Got endpoints: latency-svc-6jl88 [812.61529ms]
Jun 8 12:07:41.912: INFO: Created: latency-svc-rlqpk
Jun 8 12:07:41.926: INFO: Got endpoints: latency-svc-rlqpk [832.697553ms]
Jun 8 12:07:41.995: INFO: Created: latency-svc-g477c
Jun 8 12:07:41.998: INFO: Got endpoints: latency-svc-g477c [823.014408ms]
Jun 8 12:07:42.067: INFO: Created: latency-svc-vn2xf
Jun 8 12:07:42.082: INFO: Got endpoints: latency-svc-vn2xf [843.361663ms]
Jun 8 12:07:42.139: INFO: Created: latency-svc-ntrft
Jun 8 12:07:42.153: INFO: Got endpoints: latency-svc-ntrft [852.044143ms]
Jun 8 12:07:42.200: INFO: Created: latency-svc-7d5lx
Jun 8 12:07:42.214: INFO: Got endpoints: latency-svc-7d5lx [864.198276ms]
Jun 8 12:07:42.289: INFO: Created: latency-svc-24st6
Jun 8 12:07:42.292: INFO: Got endpoints: latency-svc-24st6 [829.681239ms]
Jun 8 12:07:42.357: INFO: Created: latency-svc-j2jrq
Jun 8 12:07:42.383: INFO: Got endpoints: latency-svc-j2jrq [894.999918ms]
Jun 8 12:07:42.553: INFO: Created: latency-svc-mjmqk
Jun 8 12:07:42.582: INFO: Got endpoints: latency-svc-mjmqk [1.033997898s]
Jun 8 12:07:42.609: INFO: Created: latency-svc-9s4qq
Jun 8 12:07:42.623: INFO: Got endpoints: latency-svc-9s4qq [1.020973683s]
Jun 8 12:07:42.690: INFO: Created: latency-svc-9d7rm
Jun 8 12:07:42.875: INFO: Got endpoints: latency-svc-9d7rm [1.236734618s]
Jun 8 12:07:42.970: INFO: Created: latency-svc-djlll
Jun 8 12:07:43.061: INFO: Got endpoints: latency-svc-djlll [1.397712137s]
Jun 8 12:07:43.085: INFO: Created: latency-svc-4c7tk
Jun 8 12:07:43.108: INFO: Got endpoints: latency-svc-4c7tk [1.388366262s]
Jun 8 12:07:43.143: INFO: Created: latency-svc-mrcw9
Jun 8 12:07:43.294: INFO: Got endpoints: latency-svc-mrcw9 [1.537792763s]
Jun 8 12:07:43.297: INFO: Created: latency-svc-6fjbp
Jun 8 12:07:43.314: INFO: Got endpoints: latency-svc-6fjbp [1.520694688s]
Jun 8 12:07:43.377: INFO: Created: latency-svc-5wkzb
Jun 8 12:07:43.392: INFO: Got endpoints: latency-svc-5wkzb [1.522149869s]
Jun 8 12:07:43.458: INFO: Created: latency-svc-8qfgb
Jun 8 12:07:43.470: INFO: Got endpoints: latency-svc-8qfgb [1.544173693s]
Jun 8 12:07:43.516: INFO: Created: latency-svc-k7bl8
Jun 8 12:07:43.570: INFO: Got endpoints: latency-svc-k7bl8 [1.571781257s]
Jun 8 12:07:43.595: INFO: Created: latency-svc-jz9m7
Jun 8 12:07:43.627: INFO: Got endpoints: latency-svc-jz9m7 [1.545133578s]
Jun 8 12:07:43.655: INFO: Created: latency-svc-z66qt
Jun 8 12:07:43.779: INFO: Got endpoints: latency-svc-z66qt [1.625920307s]
Jun 8 12:07:43.806: INFO: Created: latency-svc-r7v4t
Jun 8 12:07:43.859: INFO: Got endpoints: latency-svc-r7v4t [1.644916834s]
Jun 8 12:07:44.007: INFO: Created: latency-svc-ztknj
Jun 8 12:07:44.024: INFO: Got endpoints: latency-svc-ztknj [1.732095604s]
Jun 8 12:07:44.080: INFO: Created: latency-svc-h7cmt
Jun 8 12:07:44.247: INFO: Got endpoints: latency-svc-h7cmt [1.864315242s]
Jun 8 12:07:44.303: INFO: Created: latency-svc-2pj2r
Jun 8 12:07:44.456: INFO: Got endpoints: latency-svc-2pj2r [1.873404858s]
Jun 8 12:07:44.488: INFO: Created: latency-svc-wbg7t
Jun 8 12:07:44.527: INFO: Got endpoints: latency-svc-wbg7t [1.903860866s]
Jun 8 12:07:44.733: INFO: Created: latency-svc-wrpfj
Jun 8 12:07:44.899: INFO: Got endpoints: latency-svc-wrpfj [2.023197149s]
Jun 8 12:07:44.900: INFO: Created: latency-svc-kwbnt
Jun 8 12:07:44.903: INFO: Got endpoints: latency-svc-kwbnt [1.84179288s]
Jun 8 12:07:45.109: INFO: Created: latency-svc-p5p8f
Jun 8 12:07:45.170: INFO: Got endpoints: latency-svc-p5p8f [2.061476908s]
Jun 8 12:07:45.439: INFO: Created: latency-svc-dqzrd
Jun 8 12:07:45.442: INFO: Got endpoints: latency-svc-dqzrd [2.147819284s]
Jun 8 12:07:45.667: INFO: Created: latency-svc-l6fx8
Jun 8 12:07:45.733: INFO: Got endpoints: latency-svc-l6fx8 [2.419060615s]
Jun 8 12:07:45.930: INFO: Created: latency-svc-55sr7
Jun 8 12:07:45.967: INFO: Got endpoints: latency-svc-55sr7 [2.57490914s]
Jun 8 12:07:46.170: INFO: Created: latency-svc-hmn26
Jun 8 12:07:46.224: INFO: Got endpoints: latency-svc-hmn26 [2.753775818s]
Jun 8 12:07:46.382: INFO: Created: latency-svc-nx46c
Jun 8 12:07:46.518: INFO: Got endpoints: latency-svc-nx46c [2.948234855s]
Jun 8 12:07:46.729: INFO: Created: latency-svc-kjvlx
Jun 8 12:07:46.770: INFO: Got endpoints: latency-svc-kjvlx [3.143264161s]
Jun 8 12:07:46.956: INFO: Created: latency-svc-fgmqp
Jun 8 12:07:46.986: INFO: Got endpoints: latency-svc-fgmqp [3.206919509s]
Jun 8 12:07:48.032: INFO: Created: latency-svc-zrjfc
Jun 8 12:07:48.756: INFO: Got endpoints: latency-svc-zrjfc [4.89658113s]
Jun 8 12:07:48.761: INFO: Created: latency-svc-9tkmd
Jun 8 12:07:48.966: INFO: Got endpoints: latency-svc-9tkmd [4.941642519s]
Jun 8 12:07:48.967: INFO: Created: latency-svc-nljmg
Jun 8 12:07:49.227: INFO: Got endpoints: latency-svc-nljmg [4.979866694s]
Jun 8 12:07:49.493: INFO: Created: latency-svc-rvpq6
Jun 8 12:07:49.557: INFO: Got endpoints: latency-svc-rvpq6 [5.100932029s]
Jun 8 12:07:49.690: INFO: Created: latency-svc-cmllm
Jun 8 12:07:49.707: INFO: Got endpoints: latency-svc-cmllm [5.179713761s]
Jun 8 12:07:49.757: INFO: Created: latency-svc-lpj9d
Jun 8 12:07:49.869: INFO: Got endpoints: latency-svc-lpj9d [4.970631037s]
Jun 8 12:07:49.877: INFO: Created: latency-svc-qkhmp
Jun 8 12:07:49.941: INFO: Got endpoints: latency-svc-qkhmp [5.03850965s]
Jun 8 12:07:50.151: INFO: Created: latency-svc-gm8hl
Jun 8 12:07:50.153: INFO: Got endpoints: latency-svc-gm8hl [4.983321294s]
Jun 8 12:07:50.405: INFO: Created: latency-svc-5xflr
Jun 8 12:07:50.469: INFO: Got endpoints: latency-svc-5xflr [5.027160377s]
Jun 8 12:07:50.784: INFO: Created: latency-svc-pdnzx
Jun 8 12:07:51.068: INFO: Created: latency-svc-5tstz
Jun 8 12:07:51.464: INFO: Got endpoints: latency-svc-pdnzx [5.731000091s]
Jun 8 12:07:51.470: INFO: Created: latency-svc-6x6sh
Jun 8 12:07:51.482: INFO: Got endpoints: latency-svc-6x6sh [5.258163892s]
Jun 8 12:07:51.787: INFO: Got endpoints: latency-svc-5tstz [5.819957012s]
Jun 8 12:07:51.788: INFO: Created: latency-svc-5g62s
Jun 8 12:07:51.791: INFO: Got endpoints: latency-svc-5g62s [5.272707164s]
Jun 8 12:07:51.989: INFO: Created: latency-svc-prhf8
Jun 8 12:07:52.004: INFO: Got endpoints: latency-svc-prhf8 [5.233348323s]
Jun 8 12:07:52.278: INFO: Created: latency-svc-296dk
Jun 8 12:07:52.280: INFO: Got endpoints: latency-svc-296dk [5.293760178s]
Jun 8 12:07:52.499: INFO: Created: latency-svc-9g6jr
Jun 8 12:07:52.588: INFO: Got endpoints: latency-svc-9g6jr [3.831838713s]
Jun 8 12:07:52.720: INFO: Created: latency-svc-ppc7g
Jun 8 12:07:52.811: INFO: Got endpoints: latency-svc-ppc7g [3.844636348s]
Jun 8 12:07:52.922: INFO: Created: latency-svc-kpv8n
Jun 8 12:07:52.989: INFO: Got endpoints: latency-svc-kpv8n [3.761623541s]
Jun 8 12:07:53.212: INFO: Created: latency-svc-ztrgk
Jun 8 12:07:53.215: INFO: Got endpoints: latency-svc-ztrgk [3.658066171s]
Jun 8 12:07:53.457: INFO: Created: latency-svc-clxzw
Jun 8 12:07:53.477: INFO: Got endpoints: latency-svc-clxzw [3.770019554s]
Jun 8 12:07:53.715: INFO: Created: latency-svc-grszw
Jun 8 12:07:54.074: INFO: Got endpoints: latency-svc-grszw [4.204250932s]
Jun 8 12:07:54.078: INFO: Created: latency-svc-ddvpn
Jun 8 12:07:54.125: INFO: Got endpoints: latency-svc-ddvpn [4.183836511s]
Jun 8 12:07:54.283: INFO: Created: latency-svc-smz2s
Jun 8 12:07:54.329: INFO: Got endpoints: latency-svc-smz2s [4.175992665s]
Jun 8 12:07:54.595: INFO: Created: latency-svc-r7qkv
Jun 8 12:07:54.600: INFO: Got endpoints: latency-svc-r7qkv [4.130088127s]
Jun 8 12:07:54.892: INFO: Created: latency-svc-mc5md
Jun 8 12:07:55.176: INFO: Got endpoints: latency-svc-mc5md [3.711493083s]
Jun 8 12:07:55.439: INFO: Created: latency-svc-fsf8f
Jun 8 12:07:55.462: INFO: Got endpoints: latency-svc-fsf8f [3.979391482s]
Jun 8 12:07:55.758: INFO: Created: latency-svc-v86qk
Jun 8 12:07:55.760: INFO: Got endpoints: latency-svc-v86qk [3.973178952s]
Jun 8 12:07:56.020: INFO: Created: latency-svc-dn2sx
Jun 8 12:07:56.049: INFO: Got endpoints: latency-svc-dn2sx [4.258459959s]
Jun 8 12:07:56.092: INFO: Created: latency-svc-wfjx2
Jun 8 12:07:56.427: INFO: Got endpoints: latency-svc-wfjx2 [4.422753132s]
Jun 8 12:07:56.492: INFO: Created: latency-svc-ttq46
Jun 8 12:07:56.684: INFO: Got endpoints: latency-svc-ttq46 [4.404370911s]
Jun 8 12:07:56.900: INFO: Created: latency-svc-v6bjk
Jun 8 12:07:56.938: INFO: Got endpoints: latency-svc-v6bjk [4.349911183s]
Jun 8 12:07:57.152: INFO: Created: latency-svc-lrg56
Jun 8 12:07:57.182: INFO: Got endpoints: latency-svc-lrg56 [4.366550805s]
Jun 8 12:07:57.243: INFO: Created: latency-svc-d7nsf
Jun 8 12:07:57.330: INFO: Got endpoints: latency-svc-d7nsf [4.341489655s]
Jun 8 12:07:57.430: INFO: Created: latency-svc-vkr9c
Jun 8 12:07:57.540: INFO: Got endpoints: latency-svc-vkr9c [4.324902997s]
Jun 8 12:07:57.846: INFO: Created: latency-svc-kcjhn
Jun 8 12:07:57.850: INFO: Got endpoints: latency-svc-kcjhn [4.373388819s]
Jun 8 12:07:58.206: INFO: Created: latency-svc-2tl9x
Jun 8 12:07:58.232: INFO: Got endpoints: latency-svc-2tl9x [4.158206265s]
Jun 8 12:07:58.745: INFO: Created: latency-svc-km8m9
Jun 8 12:07:58.753: INFO: Got endpoints: latency-svc-km8m9 [4.628190782s]
Jun 8 12:07:59.062: INFO: Created: latency-svc-2x7rp
Jun 8 12:07:59.410: INFO: Got endpoints: latency-svc-2x7rp [5.08031693s]
Jun 8 12:07:59.768: INFO: Created: latency-svc-q7jd9
Jun 8 12:07:59.814: INFO: Got endpoints: latency-svc-q7jd9 [5.214517045s]
Jun 8 12:08:00.416: INFO: Created: latency-svc-hrdpd
Jun 8 12:08:00.503: INFO: Got endpoints: latency-svc-hrdpd [5.327480851s]
Jun 8 12:08:00.840: INFO: Created: latency-svc-5qz2f
Jun 8 12:08:00.911: INFO: Got endpoints: latency-svc-5qz2f [5.448999407s]
Jun 8 12:08:01.361: INFO: Created: latency-svc-xfvfn
Jun 8 12:08:01.456: INFO: Got endpoints: latency-svc-xfvfn [5.696007596s]
Jun 8 12:08:01.819: INFO: Created: latency-svc-pwzh2
Jun 8 12:08:02.209: INFO: Created: latency-svc-sf4xp
Jun 8 12:08:02.557: INFO: Got endpoints: latency-svc-pwzh2 [6.507190139s]
Jun 8 12:08:02.557: INFO: Created: latency-svc-kbpd7
Jun 8 12:08:02.678: INFO: Got endpoints: latency-svc-kbpd7 [5.993339596s]
Jun 8 12:08:02.727: INFO: Got endpoints: latency-svc-sf4xp [6.299928535s]
Jun 8 12:08:02.918: INFO: Created: latency-svc-sgwr2
Jun 8 12:08:02.922: INFO: Got endpoints: latency-svc-sgwr2 [5.984702106s]
Jun 8 12:08:03.190: INFO: Created: latency-svc-752wj
Jun 8 12:08:03.360: INFO: Got endpoints: latency-svc-752wj [6.178347061s]
Jun 8 12:08:03.733: INFO: Created: latency-svc-8qx2f
Jun 8 12:08:03.737: INFO: Got endpoints: latency-svc-8qx2f [6.406242711s]
Jun 8 12:08:04.110: INFO: Created: latency-svc-v7z6l
Jun 8 12:08:04.155: INFO: Got endpoints: latency-svc-v7z6l [6.614588628s]
Jun 8 12:08:04.709: INFO: Created: latency-svc-8sjhw
Jun 8 12:08:04.765: INFO: Got endpoints: latency-svc-8sjhw [6.914459377s]
Jun 8 12:08:05.452: INFO: Created: latency-svc-4gdx8
Jun 8 12:08:05.470: INFO: Got endpoints: latency-svc-4gdx8 [7.237414246s]
Jun 8 12:08:05.907: INFO: Created: latency-svc-wqqmw
Jun 8 12:08:06.151: INFO: Got endpoints: latency-svc-wqqmw [7.397551126s]
Jun 8 12:08:07.743: INFO: Created: latency-svc-l42rr
Jun 8 12:08:08.149: INFO: Got endpoints: latency-svc-l42rr [8.738929333s]
Jun 8 12:08:08.543: INFO: Created: latency-svc-hsff7
Jun 8 12:08:08.546: INFO: Got endpoints: latency-svc-hsff7 [8.731461673s]
Jun 8 12:08:09.524: INFO: Created: latency-svc-45fqb
Jun 8 12:08:09.840: INFO: Got endpoints: latency-svc-45fqb [9.336366062s]
Jun 8 12:08:09.844: INFO: Created: latency-svc-t2vxq
Jun 8 12:08:09.851: INFO: Got endpoints: latency-svc-t2vxq [8.940445814s]
Jun 8 12:08:10.411: INFO: Created: latency-svc-8s9hx
Jun 8 12:08:10.702: INFO: Got endpoints: latency-svc-8s9hx [9.246109328s]
Jun 8 12:08:10.771: INFO: Created: latency-svc-hfmwq
Jun 8 12:08:11.002: INFO: Got endpoints: latency-svc-hfmwq [8.445142567s]
Jun 8 12:08:11.212: INFO: Created: latency-svc-tbx29
Jun 8 12:08:11.236: INFO: Got endpoints: latency-svc-tbx29 [8.557743566s]
Jun 8 12:08:11.492: INFO: Created: latency-svc-hhlvr
Jun 8 12:08:11.571: INFO: Got endpoints: latency-svc-hhlvr [8.844655337s]
Jun 8 12:08:11.708: INFO: Created: latency-svc-cb99l
Jun 8 12:08:11.941: INFO: Got endpoints: latency-svc-cb99l [9.018999331s]
Jun 8 12:08:12.002: INFO: Created: latency-svc-cmcnm
Jun 8 12:08:12.265: INFO: Got endpoints: latency-svc-cmcnm [8.904350054s]
Jun 8 12:08:12.628: INFO: Created: latency-svc-s7ngf
Jun 8 12:08:12.858: INFO: Got endpoints: latency-svc-s7ngf [9.12139034s]
Jun 8 12:08:13.146: INFO: Created: latency-svc-c5tnj
Jun 8 12:08:13.484: INFO: Got endpoints: latency-svc-c5tnj [9.329151339s]
Jun 8 12:08:13.484: INFO: Created: latency-svc-mzhh4
Jun 8 12:08:13.750: INFO: Got endpoints: latency-svc-mzhh4 [8.98514835s]
Jun 8 12:08:13.821: INFO: Created: latency-svc-t6b27
Jun 8 12:08:14.008: INFO: Got endpoints: latency-svc-t6b27 [8.538360764s]
Jun 8 12:08:14.020: INFO: Created: latency-svc-rzjfc
Jun 8 12:08:14.095: INFO: Got endpoints: latency-svc-rzjfc [7.94418112s]
Jun 8 12:08:14.296: INFO: Created: latency-svc-jqzj6
Jun 8 12:08:14.366: INFO: Got endpoints: latency-svc-jqzj6 [6.217151887s]
Jun 8 12:08:14.584: INFO: Created: latency-svc-qw899
Jun 8 12:08:14.605: INFO: Got endpoints: latency-svc-qw899 [6.059067617s]
Jun 8 12:08:14.949: INFO: Created: latency-svc-l8lz5
Jun 8 12:08:14.953: INFO: Got endpoints: latency-svc-l8lz5 [5.113436567s]
Jun 8 12:08:15.235: INFO: Created: latency-svc-kwxtz
Jun 8 12:08:15.282: INFO: Got endpoints: latency-svc-kwxtz [5.431164229s]
Jun 8 12:08:15.518: INFO: Created: latency-svc-5hfjp
Jun 8 12:08:15.582: INFO: Got endpoints: latency-svc-5hfjp [4.879778962s]
Jun 8 12:08:15.812: INFO: Created: latency-svc-t5ls6
Jun 8 12:08:15.894: INFO: Got endpoints: latency-svc-t5ls6 [4.89202861s]
Jun 8 12:08:16.508: INFO: Created: latency-svc-xxp5l
Jun 8 12:08:16.511: INFO: Got endpoints: latency-svc-xxp5l [5.275046576s]
Jun 8 12:08:17.130: INFO: Created: latency-svc-xknpk
Jun 8 12:08:17.216: INFO: Got endpoints: latency-svc-xknpk [5.644959825s]
Jun 8 12:08:17.757: INFO: Created: latency-svc-rg4j7
Jun 8 12:08:17.761: INFO: Got endpoints: latency-svc-rg4j7 [5.81940031s]
Jun 8 12:08:18.092: INFO: Created: latency-svc-nmgjr
Jun 8 12:08:18.403: INFO: Got endpoints: latency-svc-nmgjr [6.137907396s]
Jun 8 12:08:18.657: INFO: Created: latency-svc-rtqzv
Jun 8 12:08:18.858: INFO: Got endpoints: latency-svc-rtqzv [6.000109598s]
Jun 8 12:08:18.866: INFO: Created: latency-svc-sn6nm
Jun 8 12:08:18.940: INFO: Got endpoints: latency-svc-sn6nm [5.456010852s]
Jun 8 12:08:19.122: INFO: Created: latency-svc-pd6lr
Jun 8 12:08:19.126: INFO: Got endpoints: latency-svc-pd6lr [5.375562357s]
Jun 8 12:08:19.405: INFO: Created: latency-svc-sqxl6
Jun 8 12:08:19.406: INFO: Got endpoints: latency-svc-sqxl6 [5.39844684s]
Jun 8 12:08:20.516: INFO: Created: latency-svc-rwmpl
Jun 8 12:08:20.864: INFO: Got endpoints: latency-svc-rwmpl [6.768664967s]
Jun 8 12:08:20.947: INFO: Created: latency-svc-tn25g
Jun 8 12:08:21.709: INFO: Got endpoints: latency-svc-tn25g [7.343169962s]
Jun 8 12:08:22.395: INFO: Created: latency-svc-k27jv
Jun 8 12:08:22.882: INFO: Got endpoints: latency-svc-k27jv [8.277435232s]
Jun 8 12:08:23.254: INFO: Created: latency-svc-qqr9k
Jun 8 12:08:23.256: INFO: Got endpoints: latency-svc-qqr9k [8.302829836s]
Jun 8 12:08:23.774: INFO: Created: latency-svc-qxxkt
Jun 8 12:08:24.242: INFO: Created: latency-svc-nxs72
Jun 8 12:08:24.505: INFO: Got endpoints: latency-svc-qxxkt [9.222665763s]
Jun 8 12:08:24.661: INFO: Created: latency-svc-2w76n
Jun 8 12:08:24.734: INFO: Got endpoints: latency-svc-2w76n [8.840242869s]
Jun 8 12:08:25.001: INFO: Got endpoints: latency-svc-nxs72 [9.419212458s]
Jun 8 12:08:25.093: INFO: Created: latency-svc-rfjjp
Jun 8 12:08:25.386: INFO: Got endpoints: latency-svc-rfjjp [8.875062398s]
Jun 8 12:08:25.600: INFO: Created: latency-svc-lvgjr
Jun 8 12:08:25.631: INFO: Got endpoints: latency-svc-lvgjr [8.414927321s]
Jun 8 12:08:25.852: INFO: Created: latency-svc-lfw2m
Jun 8 12:08:25.950: INFO: Got endpoints: latency-svc-lfw2m [8.188615529s]
Jun 8 12:08:26.211: INFO: Created: latency-svc-r7wjk
Jun 8 12:08:26.294: INFO: Got endpoints: latency-svc-r7wjk [7.890936695s]
Jun 8 12:08:26.426: INFO: Created: latency-svc-kzf8z
Jun 8 12:08:26.466: INFO: Got endpoints: latency-svc-kzf8z [7.607651268s]
Jun 8 12:08:26.619: INFO: Created: latency-svc-27ft2
Jun 8 12:08:26.653: INFO: Got endpoints: latency-svc-27ft2 [7.713029024s]
Jun 8 12:08:26.846: INFO: Created: latency-svc-52v79
Jun 8 12:08:26.922: INFO: Got endpoints: latency-svc-52v79 [7.796582434s]
Jun 8 12:08:27.078: INFO: Created: latency-svc-z7msc
Jun 8 12:08:27.114: INFO: Got endpoints: latency-svc-z7msc [7.707407106s]
Jun 8 12:08:27.283: INFO: Created: latency-svc-59l2n
Jun 8 12:08:27.285: INFO: Got endpoints: latency-svc-59l2n [6.421321471s]
Jun 8 12:08:27.631: INFO: Created: latency-svc-f9gf2
Jun 8 12:08:27.642: INFO: Got endpoints: latency-svc-f9gf2 [5.9329545s]
Jun 8 12:08:27.850: INFO: Created: latency-svc-cv6r7
Jun 8 12:08:27.851: INFO: Got endpoints: latency-svc-cv6r7 [4.968398948s]
Jun 8 12:08:28.140: INFO: Created: latency-svc-878zb
Jun 8 12:08:28.182: INFO: Got endpoints: latency-svc-878zb [4.926125684s]
Jun 8 12:08:28.482: INFO: Created: latency-svc-rl4c4
Jun 8 12:08:28.676: INFO: Got endpoints: latency-svc-rl4c4 [4.171118269s]
Jun 8 12:08:28.883: INFO: Created: latency-svc-2mmzz
Jun 8 12:08:28.888: INFO: Got endpoints: latency-svc-2mmzz [4.15366777s]
Jun 8 12:08:29.232: INFO: Created: latency-svc-ctxf4
Jun 8 12:08:29.469: INFO: Got endpoints: latency-svc-ctxf4 [4.467101087s]
Jun 8 12:08:29.722: INFO: Created: latency-svc-9h8cr
Jun 8 12:08:29.726: INFO: Got endpoints: latency-svc-9h8cr [4.339765111s]
Jun 8 12:08:30.154: INFO: Created: latency-svc-d4nll
Jun 8 12:08:30.492: INFO: Got endpoints: latency-svc-d4nll [4.860946317s]
Jun 8 12:08:30.499: INFO: Created: latency-svc-zqrb8
Jun 8 12:08:30.551: INFO: Got endpoints: latency-svc-zqrb8 [4.601664574s]
Jun 8 12:08:30.920: INFO: Created: latency-svc-5n7d2
Jun 8 12:08:31.679: INFO: Got endpoints: latency-svc-5n7d2 [5.384858006s]
Jun 8 12:08:32.496: INFO: Created: latency-svc-smzjj
Jun 8 12:08:32.562: INFO: Got endpoints: latency-svc-smzjj [6.096057424s]
Jun 8 12:08:33.316: INFO: Created: latency-svc-xr8r8
Jun 8 12:08:33.319: INFO: Got endpoints: latency-svc-xr8r8 [6.665587507s]
Jun 8 12:08:33.776: INFO: Created: latency-svc-t9mp7
Jun 8 12:08:33.780: INFO: Got endpoints: latency-svc-t9mp7 [6.857652219s]
Jun 8 12:08:34.130: INFO: Created: latency-svc-csvpg
Jun 8 12:08:34.325: INFO: Got endpoints: latency-svc-csvpg [7.210770535s]
Jun 8 12:08:34.398: INFO: Created: latency-svc-dxmgq
Jun 8 12:08:34.409: INFO: Got endpoints: latency-svc-dxmgq [7.123510982s]
Jun 8 12:08:35.114: INFO: Created: latency-svc-2j8n5
Jun 8 12:08:35.967: INFO: Created: latency-svc-jgds8
Jun 8 12:08:36.364: INFO: Got endpoints: latency-svc-2j8n5 [8.721366994s]
Jun 8 12:08:36.409: INFO: Got endpoints: latency-svc-jgds8 [8.558506582s]
Jun 8 12:08:36.596: INFO: Created: latency-svc-kkrgr
Jun 8 12:08:36.613: INFO: Got endpoints: latency-svc-kkrgr [8.431006387s]
Jun 8 12:08:36.811: INFO: Created: latency-svc-fxhxc
Jun 8 12:08:36.814: INFO: Got endpoints: latency-svc-fxhxc [8.13770967s]
Jun 8 12:08:37.093: INFO: Created: latency-svc-dz4vh
Jun 8 12:08:37.355: INFO: Got endpoints: latency-svc-dz4vh [8.466983619s]
Jun 8 12:08:37.382: INFO: Created: latency-svc-n6w9z
Jun 8 12:08:37.446: INFO: Got endpoints: latency-svc-n6w9z [7.977092851s]
Jun 8 12:08:37.637: INFO: Created: latency-svc-cvjfv
Jun 8 12:08:37.688: INFO: Got endpoints: latency-svc-cvjfv [7.962187946s]
Jun 8 12:08:38.008: INFO: Created: latency-svc-n5f8b
Jun 8 12:08:38.040: INFO: Got endpoints: latency-svc-n5f8b [7.547700241s]
Jun 8 12:08:38.434: INFO: Created: latency-svc-n2sxx
Jun 8 12:08:38.696: INFO: Got endpoints: latency-svc-n2sxx [8.144618063s]
Jun 8 12:08:38.697: INFO: Created: latency-svc-v2skl
Jun 8 12:08:38.772: INFO: Got endpoints: latency-svc-v2skl [7.092813175s]
Jun 8 12:08:39.435: INFO: Created: latency-svc-r9thr
Jun 8 12:08:40.999: INFO: Got endpoints: latency-svc-r9thr [8.436447967s]
Jun 8 12:08:43.511: INFO: Created: latency-svc-6ltpq
Jun 8 12:08:44.111: INFO: Got endpoints: latency-svc-6ltpq [10.79263714s]
Jun 8 12:08:45.098: INFO: Created: latency-svc-rzrsm
Jun 8 12:08:45.344: INFO: Got endpoints: latency-svc-rzrsm [11.563419043s]
Jun 8 12:08:45.888: INFO: Created: latency-svc-5gdmp
Jun 8 12:08:45.928: INFO: Got endpoints: latency-svc-5gdmp [11.60272218s]
Jun 8 12:08:46.435: INFO: Created: latency-svc-2pn8n
Jun 8 12:08:47.117: INFO: Created: latency-svc-dlqs2
Jun 8 12:08:47.434: INFO: Got endpoints: latency-svc-2pn8n [13.02451546s]
Jun 8 12:08:47.435: INFO: Got endpoints: latency-svc-dlqs2 [11.070892022s]
Jun 8 12:08:48.294: INFO: Created: latency-svc-85qwr
Jun 8 12:08:49.409: INFO: Got endpoints: latency-svc-85qwr [12.999620409s]
Jun 8 12:08:50.525: INFO: Created: latency-svc-xt2s2
Jun 8 12:08:51.057: INFO: Got endpoints: latency-svc-xt2s2 [14.443403485s]
Jun 8 12:08:52.055: INFO: Created: latency-svc-bdrjd
Jun 8 12:08:52.686: INFO: Got endpoints: latency-svc-bdrjd [15.87131826s]
Jun 8 12:08:52.690: INFO: Created: latency-svc-hn5jr
Jun 8 12:08:52.763: INFO: Got endpoints: latency-svc-hn5jr [15.408128051s]
Jun 8 12:08:53.811: INFO: Created: latency-svc-9jrld
Jun 8 12:08:54.320: INFO: Got endpoints: latency-svc-9jrld [16.873591659s]
Jun 8 12:08:54.817: INFO: Created: latency-svc-bl7zp
Jun 8 12:08:54.819: INFO: Got endpoints: latency-svc-bl7zp [17.131174359s]
Jun 8 12:08:55.219: INFO: Created: latency-svc-4tjp5
Jun 8 12:08:55.225: INFO: Got endpoints: latency-svc-4tjp5 [17.185180619s]
Jun 8 12:08:55.799: INFO: Created: latency-svc-nvj75
Jun 8 12:08:55.806: INFO: Got endpoints: latency-svc-nvj75 [17.109492769s]
Jun 8 12:08:56.350: INFO: Created: latency-svc-q5xcw
Jun 8 12:08:56.354: INFO: Got endpoints: latency-svc-q5xcw [17.582589417s]
Jun 8 12:08:56.630: INFO: Created: latency-svc-p497z
Jun 8 12:08:56.922: INFO: Got endpoints: latency-svc-p497z [15.923431668s]
Jun 8 12:08:57.242: INFO: Created: latency-svc-nwf77
Jun 8 12:08:57.498: INFO: Got endpoints: latency-svc-nwf77 [13.386240764s]
Jun 8 12:08:57.673: INFO: Created: latency-svc-v9h5q
Jun 8 12:08:57.676: INFO: Got endpoints: latency-svc-v9h5q [12.332748682s]
Jun 8 12:08:57.859: INFO: Created: latency-svc-njb7r
Jun 8 12:08:57.861: INFO: Got endpoints: latency-svc-njb7r [11.933373664s]
Jun 8 12:08:57.937: INFO: Created: latency-svc-gwrtm
Jun 8 12:08:58.050: INFO: Got endpoints: latency-svc-gwrtm [10.616091877s]
Jun 8 12:08:58.053: INFO: Created: latency-svc-bxqzc
Jun 8 12:08:58.114: INFO: Got endpoints: latency-svc-bxqzc [10.679483899s]
Jun 8 12:08:58.314: INFO: Created: latency-svc-6lpsq
Jun 8 12:08:58.318: INFO: Got endpoints: latency-svc-6lpsq [8.908793398s]
Jun 8 12:08:58.470: INFO: Created: latency-svc-wlxbs
Jun 8 12:08:58.504: INFO: Got endpoints: latency-svc-wlxbs [7.446978623s]
Jun 8 12:08:58.504: INFO: Latencies: [137.447708ms 215.727772ms 339.333823ms 351.07874ms 446.635878ms 489.344959ms 525.754172ms 607.225337ms 670.734731ms 733.695123ms 779.158155ms 782.491578ms 812.61529ms 812.676365ms 823.014408ms 829.681239ms 832.697553ms 838.002394ms 843.361663ms 852.044143ms 864.198276ms 879.75966ms 894.999918ms 895.089668ms 920.235858ms 933.807845ms 981.011624ms 1.020973683s 1.033997898s 1.034599082s 1.236734618s 1.388366262s 1.397712137s 1.520694688s 1.522149869s 1.537792763s 1.544173693s 1.545133578s 1.571781257s 1.625920307s 1.644916834s 1.732095604s 1.84179288s 1.864315242s 1.873404858s 1.903860866s 2.023197149s 2.061476908s 2.147819284s 2.419060615s 2.57490914s 2.753775818s 2.948234855s 3.143264161s 3.206919509s 3.658066171s 3.711493083s 3.761623541s 3.770019554s 3.831838713s 3.844636348s 3.973178952s 3.979391482s 4.130088127s 4.15366777s 4.158206265s 4.171118269s 4.175992665s 4.183836511s 4.204250932s 4.258459959s 4.324902997s 4.339765111s 4.341489655s 4.349911183s 4.366550805s 4.373388819s 4.404370911s 4.422753132s 4.467101087s 4.601664574s 4.628190782s 4.860946317s 4.879778962s 4.89202861s 4.89658113s 4.926125684s 4.941642519s 4.968398948s 4.970631037s 4.979866694s 4.983321294s 5.027160377s 5.03850965s 5.08031693s 5.100932029s 5.113436567s 5.179713761s 5.214517045s 5.233348323s 5.258163892s 5.272707164s 5.275046576s 5.293760178s 5.327480851s 5.375562357s 5.384858006s 5.39844684s 5.431164229s 5.448999407s 5.456010852s 5.644959825s 5.696007596s 5.731000091s 5.81940031s 5.819957012s 5.9329545s 5.984702106s 5.993339596s 6.000109598s 6.059067617s 6.096057424s 6.137907396s 6.178347061s 6.217151887s 6.299928535s 6.406242711s 6.421321471s 6.507190139s 6.614588628s 6.665587507s 6.768664967s 6.857652219s 6.914459377s 7.092813175s 7.123510982s 7.210770535s 7.237414246s 7.343169962s 7.397551126s 7.446978623s 7.547700241s 7.607651268s 7.707407106s 7.713029024s 7.796582434s 7.890936695s 7.94418112s 7.962187946s 7.977092851s 8.13770967s 8.144618063s 8.188615529s 8.277435232s 8.302829836s 8.414927321s 8.431006387s 8.436447967s 8.445142567s 8.466983619s 8.538360764s 8.557743566s 8.558506582s 8.721366994s 8.731461673s 8.738929333s 8.840242869s 8.844655337s 8.875062398s 8.904350054s 8.908793398s 8.940445814s 8.98514835s 9.018999331s 9.12139034s 9.222665763s 9.246109328s 9.329151339s 9.336366062s 9.419212458s 10.616091877s 10.679483899s 10.79263714s 11.070892022s 11.563419043s 11.60272218s 11.933373664s 12.332748682s 12.999620409s 13.02451546s 13.386240764s 14.443403485s 15.408128051s 15.87131826s 15.923431668s 16.873591659s 17.109492769s 17.131174359s 17.185180619s 17.582589417s]
Jun 8 12:08:58.504: INFO: 50 %ile: 5.258163892s
Jun 8 12:08:58.504: INFO: 90 %ile: 10.616091877s
Jun 8 12:08:58.504: INFO: 99 %ile: 17.185180619s
Jun 8 12:08:58.504: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 12:08:58.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-dtm54" for this suite.
Jun 8 12:09:32.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 12:09:32.739: INFO: namespace: e2e-tests-svc-latency-dtm54, resource: bindings, ignored listing per whitelist
Jun 8 12:09:32.797: INFO: namespace e2e-tests-svc-latency-dtm54 deletion completed in 34.153212871s

• [SLOW TEST:115.669 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 12:09:32.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jun 8 12:09:32.897: INFO: Waiting up to 5m0s for pod "pod-e9a21879-a980-11ea-978f-0242ac110018" in namespace "e2e-tests-emptydir-g6qqd" to be "success or failure"
Jun 8 12:09:32.900: INFO: Pod "pod-e9a21879-a980-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.302507ms
Jun 8 12:09:35.023: INFO: Pod "pod-e9a21879-a980-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126126351s
Jun 8 12:09:37.027: INFO: Pod "pod-e9a21879-a980-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.130410819s
STEP: Saw pod success
Jun 8 12:09:37.027: INFO: Pod "pod-e9a21879-a980-11ea-978f-0242ac110018" satisfied condition "success or failure"
Jun 8 12:09:37.030: INFO: Trying to get logs from node hunter-worker pod pod-e9a21879-a980-11ea-978f-0242ac110018 container test-container:
STEP: delete the pod
Jun 8 12:09:37.243: INFO: Waiting for pod pod-e9a21879-a980-11ea-978f-0242ac110018 to disappear
Jun 8 12:09:37.374: INFO: Pod pod-e9a21879-a980-11ea-978f-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 12:09:37.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-g6qqd" for this suite.
Jun 8 12:09:43.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 12:09:43.488: INFO: namespace: e2e-tests-emptydir-g6qqd, resource: bindings, ignored listing per whitelist
Jun 8 12:09:43.515: INFO: namespace e2e-tests-emptydir-g6qqd deletion completed in 6.137399547s

• [SLOW TEST:10.718 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 12:09:43.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun 8 12:09:43.610: INFO: Waiting up to 5m0s for pod "pod-f004735f-a980-11ea-978f-0242ac110018" in namespace "e2e-tests-emptydir-b9spw" to be "success or failure"
Jun 8 12:09:43.613: INFO: Pod "pod-f004735f-a980-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.853443ms
Jun 8 12:09:45.682: INFO: Pod "pod-f004735f-a980-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071976927s
Jun 8 12:09:47.733: INFO: Pod "pod-f004735f-a980-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.123678163s
STEP: Saw pod success
Jun 8 12:09:47.733: INFO: Pod "pod-f004735f-a980-11ea-978f-0242ac110018" satisfied condition "success or failure"
Jun 8 12:09:47.736: INFO: Trying to get logs from node hunter-worker2 pod pod-f004735f-a980-11ea-978f-0242ac110018 container test-container:
STEP: delete the pod
Jun 8 12:09:47.766: INFO: Waiting for pod pod-f004735f-a980-11ea-978f-0242ac110018 to disappear
Jun 8 12:09:47.919: INFO: Pod pod-f004735f-a980-11ea-978f-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 12:09:47.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-b9spw" for this suite.
Jun 8 12:09:53.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 12:09:54.060: INFO: namespace: e2e-tests-emptydir-b9spw, resource: bindings, ignored listing per whitelist
Jun 8 12:09:54.065: INFO: namespace e2e-tests-emptydir-b9spw deletion completed in 6.143199313s

• [SLOW TEST:10.550 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 12:09:54.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jun 8 12:10:08.227: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-997gr PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 8 12:10:08.227: INFO: >>> kubeConfig: /root/.kube/config
I0608 12:10:08.263485 6 log.go:172] (0xc000df7600) (0xc0021435e0) Create stream
I0608 12:10:08.263514 6 log.go:172] (0xc000df7600) (0xc0021435e0) Stream added, broadcasting: 1
I0608 12:10:08.265932 6 log.go:172] (0xc000df7600) Reply frame received for 1
I0608 12:10:08.265989 6 log.go:172] (0xc000df7600) (0xc002200640) Create stream
I0608 12:10:08.266006 6 log.go:172] (0xc000df7600) (0xc002200640) Stream added, broadcasting: 3
I0608 12:10:08.267220 6 log.go:172] (0xc000df7600) Reply frame received for 3
I0608 12:10:08.267263 6 log.go:172] (0xc000df7600) (0xc001b89a40) Create stream
I0608 12:10:08.267287 6 log.go:172] (0xc000df7600) (0xc001b89a40) Stream added, broadcasting: 5
I0608 12:10:08.268448 6 log.go:172] (0xc000df7600) Reply frame received for 5
I0608 12:10:08.333574 6 log.go:172] (0xc000df7600) Data frame received for 3
I0608 12:10:08.333624 6 log.go:172] (0xc002200640) (3) Data frame handling
I0608 12:10:08.333661 6 log.go:172] (0xc002200640) (3) Data frame
sent I0608 12:10:08.333681 6 log.go:172] (0xc000df7600) Data frame received for 3 I0608 12:10:08.333702 6 log.go:172] (0xc002200640) (3) Data frame handling I0608 12:10:08.333740 6 log.go:172] (0xc000df7600) Data frame received for 5 I0608 12:10:08.333754 6 log.go:172] (0xc001b89a40) (5) Data frame handling I0608 12:10:08.335123 6 log.go:172] (0xc000df7600) Data frame received for 1 I0608 12:10:08.335152 6 log.go:172] (0xc0021435e0) (1) Data frame handling I0608 12:10:08.335176 6 log.go:172] (0xc0021435e0) (1) Data frame sent I0608 12:10:08.335199 6 log.go:172] (0xc000df7600) (0xc0021435e0) Stream removed, broadcasting: 1 I0608 12:10:08.335225 6 log.go:172] (0xc000df7600) Go away received I0608 12:10:08.335456 6 log.go:172] (0xc000df7600) (0xc0021435e0) Stream removed, broadcasting: 1 I0608 12:10:08.335483 6 log.go:172] (0xc000df7600) (0xc002200640) Stream removed, broadcasting: 3 I0608 12:10:08.335504 6 log.go:172] (0xc000df7600) (0xc001b89a40) Stream removed, broadcasting: 5 Jun 8 12:10:08.335: INFO: Exec stderr: "" Jun 8 12:10:08.335: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-997gr PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 8 12:10:08.335: INFO: >>> kubeConfig: /root/.kube/config I0608 12:10:08.362210 6 log.go:172] (0xc000e5c370) (0xc001b89d60) Create stream I0608 12:10:08.362236 6 log.go:172] (0xc000e5c370) (0xc001b89d60) Stream added, broadcasting: 1 I0608 12:10:08.363677 6 log.go:172] (0xc000e5c370) Reply frame received for 1 I0608 12:10:08.363717 6 log.go:172] (0xc000e5c370) (0xc002340640) Create stream I0608 12:10:08.363726 6 log.go:172] (0xc000e5c370) (0xc002340640) Stream added, broadcasting: 3 I0608 12:10:08.364536 6 log.go:172] (0xc000e5c370) Reply frame received for 3 I0608 12:10:08.364558 6 log.go:172] (0xc000e5c370) (0xc0023406e0) Create stream I0608 12:10:08.364566 6 log.go:172] (0xc000e5c370) (0xc0023406e0) Stream added, 
broadcasting: 5 I0608 12:10:08.365527 6 log.go:172] (0xc000e5c370) Reply frame received for 5 I0608 12:10:08.426649 6 log.go:172] (0xc000e5c370) Data frame received for 5 I0608 12:10:08.426671 6 log.go:172] (0xc0023406e0) (5) Data frame handling I0608 12:10:08.426697 6 log.go:172] (0xc000e5c370) Data frame received for 3 I0608 12:10:08.426705 6 log.go:172] (0xc002340640) (3) Data frame handling I0608 12:10:08.426717 6 log.go:172] (0xc002340640) (3) Data frame sent I0608 12:10:08.426725 6 log.go:172] (0xc000e5c370) Data frame received for 3 I0608 12:10:08.426735 6 log.go:172] (0xc002340640) (3) Data frame handling I0608 12:10:08.427841 6 log.go:172] (0xc000e5c370) Data frame received for 1 I0608 12:10:08.427871 6 log.go:172] (0xc001b89d60) (1) Data frame handling I0608 12:10:08.427896 6 log.go:172] (0xc001b89d60) (1) Data frame sent I0608 12:10:08.427929 6 log.go:172] (0xc000e5c370) (0xc001b89d60) Stream removed, broadcasting: 1 I0608 12:10:08.427956 6 log.go:172] (0xc000e5c370) Go away received I0608 12:10:08.428044 6 log.go:172] (0xc000e5c370) (0xc001b89d60) Stream removed, broadcasting: 1 I0608 12:10:08.428078 6 log.go:172] (0xc000e5c370) (0xc002340640) Stream removed, broadcasting: 3 I0608 12:10:08.428106 6 log.go:172] (0xc000e5c370) (0xc0023406e0) Stream removed, broadcasting: 5 Jun 8 12:10:08.428: INFO: Exec stderr: "" Jun 8 12:10:08.428: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-997gr PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 8 12:10:08.428: INFO: >>> kubeConfig: /root/.kube/config I0608 12:10:08.451848 6 log.go:172] (0xc001f0a2c0) (0xc000af25a0) Create stream I0608 12:10:08.451873 6 log.go:172] (0xc001f0a2c0) (0xc000af25a0) Stream added, broadcasting: 1 I0608 12:10:08.453402 6 log.go:172] (0xc001f0a2c0) Reply frame received for 1 I0608 12:10:08.453426 6 log.go:172] (0xc001f0a2c0) (0xc002340780) Create stream I0608 12:10:08.453434 6 
log.go:172] (0xc001f0a2c0) (0xc002340780) Stream added, broadcasting: 3 I0608 12:10:08.454118 6 log.go:172] (0xc001f0a2c0) Reply frame received for 3 I0608 12:10:08.454149 6 log.go:172] (0xc001f0a2c0) (0xc0022006e0) Create stream I0608 12:10:08.454160 6 log.go:172] (0xc001f0a2c0) (0xc0022006e0) Stream added, broadcasting: 5 I0608 12:10:08.454816 6 log.go:172] (0xc001f0a2c0) Reply frame received for 5 I0608 12:10:08.507497 6 log.go:172] (0xc001f0a2c0) Data frame received for 5 I0608 12:10:08.507527 6 log.go:172] (0xc0022006e0) (5) Data frame handling I0608 12:10:08.507563 6 log.go:172] (0xc001f0a2c0) Data frame received for 3 I0608 12:10:08.507574 6 log.go:172] (0xc002340780) (3) Data frame handling I0608 12:10:08.507591 6 log.go:172] (0xc002340780) (3) Data frame sent I0608 12:10:08.507603 6 log.go:172] (0xc001f0a2c0) Data frame received for 3 I0608 12:10:08.507613 6 log.go:172] (0xc002340780) (3) Data frame handling I0608 12:10:08.508702 6 log.go:172] (0xc001f0a2c0) Data frame received for 1 I0608 12:10:08.508717 6 log.go:172] (0xc000af25a0) (1) Data frame handling I0608 12:10:08.508728 6 log.go:172] (0xc000af25a0) (1) Data frame sent I0608 12:10:08.508765 6 log.go:172] (0xc001f0a2c0) (0xc000af25a0) Stream removed, broadcasting: 1 I0608 12:10:08.508838 6 log.go:172] (0xc001f0a2c0) Go away received I0608 12:10:08.508879 6 log.go:172] (0xc001f0a2c0) (0xc000af25a0) Stream removed, broadcasting: 1 I0608 12:10:08.508905 6 log.go:172] (0xc001f0a2c0) (0xc002340780) Stream removed, broadcasting: 3 I0608 12:10:08.508918 6 log.go:172] (0xc001f0a2c0) (0xc0022006e0) Stream removed, broadcasting: 5 Jun 8 12:10:08.508: INFO: Exec stderr: "" Jun 8 12:10:08.508: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-997gr PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 8 12:10:08.508: INFO: >>> kubeConfig: /root/.kube/config I0608 12:10:08.533045 6 log.go:172] 
(0xc000e5c840) (0xc000cf81e0) Create stream I0608 12:10:08.533069 6 log.go:172] (0xc000e5c840) (0xc000cf81e0) Stream added, broadcasting: 1 I0608 12:10:08.535445 6 log.go:172] (0xc000e5c840) Reply frame received for 1 I0608 12:10:08.535491 6 log.go:172] (0xc000e5c840) (0xc002200820) Create stream I0608 12:10:08.535502 6 log.go:172] (0xc000e5c840) (0xc002200820) Stream added, broadcasting: 3 I0608 12:10:08.536179 6 log.go:172] (0xc000e5c840) Reply frame received for 3 I0608 12:10:08.536217 6 log.go:172] (0xc000e5c840) (0xc002340820) Create stream I0608 12:10:08.536229 6 log.go:172] (0xc000e5c840) (0xc002340820) Stream added, broadcasting: 5 I0608 12:10:08.536816 6 log.go:172] (0xc000e5c840) Reply frame received for 5 I0608 12:10:08.576530 6 log.go:172] (0xc000e5c840) Data frame received for 5 I0608 12:10:08.576566 6 log.go:172] (0xc002340820) (5) Data frame handling I0608 12:10:08.576593 6 log.go:172] (0xc000e5c840) Data frame received for 3 I0608 12:10:08.576606 6 log.go:172] (0xc002200820) (3) Data frame handling I0608 12:10:08.576620 6 log.go:172] (0xc002200820) (3) Data frame sent I0608 12:10:08.576632 6 log.go:172] (0xc000e5c840) Data frame received for 3 I0608 12:10:08.576643 6 log.go:172] (0xc002200820) (3) Data frame handling I0608 12:10:08.577870 6 log.go:172] (0xc000e5c840) Data frame received for 1 I0608 12:10:08.577889 6 log.go:172] (0xc000cf81e0) (1) Data frame handling I0608 12:10:08.577901 6 log.go:172] (0xc000cf81e0) (1) Data frame sent I0608 12:10:08.577917 6 log.go:172] (0xc000e5c840) (0xc000cf81e0) Stream removed, broadcasting: 1 I0608 12:10:08.577934 6 log.go:172] (0xc000e5c840) Go away received I0608 12:10:08.578047 6 log.go:172] (0xc000e5c840) (0xc000cf81e0) Stream removed, broadcasting: 1 I0608 12:10:08.578065 6 log.go:172] (0xc000e5c840) (0xc002200820) Stream removed, broadcasting: 3 I0608 12:10:08.578078 6 log.go:172] (0xc000e5c840) (0xc002340820) Stream removed, broadcasting: 5 Jun 8 12:10:08.578: INFO: Exec stderr: "" STEP: Verifying 
/etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jun 8 12:10:08.578: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-997gr PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 8 12:10:08.578: INFO: >>> kubeConfig: /root/.kube/config I0608 12:10:08.604290 6 log.go:172] (0xc0020e82c0) (0xc002200aa0) Create stream I0608 12:10:08.604320 6 log.go:172] (0xc0020e82c0) (0xc002200aa0) Stream added, broadcasting: 1 I0608 12:10:08.605982 6 log.go:172] (0xc0020e82c0) Reply frame received for 1 I0608 12:10:08.606081 6 log.go:172] (0xc0020e82c0) (0xc000af2640) Create stream I0608 12:10:08.606103 6 log.go:172] (0xc0020e82c0) (0xc000af2640) Stream added, broadcasting: 3 I0608 12:10:08.606995 6 log.go:172] (0xc0020e82c0) Reply frame received for 3 I0608 12:10:08.607013 6 log.go:172] (0xc0020e82c0) (0xc000cf8280) Create stream I0608 12:10:08.607026 6 log.go:172] (0xc0020e82c0) (0xc000cf8280) Stream added, broadcasting: 5 I0608 12:10:08.607738 6 log.go:172] (0xc0020e82c0) Reply frame received for 5 I0608 12:10:08.662739 6 log.go:172] (0xc0020e82c0) Data frame received for 5 I0608 12:10:08.662781 6 log.go:172] (0xc000cf8280) (5) Data frame handling I0608 12:10:08.662828 6 log.go:172] (0xc0020e82c0) Data frame received for 3 I0608 12:10:08.662858 6 log.go:172] (0xc000af2640) (3) Data frame handling I0608 12:10:08.662896 6 log.go:172] (0xc000af2640) (3) Data frame sent I0608 12:10:08.662911 6 log.go:172] (0xc0020e82c0) Data frame received for 3 I0608 12:10:08.662924 6 log.go:172] (0xc000af2640) (3) Data frame handling I0608 12:10:08.664082 6 log.go:172] (0xc0020e82c0) Data frame received for 1 I0608 12:10:08.664148 6 log.go:172] (0xc002200aa0) (1) Data frame handling I0608 12:10:08.664180 6 log.go:172] (0xc002200aa0) (1) Data frame sent I0608 12:10:08.664211 6 log.go:172] (0xc0020e82c0) (0xc002200aa0) Stream removed, broadcasting: 1 I0608 
12:10:08.664296 6 log.go:172] (0xc0020e82c0) (0xc002200aa0) Stream removed, broadcasting: 1 I0608 12:10:08.664323 6 log.go:172] (0xc0020e82c0) (0xc000af2640) Stream removed, broadcasting: 3 I0608 12:10:08.664341 6 log.go:172] (0xc0020e82c0) (0xc000cf8280) Stream removed, broadcasting: 5 Jun 8 12:10:08.664: INFO: Exec stderr: "" Jun 8 12:10:08.664: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-997gr PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} I0608 12:10:08.664402 6 log.go:172] (0xc0020e82c0) Go away received Jun 8 12:10:08.664: INFO: >>> kubeConfig: /root/.kube/config I0608 12:10:08.695445 6 log.go:172] (0xc000e5cd10) (0xc000cf85a0) Create stream I0608 12:10:08.695470 6 log.go:172] (0xc000e5cd10) (0xc000cf85a0) Stream added, broadcasting: 1 I0608 12:10:08.697473 6 log.go:172] (0xc000e5cd10) Reply frame received for 1 I0608 12:10:08.697510 6 log.go:172] (0xc000e5cd10) (0xc002143680) Create stream I0608 12:10:08.697524 6 log.go:172] (0xc000e5cd10) (0xc002143680) Stream added, broadcasting: 3 I0608 12:10:08.698698 6 log.go:172] (0xc000e5cd10) Reply frame received for 3 I0608 12:10:08.698734 6 log.go:172] (0xc000e5cd10) (0xc000cf8640) Create stream I0608 12:10:08.698753 6 log.go:172] (0xc000e5cd10) (0xc000cf8640) Stream added, broadcasting: 5 I0608 12:10:08.699536 6 log.go:172] (0xc000e5cd10) Reply frame received for 5 I0608 12:10:08.760273 6 log.go:172] (0xc000e5cd10) Data frame received for 5 I0608 12:10:08.760333 6 log.go:172] (0xc000cf8640) (5) Data frame handling I0608 12:10:08.760360 6 log.go:172] (0xc000e5cd10) Data frame received for 3 I0608 12:10:08.760371 6 log.go:172] (0xc002143680) (3) Data frame handling I0608 12:10:08.760388 6 log.go:172] (0xc002143680) (3) Data frame sent I0608 12:10:08.760400 6 log.go:172] (0xc000e5cd10) Data frame received for 3 I0608 12:10:08.760409 6 log.go:172] (0xc002143680) (3) Data frame handling I0608 
12:10:08.761593 6 log.go:172] (0xc000e5cd10) Data frame received for 1 I0608 12:10:08.761609 6 log.go:172] (0xc000cf85a0) (1) Data frame handling I0608 12:10:08.761616 6 log.go:172] (0xc000cf85a0) (1) Data frame sent I0608 12:10:08.761629 6 log.go:172] (0xc000e5cd10) (0xc000cf85a0) Stream removed, broadcasting: 1 I0608 12:10:08.761645 6 log.go:172] (0xc000e5cd10) Go away received I0608 12:10:08.761726 6 log.go:172] (0xc000e5cd10) (0xc000cf85a0) Stream removed, broadcasting: 1 I0608 12:10:08.761751 6 log.go:172] (0xc000e5cd10) (0xc002143680) Stream removed, broadcasting: 3 I0608 12:10:08.761764 6 log.go:172] (0xc000e5cd10) (0xc000cf8640) Stream removed, broadcasting: 5 Jun 8 12:10:08.761: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jun 8 12:10:08.761: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-997gr PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 8 12:10:08.761: INFO: >>> kubeConfig: /root/.kube/config I0608 12:10:08.783894 6 log.go:172] (0xc001f0a630) (0xc000af2820) Create stream I0608 12:10:08.783920 6 log.go:172] (0xc001f0a630) (0xc000af2820) Stream added, broadcasting: 1 I0608 12:10:08.786530 6 log.go:172] (0xc001f0a630) Reply frame received for 1 I0608 12:10:08.786569 6 log.go:172] (0xc001f0a630) (0xc0023408c0) Create stream I0608 12:10:08.786581 6 log.go:172] (0xc001f0a630) (0xc0023408c0) Stream added, broadcasting: 3 I0608 12:10:08.787366 6 log.go:172] (0xc001f0a630) Reply frame received for 3 I0608 12:10:08.787408 6 log.go:172] (0xc001f0a630) (0xc000af2960) Create stream I0608 12:10:08.787426 6 log.go:172] (0xc001f0a630) (0xc000af2960) Stream added, broadcasting: 5 I0608 12:10:08.788228 6 log.go:172] (0xc001f0a630) Reply frame received for 5 I0608 12:10:08.856386 6 log.go:172] (0xc001f0a630) Data frame received for 5 I0608 12:10:08.856410 6 log.go:172] 
(0xc000af2960) (5) Data frame handling I0608 12:10:08.856430 6 log.go:172] (0xc001f0a630) Data frame received for 3 I0608 12:10:08.856440 6 log.go:172] (0xc0023408c0) (3) Data frame handling I0608 12:10:08.856450 6 log.go:172] (0xc0023408c0) (3) Data frame sent I0608 12:10:08.856459 6 log.go:172] (0xc001f0a630) Data frame received for 3 I0608 12:10:08.856467 6 log.go:172] (0xc0023408c0) (3) Data frame handling I0608 12:10:08.857835 6 log.go:172] (0xc001f0a630) Data frame received for 1 I0608 12:10:08.857866 6 log.go:172] (0xc000af2820) (1) Data frame handling I0608 12:10:08.857884 6 log.go:172] (0xc000af2820) (1) Data frame sent I0608 12:10:08.857900 6 log.go:172] (0xc001f0a630) (0xc000af2820) Stream removed, broadcasting: 1 I0608 12:10:08.857918 6 log.go:172] (0xc001f0a630) Go away received I0608 12:10:08.858086 6 log.go:172] (0xc001f0a630) (0xc000af2820) Stream removed, broadcasting: 1 I0608 12:10:08.858120 6 log.go:172] (0xc001f0a630) (0xc0023408c0) Stream removed, broadcasting: 3 I0608 12:10:08.858138 6 log.go:172] (0xc001f0a630) (0xc000af2960) Stream removed, broadcasting: 5 Jun 8 12:10:08.858: INFO: Exec stderr: "" Jun 8 12:10:08.858: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-997gr PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 8 12:10:08.858: INFO: >>> kubeConfig: /root/.kube/config I0608 12:10:08.884331 6 log.go:172] (0xc0020e8790) (0xc002200e60) Create stream I0608 12:10:08.884361 6 log.go:172] (0xc0020e8790) (0xc002200e60) Stream added, broadcasting: 1 I0608 12:10:08.886407 6 log.go:172] (0xc0020e8790) Reply frame received for 1 I0608 12:10:08.886450 6 log.go:172] (0xc0020e8790) (0xc002143860) Create stream I0608 12:10:08.886462 6 log.go:172] (0xc0020e8790) (0xc002143860) Stream added, broadcasting: 3 I0608 12:10:08.887385 6 log.go:172] (0xc0020e8790) Reply frame received for 3 I0608 12:10:08.887412 6 log.go:172] 
(0xc0020e8790) (0xc002143900) Create stream I0608 12:10:08.887423 6 log.go:172] (0xc0020e8790) (0xc002143900) Stream added, broadcasting: 5 I0608 12:10:08.888199 6 log.go:172] (0xc0020e8790) Reply frame received for 5 I0608 12:10:08.964147 6 log.go:172] (0xc0020e8790) Data frame received for 5 I0608 12:10:08.964191 6 log.go:172] (0xc002143900) (5) Data frame handling I0608 12:10:08.964218 6 log.go:172] (0xc0020e8790) Data frame received for 3 I0608 12:10:08.964229 6 log.go:172] (0xc002143860) (3) Data frame handling I0608 12:10:08.964245 6 log.go:172] (0xc002143860) (3) Data frame sent I0608 12:10:08.964255 6 log.go:172] (0xc0020e8790) Data frame received for 3 I0608 12:10:08.964267 6 log.go:172] (0xc002143860) (3) Data frame handling I0608 12:10:08.965814 6 log.go:172] (0xc0020e8790) Data frame received for 1 I0608 12:10:08.965845 6 log.go:172] (0xc002200e60) (1) Data frame handling I0608 12:10:08.965885 6 log.go:172] (0xc002200e60) (1) Data frame sent I0608 12:10:08.965909 6 log.go:172] (0xc0020e8790) (0xc002200e60) Stream removed, broadcasting: 1 I0608 12:10:08.965933 6 log.go:172] (0xc0020e8790) Go away received I0608 12:10:08.966025 6 log.go:172] (0xc0020e8790) (0xc002200e60) Stream removed, broadcasting: 1 I0608 12:10:08.966053 6 log.go:172] (0xc0020e8790) (0xc002143860) Stream removed, broadcasting: 3 I0608 12:10:08.966067 6 log.go:172] (0xc0020e8790) (0xc002143900) Stream removed, broadcasting: 5 Jun 8 12:10:08.966: INFO: Exec stderr: "" Jun 8 12:10:08.966: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-997gr PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 8 12:10:08.966: INFO: >>> kubeConfig: /root/.kube/config I0608 12:10:08.997601 6 log.go:172] (0xc001abe2c0) (0xc002340be0) Create stream I0608 12:10:08.997627 6 log.go:172] (0xc001abe2c0) (0xc002340be0) Stream added, broadcasting: 1 I0608 12:10:08.999483 6 log.go:172] (0xc001abe2c0) 
Reply frame received for 1 I0608 12:10:08.999519 6 log.go:172] (0xc001abe2c0) (0xc000cf86e0) Create stream I0608 12:10:08.999533 6 log.go:172] (0xc001abe2c0) (0xc000cf86e0) Stream added, broadcasting: 3 I0608 12:10:09.000298 6 log.go:172] (0xc001abe2c0) Reply frame received for 3 I0608 12:10:09.000334 6 log.go:172] (0xc001abe2c0) (0xc002340c80) Create stream I0608 12:10:09.000351 6 log.go:172] (0xc001abe2c0) (0xc002340c80) Stream added, broadcasting: 5 I0608 12:10:09.001319 6 log.go:172] (0xc001abe2c0) Reply frame received for 5 I0608 12:10:09.076670 6 log.go:172] (0xc001abe2c0) Data frame received for 5 I0608 12:10:09.076723 6 log.go:172] (0xc002340c80) (5) Data frame handling I0608 12:10:09.076779 6 log.go:172] (0xc001abe2c0) Data frame received for 3 I0608 12:10:09.076801 6 log.go:172] (0xc000cf86e0) (3) Data frame handling I0608 12:10:09.076830 6 log.go:172] (0xc000cf86e0) (3) Data frame sent I0608 12:10:09.076849 6 log.go:172] (0xc001abe2c0) Data frame received for 3 I0608 12:10:09.076866 6 log.go:172] (0xc000cf86e0) (3) Data frame handling I0608 12:10:09.078398 6 log.go:172] (0xc001abe2c0) Data frame received for 1 I0608 12:10:09.078471 6 log.go:172] (0xc002340be0) (1) Data frame handling I0608 12:10:09.078524 6 log.go:172] (0xc002340be0) (1) Data frame sent I0608 12:10:09.078551 6 log.go:172] (0xc001abe2c0) (0xc002340be0) Stream removed, broadcasting: 1 I0608 12:10:09.078577 6 log.go:172] (0xc001abe2c0) Go away received I0608 12:10:09.078664 6 log.go:172] (0xc001abe2c0) (0xc002340be0) Stream removed, broadcasting: 1 I0608 12:10:09.078699 6 log.go:172] (0xc001abe2c0) (0xc000cf86e0) Stream removed, broadcasting: 3 I0608 12:10:09.078760 6 log.go:172] (0xc001abe2c0) (0xc002340c80) Stream removed, broadcasting: 5 Jun 8 12:10:09.078: INFO: Exec stderr: "" Jun 8 12:10:09.078: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-997gr PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false} Jun 8 12:10:09.078: INFO: >>> kubeConfig: /root/.kube/config I0608 12:10:09.111475 6 log.go:172] (0xc001abe790) (0xc002341040) Create stream I0608 12:10:09.111514 6 log.go:172] (0xc001abe790) (0xc002341040) Stream added, broadcasting: 1 I0608 12:10:09.113903 6 log.go:172] (0xc001abe790) Reply frame received for 1 I0608 12:10:09.113935 6 log.go:172] (0xc001abe790) (0xc002200f00) Create stream I0608 12:10:09.113947 6 log.go:172] (0xc001abe790) (0xc002200f00) Stream added, broadcasting: 3 I0608 12:10:09.114923 6 log.go:172] (0xc001abe790) Reply frame received for 3 I0608 12:10:09.114963 6 log.go:172] (0xc001abe790) (0xc000af2a00) Create stream I0608 12:10:09.114974 6 log.go:172] (0xc001abe790) (0xc000af2a00) Stream added, broadcasting: 5 I0608 12:10:09.115863 6 log.go:172] (0xc001abe790) Reply frame received for 5 I0608 12:10:09.170275 6 log.go:172] (0xc001abe790) Data frame received for 5 I0608 12:10:09.170308 6 log.go:172] (0xc000af2a00) (5) Data frame handling I0608 12:10:09.170328 6 log.go:172] (0xc001abe790) Data frame received for 3 I0608 12:10:09.170338 6 log.go:172] (0xc002200f00) (3) Data frame handling I0608 12:10:09.170354 6 log.go:172] (0xc002200f00) (3) Data frame sent I0608 12:10:09.170364 6 log.go:172] (0xc001abe790) Data frame received for 3 I0608 12:10:09.170377 6 log.go:172] (0xc002200f00) (3) Data frame handling I0608 12:10:09.171326 6 log.go:172] (0xc001abe790) Data frame received for 1 I0608 12:10:09.171359 6 log.go:172] (0xc002341040) (1) Data frame handling I0608 12:10:09.171386 6 log.go:172] (0xc002341040) (1) Data frame sent I0608 12:10:09.171417 6 log.go:172] (0xc001abe790) (0xc002341040) Stream removed, broadcasting: 1 I0608 12:10:09.171446 6 log.go:172] (0xc001abe790) Go away received I0608 12:10:09.171627 6 log.go:172] (0xc001abe790) (0xc002341040) Stream removed, broadcasting: 1 I0608 12:10:09.171643 6 log.go:172] (0xc001abe790) (0xc002200f00) Stream removed, broadcasting: 3 I0608 
12:10:09.171655       6 log.go:172] (0xc001abe790) (0xc000af2a00) Stream removed, broadcasting: 5
Jun 8 12:10:09.171: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 12:10:09.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-997gr" for this suite.
Jun 8 12:10:53.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 12:10:53.248: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-997gr, resource: bindings, ignored listing per whitelist
Jun 8 12:10:53.283: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-997gr deletion completed in 44.109280278s
• [SLOW TEST:59.218 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Services
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 12:10:53.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 12:10:53.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-78gsq" for this suite.
Jun 8 12:10:59.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 12:10:59.426: INFO: namespace: e2e-tests-services-78gsq, resource: bindings, ignored listing per whitelist
Jun 8 12:10:59.466: INFO: namespace e2e-tests-services-78gsq deletion completed in 6.071685082s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:6.183 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 12:10:59.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 12:11:05.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-5r4fv" for this suite.
Jun 8 12:11:11.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 12:11:11.976: INFO: namespace: e2e-tests-namespaces-5r4fv, resource: bindings, ignored listing per whitelist
Jun 8 12:11:12.030: INFO: namespace e2e-tests-namespaces-5r4fv deletion completed in 6.115617429s
STEP: Destroying namespace "e2e-tests-nsdeletetest-qp9j9" for this suite.
Jun 8 12:11:12.032: INFO: Namespace e2e-tests-nsdeletetest-qp9j9 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-tljtk" for this suite.
Jun 8 12:11:18.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 12:11:18.094: INFO: namespace: e2e-tests-nsdeletetest-tljtk, resource: bindings, ignored listing per whitelist
Jun 8 12:11:18.122: INFO: namespace e2e-tests-nsdeletetest-tljtk deletion completed in 6.090059184s
• [SLOW TEST:18.655 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 12:11:18.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jun 8 12:11:18.259: INFO: Waiting up to 5m0s for pod "downwardapi-volume-287013d9-a981-11ea-978f-0242ac110018" in namespace "e2e-tests-projected-wmth9" to be "success or failure"
Jun 8 12:11:18.274: INFO: Pod "downwardapi-volume-287013d9-a981-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 15.561432ms
Jun 8 12:11:20.480: INFO: Pod "downwardapi-volume-287013d9-a981-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221244275s
Jun 8 12:11:22.492: INFO: Pod "downwardapi-volume-287013d9-a981-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.233474462s
STEP: Saw pod success
Jun 8 12:11:22.492: INFO: Pod "downwardapi-volume-287013d9-a981-11ea-978f-0242ac110018" satisfied condition "success or failure"
Jun 8 12:11:22.495: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-287013d9-a981-11ea-978f-0242ac110018 container client-container:
STEP: delete the pod
Jun 8 12:11:22.532: INFO: Waiting for pod downwardapi-volume-287013d9-a981-11ea-978f-0242ac110018 to disappear
Jun 8 12:11:22.551: INFO: Pod downwardapi-volume-287013d9-a981-11ea-978f-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 12:11:22.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wmth9" for this suite.
Jun 8 12:11:28.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:11:28.606: INFO: namespace: e2e-tests-projected-wmth9, resource: bindings, ignored listing per whitelist Jun 8 12:11:28.649: INFO: namespace e2e-tests-projected-wmth9 deletion completed in 6.094947297s • [SLOW TEST:10.527 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:11:28.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-47d6 STEP: Creating a pod to test atomic-volume-subpath Jun 8 12:11:28.782: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-47d6" in namespace "e2e-tests-subpath-xj5lg" to be "success or failure" Jun 8 12:11:28.786: INFO: Pod "pod-subpath-test-downwardapi-47d6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.950041ms Jun 8 12:11:30.790: INFO: Pod "pod-subpath-test-downwardapi-47d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007911404s Jun 8 12:11:32.793: INFO: Pod "pod-subpath-test-downwardapi-47d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011296485s Jun 8 12:11:34.897: INFO: Pod "pod-subpath-test-downwardapi-47d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115163743s Jun 8 12:11:36.899: INFO: Pod "pod-subpath-test-downwardapi-47d6": Phase="Running", Reason="", readiness=false. Elapsed: 8.11746008s Jun 8 12:11:38.903: INFO: Pod "pod-subpath-test-downwardapi-47d6": Phase="Running", Reason="", readiness=false. Elapsed: 10.121149271s Jun 8 12:11:40.906: INFO: Pod "pod-subpath-test-downwardapi-47d6": Phase="Running", Reason="", readiness=false. Elapsed: 12.123986874s Jun 8 12:11:42.927: INFO: Pod "pod-subpath-test-downwardapi-47d6": Phase="Running", Reason="", readiness=false. Elapsed: 14.14561474s Jun 8 12:11:44.931: INFO: Pod "pod-subpath-test-downwardapi-47d6": Phase="Running", Reason="", readiness=false. Elapsed: 16.148752622s Jun 8 12:11:46.934: INFO: Pod "pod-subpath-test-downwardapi-47d6": Phase="Running", Reason="", readiness=false. Elapsed: 18.151903914s Jun 8 12:11:48.938: INFO: Pod "pod-subpath-test-downwardapi-47d6": Phase="Running", Reason="", readiness=false. Elapsed: 20.155935591s Jun 8 12:11:50.942: INFO: Pod "pod-subpath-test-downwardapi-47d6": Phase="Running", Reason="", readiness=false. Elapsed: 22.16028466s Jun 8 12:11:52.945: INFO: Pod "pod-subpath-test-downwardapi-47d6": Phase="Running", Reason="", readiness=false. Elapsed: 24.16303809s Jun 8 12:11:54.948: INFO: Pod "pod-subpath-test-downwardapi-47d6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.166010146s STEP: Saw pod success Jun 8 12:11:54.948: INFO: Pod "pod-subpath-test-downwardapi-47d6" satisfied condition "success or failure" Jun 8 12:11:54.950: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-47d6 container test-container-subpath-downwardapi-47d6: STEP: delete the pod Jun 8 12:11:55.033: INFO: Waiting for pod pod-subpath-test-downwardapi-47d6 to disappear Jun 8 12:11:55.095: INFO: Pod pod-subpath-test-downwardapi-47d6 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-47d6 Jun 8 12:11:55.095: INFO: Deleting pod "pod-subpath-test-downwardapi-47d6" in namespace "e2e-tests-subpath-xj5lg" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:11:55.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-xj5lg" for this suite. Jun 8 12:12:01.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:12:01.144: INFO: namespace: e2e-tests-subpath-xj5lg, resource: bindings, ignored listing per whitelist Jun 8 12:12:01.186: INFO: namespace e2e-tests-subpath-xj5lg deletion completed in 6.085481943s • [SLOW TEST:32.536 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] 
Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:12:01.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jun 8 12:12:05.802: INFO: Successfully updated pod "annotationupdate4210cda8-a981-11ea-978f-0242ac110018" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:12:09.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-b6z4x" for this suite. 
Jun 8 12:12:31.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:12:31.893: INFO: namespace: e2e-tests-projected-b6z4x, resource: bindings, ignored listing per whitelist Jun 8 12:12:31.906: INFO: namespace e2e-tests-projected-b6z4x deletion completed in 22.074388539s • [SLOW TEST:30.720 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:12:31.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-f87r8 Jun 8 12:12:36.021: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-f87r8 STEP: checking the pod's current state and verifying that restartCount is present Jun 8 12:12:36.024: INFO: Initial restart count of pod liveness-http is 0 Jun 8 
12:12:52.067: INFO: Restart count of pod e2e-tests-container-probe-f87r8/liveness-http is now 1 (16.042960027s elapsed) Jun 8 12:13:14.108: INFO: Restart count of pod e2e-tests-container-probe-f87r8/liveness-http is now 2 (38.084004911s elapsed) Jun 8 12:13:32.139: INFO: Restart count of pod e2e-tests-container-probe-f87r8/liveness-http is now 3 (56.115384467s elapsed) Jun 8 12:13:52.220: INFO: Restart count of pod e2e-tests-container-probe-f87r8/liveness-http is now 4 (1m16.195912991s elapsed) Jun 8 12:14:52.360: INFO: Restart count of pod e2e-tests-container-probe-f87r8/liveness-http is now 5 (2m16.336450275s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:14:52.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-f87r8" for this suite. Jun 8 12:14:58.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:14:58.441: INFO: namespace: e2e-tests-container-probe-f87r8, resource: bindings, ignored listing per whitelist Jun 8 12:14:58.464: INFO: namespace e2e-tests-container-probe-f87r8 deletion completed in 6.085332257s • [SLOW TEST:146.558 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:14:58.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Jun 8 12:14:58.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kwrz5' Jun 8 12:15:01.447: INFO: stderr: "" Jun 8 12:15:01.447: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 8 12:15:01.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kwrz5' Jun 8 12:15:01.567: INFO: stderr: "" Jun 8 12:15:01.567: INFO: stdout: "update-demo-nautilus-jjz2d update-demo-nautilus-t2c6z " Jun 8 12:15:01.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjz2d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kwrz5' Jun 8 12:15:01.703: INFO: stderr: "" Jun 8 12:15:01.703: INFO: stdout: "" Jun 8 12:15:01.703: INFO: update-demo-nautilus-jjz2d is created but not running Jun 8 12:15:06.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kwrz5' Jun 8 12:15:06.814: INFO: stderr: "" Jun 8 12:15:06.814: INFO: stdout: "update-demo-nautilus-jjz2d update-demo-nautilus-t2c6z " Jun 8 12:15:06.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjz2d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kwrz5' Jun 8 12:15:06.912: INFO: stderr: "" Jun 8 12:15:06.912: INFO: stdout: "true" Jun 8 12:15:06.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjz2d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kwrz5' Jun 8 12:15:07.009: INFO: stderr: "" Jun 8 12:15:07.009: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 8 12:15:07.009: INFO: validating pod update-demo-nautilus-jjz2d Jun 8 12:15:07.012: INFO: got data: { "image": "nautilus.jpg" } Jun 8 12:15:07.012: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 8 12:15:07.012: INFO: update-demo-nautilus-jjz2d is verified up and running Jun 8 12:15:07.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t2c6z -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kwrz5' Jun 8 12:15:07.141: INFO: stderr: "" Jun 8 12:15:07.141: INFO: stdout: "true" Jun 8 12:15:07.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t2c6z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kwrz5' Jun 8 12:15:07.228: INFO: stderr: "" Jun 8 12:15:07.228: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 8 12:15:07.228: INFO: validating pod update-demo-nautilus-t2c6z Jun 8 12:15:07.231: INFO: got data: { "image": "nautilus.jpg" } Jun 8 12:15:07.231: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 8 12:15:07.231: INFO: update-demo-nautilus-t2c6z is verified up and running STEP: using delete to clean up resources Jun 8 12:15:07.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-kwrz5' Jun 8 12:15:07.325: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 8 12:15:07.326: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 8 12:15:07.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-kwrz5' Jun 8 12:15:07.430: INFO: stderr: "No resources found.\n" Jun 8 12:15:07.430: INFO: stdout: "" Jun 8 12:15:07.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-kwrz5 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 8 12:15:07.536: INFO: stderr: "" Jun 8 12:15:07.536: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:15:07.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-kwrz5" for this suite. 
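The Update Demo validation above fetches data from each pod (`got data: { "image": "nautilus.jpg" }`) and compares it against the expected value before declaring the pod "verified up and running". A minimal sketch of that comparison step, assuming the served payload is the JSON shown in the log (this is an illustrative helper, not the framework's implementation):

```python
import json

def validate_pod_data(raw, expected_image="nautilus.jpg"):
    """Unmarshal the JSON served by the pod and check its image field.

    expected_image defaults to the value seen in the log output; both the
    function and its signature are illustrative assumptions.
    """
    data = json.loads(raw)
    got = data["image"]
    if got != expected_image:
        raise AssertionError(f"got {got!r}, expecting {expected_image!r}")
    return got
```

Only once every replica passes this check does the test proceed to the `delete --grace-period=0 --force` cleanup shown above.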
Jun 8 12:15:29.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:15:29.589: INFO: namespace: e2e-tests-kubectl-kwrz5, resource: bindings, ignored listing per whitelist Jun 8 12:15:29.638: INFO: namespace e2e-tests-kubectl-kwrz5 deletion completed in 22.098968367s • [SLOW TEST:31.173 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:15:29.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-be5aae4b-a981-11ea-978f-0242ac110018 STEP: Creating secret with name secret-projected-all-test-volume-be5aae18-a981-11ea-978f-0242ac110018 STEP: Creating a pod to test Check all projections for projected volume plugin Jun 8 12:15:29.794: INFO: Waiting up to 
5m0s for pod "projected-volume-be5aada8-a981-11ea-978f-0242ac110018" in namespace "e2e-tests-projected-k2h6p" to be "success or failure" Jun 8 12:15:29.813: INFO: Pod "projected-volume-be5aada8-a981-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 19.649828ms Jun 8 12:15:31.955: INFO: Pod "projected-volume-be5aada8-a981-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161098716s Jun 8 12:15:33.958: INFO: Pod "projected-volume-be5aada8-a981-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.163833423s STEP: Saw pod success Jun 8 12:15:33.958: INFO: Pod "projected-volume-be5aada8-a981-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 12:15:33.960: INFO: Trying to get logs from node hunter-worker pod projected-volume-be5aada8-a981-11ea-978f-0242ac110018 container projected-all-volume-test: STEP: delete the pod Jun 8 12:15:34.021: INFO: Waiting for pod projected-volume-be5aada8-a981-11ea-978f-0242ac110018 to disappear Jun 8 12:15:34.025: INFO: Pod projected-volume-be5aada8-a981-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:15:34.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-k2h6p" for this suite. 
Jun 8 12:15:40.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:15:40.071: INFO: namespace: e2e-tests-projected-k2h6p, resource: bindings, ignored listing per whitelist Jun 8 12:15:40.133: INFO: namespace e2e-tests-projected-k2h6p deletion completed in 6.104210631s • [SLOW TEST:10.495 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:15:40.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0608 12:16:20.343053 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 8 12:16:20.343: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:16:20.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-969km" for this suite. 
Jun 8 12:16:32.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:16:32.406: INFO: namespace: e2e-tests-gc-969km, resource: bindings, ignored listing per whitelist Jun 8 12:16:32.423: INFO: namespace e2e-tests-gc-969km deletion completed in 12.077572884s • [SLOW TEST:52.290 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:16:32.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Jun 8 12:16:32.571: INFO: Waiting up to 5m0s for pod "var-expansion-e3c8a4ca-a981-11ea-978f-0242ac110018" in namespace "e2e-tests-var-expansion-g7mb9" to be "success or failure" Jun 8 12:16:32.580: INFO: Pod "var-expansion-e3c8a4ca-a981-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.163098ms Jun 8 12:16:34.584: INFO: Pod "var-expansion-e3c8a4ca-a981-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.012872942s Jun 8 12:16:36.588: INFO: Pod "var-expansion-e3c8a4ca-a981-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016748232s STEP: Saw pod success Jun 8 12:16:36.588: INFO: Pod "var-expansion-e3c8a4ca-a981-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 12:16:36.591: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-e3c8a4ca-a981-11ea-978f-0242ac110018 container dapi-container: STEP: delete the pod Jun 8 12:16:36.790: INFO: Waiting for pod var-expansion-e3c8a4ca-a981-11ea-978f-0242ac110018 to disappear Jun 8 12:16:36.818: INFO: Pod var-expansion-e3c8a4ca-a981-11ea-978f-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:16:36.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-g7mb9" for this suite. Jun 8 12:16:42.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:16:42.841: INFO: namespace: e2e-tests-var-expansion-g7mb9, resource: bindings, ignored listing per whitelist Jun 8 12:16:42.894: INFO: namespace e2e-tests-var-expansion-g7mb9 deletion completed in 6.073210906s • [SLOW TEST:10.471 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:16:42.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-ea075eea-a981-11ea-978f-0242ac110018 STEP: Creating a pod to test consume secrets Jun 8 12:16:43.066: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ea095cca-a981-11ea-978f-0242ac110018" in namespace "e2e-tests-projected-k78kz" to be "success or failure" Jun 8 12:16:43.080: INFO: Pod "pod-projected-secrets-ea095cca-a981-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 13.927468ms Jun 8 12:16:45.213: INFO: Pod "pod-projected-secrets-ea095cca-a981-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147593346s Jun 8 12:16:47.221: INFO: Pod "pod-projected-secrets-ea095cca-a981-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.155782412s STEP: Saw pod success Jun 8 12:16:47.221: INFO: Pod "pod-projected-secrets-ea095cca-a981-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 12:16:47.224: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-ea095cca-a981-11ea-978f-0242ac110018 container projected-secret-volume-test: STEP: delete the pod Jun 8 12:16:47.406: INFO: Waiting for pod pod-projected-secrets-ea095cca-a981-11ea-978f-0242ac110018 to disappear Jun 8 12:16:47.411: INFO: Pod pod-projected-secrets-ea095cca-a981-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:16:47.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-k78kz" for this suite. Jun 8 12:16:53.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:16:53.443: INFO: namespace: e2e-tests-projected-k78kz, resource: bindings, ignored listing per whitelist Jun 8 12:16:53.487: INFO: namespace e2e-tests-projected-k78kz deletion completed in 6.073193561s • [SLOW TEST:10.592 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client Jun 8 12:16:53.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Jun 8 12:16:53.576: INFO: Waiting up to 5m0s for pod "pod-f04c7925-a981-11ea-978f-0242ac110018" in namespace "e2e-tests-emptydir-56j4w" to be "success or failure" Jun 8 12:16:53.580: INFO: Pod "pod-f04c7925-a981-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.94594ms Jun 8 12:16:55.583: INFO: Pod "pod-f04c7925-a981-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007621082s Jun 8 12:16:57.588: INFO: Pod "pod-f04c7925-a981-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011799237s STEP: Saw pod success Jun 8 12:16:57.588: INFO: Pod "pod-f04c7925-a981-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 12:16:57.591: INFO: Trying to get logs from node hunter-worker pod pod-f04c7925-a981-11ea-978f-0242ac110018 container test-container: STEP: delete the pod Jun 8 12:16:57.611: INFO: Waiting for pod pod-f04c7925-a981-11ea-978f-0242ac110018 to disappear Jun 8 12:16:57.642: INFO: Pod pod-f04c7925-a981-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:16:57.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-56j4w" for this suite. 
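The "volume on tmpfs should have the correct mode" spec above succeeds because the test container prints the mounted volume's permission bits and the framework compares them against the expected mode. A rough local illustration of rendering permission bits in the same `ls`-style form — the actual check runs inside the pod's test image, so this is only a stand-in:

```python
import os
import stat
import tempfile

def volume_mode_string(path):
    """Render a path's permission bits as an ls-style string (e.g.
    '-rw-r--r--'), similar in spirit to what the e2e test container
    reports for the mounted emptyDir volume."""
    return stat.filemode(os.stat(path).st_mode)

# Demonstrate on a throwaway file chmod'ed to 0644.
with tempfile.NamedTemporaryFile(delete=False) as f:
    demo_path = f.name
os.chmod(demo_path, 0o644)
mode = volume_mode_string(demo_path)
os.remove(demo_path)
```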
Jun 8 12:17:03.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:17:03.713: INFO: namespace: e2e-tests-emptydir-56j4w, resource: bindings, ignored listing per whitelist Jun 8 12:17:03.772: INFO: namespace e2e-tests-emptydir-56j4w deletion completed in 6.127096449s • [SLOW TEST:10.286 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:17:03.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-f67096df-a981-11ea-978f-0242ac110018 STEP: Creating a pod to test consume secrets Jun 8 12:17:03.889: INFO: Waiting up to 5m0s for pod "pod-secrets-f670faf9-a981-11ea-978f-0242ac110018" in namespace "e2e-tests-secrets-gf4cg" to be "success or failure" Jun 8 12:17:03.904: INFO: Pod "pod-secrets-f670faf9-a981-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.433165ms Jun 8 12:17:05.908: INFO: Pod "pod-secrets-f670faf9-a981-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018066934s Jun 8 12:17:07.912: INFO: Pod "pod-secrets-f670faf9-a981-11ea-978f-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.022332321s Jun 8 12:17:09.915: INFO: Pod "pod-secrets-f670faf9-a981-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025983026s STEP: Saw pod success Jun 8 12:17:09.915: INFO: Pod "pod-secrets-f670faf9-a981-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 12:17:09.918: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-f670faf9-a981-11ea-978f-0242ac110018 container secret-env-test: STEP: delete the pod Jun 8 12:17:09.934: INFO: Waiting for pod pod-secrets-f670faf9-a981-11ea-978f-0242ac110018 to disappear Jun 8 12:17:09.976: INFO: Pod pod-secrets-f670faf9-a981-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:17:09.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-gf4cg" for this suite. 
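[Editor's note] The Secrets test above injects a Secret key into a container environment variable via `secretKeyRef`. A sketch of the objects involved, with illustrative names and data (the real test generates its own):

```yaml
# Hypothetical Secret plus a pod consuming it as an env var.
apiVersion: v1
kind: Secret
metadata:
  name: secret-demo                # illustrative name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox:1.29            # assumed image, not from the log
    command: ["env"]               # prints SECRET_DATA=value-1 among the env vars
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-demo
          key: data-1
```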
Jun 8 12:17:16.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:17:16.081: INFO: namespace: e2e-tests-secrets-gf4cg, resource: bindings, ignored listing per whitelist Jun 8 12:17:16.089: INFO: namespace e2e-tests-secrets-gf4cg deletion completed in 6.110216921s • [SLOW TEST:12.316 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:17:16.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Jun 8 12:17:16.244: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 8 12:17:16.253: INFO: Waiting for terminating namespaces to be deleted... 
Jun 8 12:17:16.255: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Jun 8 12:17:16.259: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) Jun 8 12:17:16.259: INFO: Container kube-proxy ready: true, restart count 0 Jun 8 12:17:16.259: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 8 12:17:16.259: INFO: Container kindnet-cni ready: true, restart count 0 Jun 8 12:17:16.259: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jun 8 12:17:16.259: INFO: Container coredns ready: true, restart count 0 Jun 8 12:17:16.259: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Jun 8 12:17:16.263: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 8 12:17:16.263: INFO: Container kube-proxy ready: true, restart count 0 Jun 8 12:17:16.263: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 8 12:17:16.263: INFO: Container kindnet-cni ready: true, restart count 0 Jun 8 12:17:16.263: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jun 8 12:17:16.263: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.161690b7893fe74c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
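[Editor's note] The FailedScheduling event above ("0/3 nodes are available: 3 node(s) didn't match node selector") is what the scheduler emits when a pod's `nodeSelector` matches no node label. A sketch of such a pod (the selector label and image are assumptions; no node in the cluster carries the label):

```yaml
# Hypothetical pod whose nodeSelector matches no node, reproducing
# a FailedScheduling event like the one logged above.
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    nonexistent-label: "true"      # assumed label; present on no node
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1    # assumed minimal image
```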
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:17:17.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-tplmv" for this suite. Jun 8 12:17:23.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:17:23.338: INFO: namespace: e2e-tests-sched-pred-tplmv, resource: bindings, ignored listing per whitelist Jun 8 12:17:23.382: INFO: namespace e2e-tests-sched-pred-tplmv deletion completed in 6.096869658s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.293 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:17:23.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jun 8 
12:17:23.500: INFO: Waiting up to 5m0s for pod "downward-api-022388f3-a982-11ea-978f-0242ac110018" in namespace "e2e-tests-downward-api-2mf49" to be "success or failure" Jun 8 12:17:23.515: INFO: Pod "downward-api-022388f3-a982-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.721301ms Jun 8 12:17:25.520: INFO: Pod "downward-api-022388f3-a982-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019582392s Jun 8 12:17:27.523: INFO: Pod "downward-api-022388f3-a982-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023266985s STEP: Saw pod success Jun 8 12:17:27.523: INFO: Pod "downward-api-022388f3-a982-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 12:17:27.527: INFO: Trying to get logs from node hunter-worker2 pod downward-api-022388f3-a982-11ea-978f-0242ac110018 container dapi-container: STEP: delete the pod Jun 8 12:17:27.589: INFO: Waiting for pod downward-api-022388f3-a982-11ea-978f-0242ac110018 to disappear Jun 8 12:17:27.741: INFO: Pod downward-api-022388f3-a982-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:17:27.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-2mf49" for this suite. 
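[Editor's note] The Downward API test above exposes the node's IP to the container through a `fieldRef` on `status.hostIP`. A minimal sketch (names, image, and env var are illustrative):

```yaml
# Hypothetical pod using the downward API to read the host IP.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29            # assumed image, not from the log
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP # downward API field for the node's IP
```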
Jun 8 12:17:33.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:17:33.900: INFO: namespace: e2e-tests-downward-api-2mf49, resource: bindings, ignored listing per whitelist Jun 8 12:17:33.907: INFO: namespace e2e-tests-downward-api-2mf49 deletion completed in 6.151160404s • [SLOW TEST:10.524 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:17:33.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 8 12:17:40.535: INFO: Successfully updated pod "pod-update-activedeadlineseconds-0866ff88-a982-11ea-978f-0242ac110018" Jun 8 12:17:40.535: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-0866ff88-a982-11ea-978f-0242ac110018" in namespace "e2e-tests-pods-x9trd" to be "terminated due to 
deadline exceeded" Jun 8 12:17:40.693: INFO: Pod "pod-update-activedeadlineseconds-0866ff88-a982-11ea-978f-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 157.789258ms Jun 8 12:17:42.837: INFO: Pod "pod-update-activedeadlineseconds-0866ff88-a982-11ea-978f-0242ac110018": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.301584867s Jun 8 12:17:42.837: INFO: Pod "pod-update-activedeadlineseconds-0866ff88-a982-11ea-978f-0242ac110018" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:17:42.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-x9trd" for this suite. Jun 8 12:17:48.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:17:48.895: INFO: namespace: e2e-tests-pods-x9trd, resource: bindings, ignored listing per whitelist Jun 8 12:17:48.944: INFO: namespace e2e-tests-pods-x9trd deletion completed in 6.104729093s • [SLOW TEST:15.038 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:17:48.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-875m8 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-875m8 STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-875m8 STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-875m8 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-875m8 Jun 8 12:17:53.174: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-875m8, name: ss-0, uid: 1349e9e0-a982-11ea-99e8-0242ac110002, status phase: Pending. Waiting for statefulset controller to delete. Jun 8 12:18:01.247: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-875m8, name: ss-0, uid: 1349e9e0-a982-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. Jun 8 12:18:01.260: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-875m8, name: ss-0, uid: 1349e9e0-a982-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. 
Jun 8 12:18:01.283: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-875m8 STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-875m8 STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-875m8 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jun 8 12:18:05.418: INFO: Deleting all statefulset in ns e2e-tests-statefulset-875m8 Jun 8 12:18:05.420: INFO: Scaling statefulset ss to 0 Jun 8 12:18:25.458: INFO: Waiting for statefulset status.replicas updated to 0 Jun 8 12:18:25.461: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:18:25.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-875m8" for this suite. 
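[Editor's note] The StatefulSet test above forces an eviction-like failure by scheduling ss-0 onto a node where another pod already holds the same `hostPort`, then checks that the controller deletes and recreates the failed pod. A sketch of the conflicting template (node name matches the log; port, labels, and image are assumptions):

```yaml
# Hypothetical StatefulSet whose pod pins a hostPort; if another pod on the
# target node already binds that port, ss-0 fails and is recreated.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels: {app: ss-demo}    # illustrative labels
  template:
    metadata:
      labels: {app: ss-demo}
    spec:
      nodeName: hunter-worker      # pin to the node holding the conflicting pod
      containers:
      - name: nginx
        image: nginx:1.14-alpine   # assumed; matches the image family in the log
        ports:
        - containerPort: 21017     # illustrative port number
          hostPort: 21017
```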
Jun 8 12:18:31.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:18:31.544: INFO: namespace: e2e-tests-statefulset-875m8, resource: bindings, ignored listing per whitelist Jun 8 12:18:31.559: INFO: namespace e2e-tests-statefulset-875m8 deletion completed in 6.08079386s • [SLOW TEST:42.615 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:18:31.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-v5xc5 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Jun 8 12:18:31.672: INFO: Found 0 stateful pods, waiting for 3 Jun 8 12:18:41.678: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 8 12:18:41.678: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 8 12:18:41.678: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jun 8 12:18:51.678: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 8 12:18:51.678: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 8 12:18:51.678: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jun 8 12:18:51.747: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jun 8 12:18:51.844: INFO: Updating stateful set ss2 Jun 8 12:18:51.859: INFO: Waiting for Pod e2e-tests-statefulset-v5xc5/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Jun 8 12:19:02.545: INFO: Found 2 stateful pods, waiting for 3 Jun 8 12:19:12.549: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 8 12:19:12.549: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 8 12:19:12.550: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jun 8 12:19:12.595: INFO: Updating stateful set ss2 Jun 8 12:19:12.626: INFO: Waiting for Pod 
e2e-tests-statefulset-v5xc5/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 8 12:19:22.671: INFO: Updating stateful set ss2 Jun 8 12:19:22.693: INFO: Waiting for StatefulSet e2e-tests-statefulset-v5xc5/ss2 to complete update Jun 8 12:19:22.693: INFO: Waiting for Pod e2e-tests-statefulset-v5xc5/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 8 12:19:32.699: INFO: Waiting for StatefulSet e2e-tests-statefulset-v5xc5/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jun 8 12:19:42.702: INFO: Deleting all statefulset in ns e2e-tests-statefulset-v5xc5 Jun 8 12:19:42.749: INFO: Scaling statefulset ss2 to 0 Jun 8 12:20:02.780: INFO: Waiting for statefulset status.replicas updated to 0 Jun 8 12:20:02.783: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:20:02.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-v5xc5" for this suite. 
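[Editor's note] The canary and phased rollout above are driven by the `rollingUpdate.partition` field: only pods with an ordinal greater than or equal to the partition receive the new template revision, and lowering the partition phases the rollout across ss2-2, ss2-1, ss2-0. A sketch of the update strategy (labels and structure are illustrative; the image matches the update in the log):

```yaml
# Hypothetical updateStrategy for a canary: with partition: 2, only ss2-2
# gets the new revision; decreasing partition rolls the update onward.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test
  replicas: 3
  selector:
    matchLabels: {app: ss2-demo}   # illustrative labels
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2                 # ordinals >= 2 update first (the canary)
  template:
    metadata:
      labels: {app: ss2-demo}
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine  # updated image from the log
```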
Jun 8 12:20:08.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:20:08.934: INFO: namespace: e2e-tests-statefulset-v5xc5, resource: bindings, ignored listing per whitelist Jun 8 12:20:08.952: INFO: namespace e2e-tests-statefulset-v5xc5 deletion completed in 6.151869429s • [SLOW TEST:97.392 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:20:08.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-64d1c85c-a982-11ea-978f-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 8 12:20:09.065: INFO: Waiting up to 5m0s for pod "pod-configmaps-64d27d6e-a982-11ea-978f-0242ac110018" in namespace "e2e-tests-configmap-jlzhw" to be 
"success or failure" Jun 8 12:20:09.079: INFO: Pod "pod-configmaps-64d27d6e-a982-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.546765ms Jun 8 12:20:11.083: INFO: Pod "pod-configmaps-64d27d6e-a982-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018527295s Jun 8 12:20:13.087: INFO: Pod "pod-configmaps-64d27d6e-a982-11ea-978f-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.022426376s Jun 8 12:20:15.091: INFO: Pod "pod-configmaps-64d27d6e-a982-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02690857s STEP: Saw pod success Jun 8 12:20:15.092: INFO: Pod "pod-configmaps-64d27d6e-a982-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 12:20:15.095: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-64d27d6e-a982-11ea-978f-0242ac110018 container configmap-volume-test: STEP: delete the pod Jun 8 12:20:15.158: INFO: Waiting for pod pod-configmaps-64d27d6e-a982-11ea-978f-0242ac110018 to disappear Jun 8 12:20:15.166: INFO: Pod pod-configmaps-64d27d6e-a982-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:20:15.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-jlzhw" for this suite. 
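[Editor's note] The ConfigMap test above uses `items` to remap a key to a custom path inside the volume and `mode` to set a per-file permission. A sketch with illustrative names, data, and mode:

```yaml
# Hypothetical ConfigMap plus a pod mounting one key at a mapped path
# with an explicit per-item file mode.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-demo             # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-vol-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29            # assumed image, not from the log
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-demo
      items:
      - key: data-1
        path: path/to/data         # mapping: key remapped to a nested path
        mode: 0400                 # per-item file mode (the "Item mode" under test)
```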
Jun 8 12:20:21.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:20:21.238: INFO: namespace: e2e-tests-configmap-jlzhw, resource: bindings, ignored listing per whitelist Jun 8 12:20:21.253: INFO: namespace e2e-tests-configmap-jlzhw deletion completed in 6.084280354s • [SLOW TEST:12.301 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:20:21.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Jun 8 12:20:21.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-hhqtz run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' 
Jun 8 12:20:24.644: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0608 12:20:24.575167 2821 log.go:172] (0xc000138630) (0xc0005a9040) Create stream\nI0608 12:20:24.575226 2821 log.go:172] (0xc000138630) (0xc0005a9040) Stream added, broadcasting: 1\nI0608 12:20:24.577830 2821 log.go:172] (0xc000138630) Reply frame received for 1\nI0608 12:20:24.577876 2821 log.go:172] (0xc000138630) (0xc0005a90e0) Create stream\nI0608 12:20:24.577892 2821 log.go:172] (0xc000138630) (0xc0005a90e0) Stream added, broadcasting: 3\nI0608 12:20:24.578736 2821 log.go:172] (0xc000138630) Reply frame received for 3\nI0608 12:20:24.578769 2821 log.go:172] (0xc000138630) (0xc0005f8aa0) Create stream\nI0608 12:20:24.578780 2821 log.go:172] (0xc000138630) (0xc0005f8aa0) Stream added, broadcasting: 5\nI0608 12:20:24.579600 2821 log.go:172] (0xc000138630) Reply frame received for 5\nI0608 12:20:24.579642 2821 log.go:172] (0xc000138630) (0xc0004f6000) Create stream\nI0608 12:20:24.579652 2821 log.go:172] (0xc000138630) (0xc0004f6000) Stream added, broadcasting: 7\nI0608 12:20:24.580430 2821 log.go:172] (0xc000138630) Reply frame received for 7\nI0608 12:20:24.580580 2821 log.go:172] (0xc0005a90e0) (3) Writing data frame\nI0608 12:20:24.580756 2821 log.go:172] (0xc0005a90e0) (3) Writing data frame\nI0608 12:20:24.581771 2821 log.go:172] (0xc000138630) Data frame received for 5\nI0608 12:20:24.581797 2821 log.go:172] (0xc0005f8aa0) (5) Data frame handling\nI0608 12:20:24.581816 2821 log.go:172] (0xc0005f8aa0) (5) Data frame sent\nI0608 12:20:24.582392 2821 log.go:172] (0xc000138630) Data frame received for 5\nI0608 12:20:24.582406 2821 log.go:172] (0xc0005f8aa0) (5) Data frame handling\nI0608 12:20:24.582414 2821 log.go:172] (0xc0005f8aa0) (5) Data frame sent\nI0608 12:20:24.615612 2821 log.go:172] (0xc000138630) Data frame 
received for 7\nI0608 12:20:24.615668 2821 log.go:172] (0xc0004f6000) (7) Data frame handling\nI0608 12:20:24.615695 2821 log.go:172] (0xc000138630) Data frame received for 5\nI0608 12:20:24.615705 2821 log.go:172] (0xc0005f8aa0) (5) Data frame handling\nI0608 12:20:24.616027 2821 log.go:172] (0xc000138630) Data frame received for 1\nI0608 12:20:24.616054 2821 log.go:172] (0xc0005a9040) (1) Data frame handling\nI0608 12:20:24.616069 2821 log.go:172] (0xc0005a9040) (1) Data frame sent\nI0608 12:20:24.616087 2821 log.go:172] (0xc000138630) (0xc0005a9040) Stream removed, broadcasting: 1\nI0608 12:20:24.616264 2821 log.go:172] (0xc000138630) (0xc0005a9040) Stream removed, broadcasting: 1\nI0608 12:20:24.616288 2821 log.go:172] (0xc000138630) (0xc0005a90e0) Stream removed, broadcasting: 3\nI0608 12:20:24.616308 2821 log.go:172] (0xc000138630) (0xc0005f8aa0) Stream removed, broadcasting: 5\nI0608 12:20:24.616460 2821 log.go:172] (0xc000138630) Go away received\nI0608 12:20:24.616654 2821 log.go:172] (0xc000138630) (0xc0004f6000) Stream removed, broadcasting: 7\n" Jun 8 12:20:24.645: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:20:26.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-hhqtz" for this suite. 
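[Editor's note] The stderr above warns that `kubectl run --generator=job/v1` is deprecated. The object it creates can be declared directly as a Job; a sketch equivalent to the command in the log (the spec structure is an assumption reconstructed from the flags shown):

```yaml
# Hypothetical Job equivalent to the deprecated
# `kubectl run --generator=job/v1 --restart=OnFailure --stdin` invocation.
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
        stdin: true                # accept attached stdin, as --stdin did
```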
Jun 8 12:20:32.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:20:32.712: INFO: namespace: e2e-tests-kubectl-hhqtz, resource: bindings, ignored listing per whitelist Jun 8 12:20:32.797: INFO: namespace e2e-tests-kubectl-hhqtz deletion completed in 6.143790431s • [SLOW TEST:11.544 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:20:32.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Jun 8 12:20:32.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Jun 8 12:20:33.051: INFO: stderr: "" Jun 8 12:20:33.051: INFO: stdout: 
"admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:20:33.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-lsfqx" for this suite. Jun 8 12:20:39.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:20:39.163: INFO: namespace: e2e-tests-kubectl-lsfqx, resource: bindings, ignored listing per whitelist Jun 8 12:20:39.171: INFO: namespace e2e-tests-kubectl-lsfqx deletion completed in 6.115867339s • [SLOW TEST:6.374 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] 
ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:20:39.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-76d12170-a982-11ea-978f-0242ac110018 Jun 8 12:20:39.279: INFO: Pod name my-hostname-basic-76d12170-a982-11ea-978f-0242ac110018: Found 0 pods out of 1 Jun 8 12:20:44.284: INFO: Pod name my-hostname-basic-76d12170-a982-11ea-978f-0242ac110018: Found 1 pods out of 1 Jun 8 12:20:44.284: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-76d12170-a982-11ea-978f-0242ac110018" are running Jun 8 12:20:44.286: INFO: Pod "my-hostname-basic-76d12170-a982-11ea-978f-0242ac110018-tkjdf" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-08 12:20:39 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-08 12:20:42 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-08 12:20:42 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-08 12:20:39 +0000 UTC Reason: Message:}]) Jun 8 12:20:44.286: INFO: Trying to dial the pod Jun 8 12:20:49.298: INFO: Controller my-hostname-basic-76d12170-a982-11ea-978f-0242ac110018: Got expected result from replica 1 [my-hostname-basic-76d12170-a982-11ea-978f-0242ac110018-tkjdf]: "my-hostname-basic-76d12170-a982-11ea-978f-0242ac110018-tkjdf", 1 of 1 
required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:20:49.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-p7x26" for this suite. Jun 8 12:20:55.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:20:55.360: INFO: namespace: e2e-tests-replication-controller-p7x26, resource: bindings, ignored listing per whitelist Jun 8 12:20:55.389: INFO: namespace e2e-tests-replication-controller-p7x26 deletion completed in 6.087711797s • [SLOW TEST:16.217 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:20:55.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod Jun 8 12:21:01.587: INFO: Pod pod-hostip-807dc930-a982-11ea-978f-0242ac110018 has hostIP: 172.17.0.4 
[AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:21:01.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-tgxw9" for this suite. Jun 8 12:21:23.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:21:23.645: INFO: namespace: e2e-tests-pods-tgxw9, resource: bindings, ignored listing per whitelist Jun 8 12:21:23.682: INFO: namespace e2e-tests-pods-tgxw9 deletion completed in 22.091599193s • [SLOW TEST:28.293 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:21:23.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-l64kb/configmap-test-915a4c07-a982-11ea-978f-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 8 12:21:23.786: INFO: Waiting up to 5m0s for pod "pod-configmaps-915be3f8-a982-11ea-978f-0242ac110018" in namespace "e2e-tests-configmap-l64kb" to be "success or 
failure" Jun 8 12:21:23.802: INFO: Pod "pod-configmaps-915be3f8-a982-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.085214ms Jun 8 12:21:25.808: INFO: Pod "pod-configmaps-915be3f8-a982-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021579315s Jun 8 12:21:27.812: INFO: Pod "pod-configmaps-915be3f8-a982-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025947746s STEP: Saw pod success Jun 8 12:21:27.812: INFO: Pod "pod-configmaps-915be3f8-a982-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 12:21:27.815: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-915be3f8-a982-11ea-978f-0242ac110018 container env-test: STEP: delete the pod Jun 8 12:21:27.869: INFO: Waiting for pod pod-configmaps-915be3f8-a982-11ea-978f-0242ac110018 to disappear Jun 8 12:21:27.874: INFO: Pod pod-configmaps-915be3f8-a982-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:21:27.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-l64kb" for this suite. 
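Several specs above wait "up to 5m0s" for a pod to satisfy the "success or failure" condition by repeatedly polling its phase and logging the elapsed time. A minimal sketch of that polling pattern (the `get_phase` stub and the function name are illustrative stand-ins for the framework's API-server lookups, not its real code):

```python
import time

def wait_for_pod_completion(get_phase, timeout=300.0, poll=0.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll a pod's phase until it is Succeeded or Failed, or the timeout expires.

    get_phase stands in for querying pod.status.phase from the API server;
    timeout=300.0 mirrors the 5m0s wait seen in the log.
    """
    deadline = clock() + timeout
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase  # terminal phase reached
        if clock() >= deadline:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(poll)

# Phase sequence as observed in the log above: Pending, Pending, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
final_phase = wait_for_pod_completion(lambda: next(phases))
```

The poll interval defaults to 0.0 here only so the example runs instantly; a real wait loop would sleep between polls.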
Jun 8 12:21:33.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:21:34.002: INFO: namespace: e2e-tests-configmap-l64kb, resource: bindings, ignored listing per whitelist Jun 8 12:21:34.051: INFO: namespace e2e-tests-configmap-l64kb deletion completed in 6.174186426s • [SLOW TEST:10.369 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:21:34.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 8 12:21:34.230: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/ pods/ (200; 5.422506ms)
Jun 8 12:21:34.234: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.755881ms)
Jun 8 12:21:34.237: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.652794ms)
Jun 8 12:21:34.241: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 4.007701ms)
Jun 8 12:21:34.246: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 4.172021ms)
Jun 8 12:21:34.248: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.761887ms)
Jun 8 12:21:34.251: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.83867ms)
Jun 8 12:21:34.254: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.681741ms)
Jun 8 12:21:34.257: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.014959ms)
Jun 8 12:21:34.260: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.774349ms)
Jun 8 12:21:34.263: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.109299ms)
Jun 8 12:21:34.266: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.224284ms)
Jun 8 12:21:34.269: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.27263ms)
Jun 8 12:21:34.273: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.501772ms)
Jun 8 12:21:34.276: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.198317ms)
Jun 8 12:21:34.279: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.185672ms)
Jun 8 12:21:34.283: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.988829ms)
Jun 8 12:21:34.287: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.455906ms)
Jun 8 12:21:34.291: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.71476ms)
Jun 8 12:21:34.294: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/
(200; 3.685008ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:21:34.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-264nh" for this suite. Jun 8 12:21:40.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:21:40.370: INFO: namespace: e2e-tests-proxy-264nh, resource: bindings, ignored listing per whitelist Jun 8 12:21:40.392: INFO: namespace e2e-tests-proxy-264nh deletion completed in 6.093783059s • [SLOW TEST:6.341 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:21:40.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 8 12:21:40.517: INFO: Waiting up to 5m0s for pod "pod-9b5327eb-a982-11ea-978f-0242ac110018" in namespace 
"e2e-tests-emptydir-nkfpx" to be "success or failure" Jun 8 12:21:40.527: INFO: Pod "pod-9b5327eb-a982-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.00677ms Jun 8 12:21:42.649: INFO: Pod "pod-9b5327eb-a982-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131380458s Jun 8 12:21:44.654: INFO: Pod "pod-9b5327eb-a982-11ea-978f-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.13610607s Jun 8 12:21:46.658: INFO: Pod "pod-9b5327eb-a982-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.140295276s STEP: Saw pod success Jun 8 12:21:46.658: INFO: Pod "pod-9b5327eb-a982-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 12:21:46.661: INFO: Trying to get logs from node hunter-worker2 pod pod-9b5327eb-a982-11ea-978f-0242ac110018 container test-container: STEP: delete the pod Jun 8 12:21:46.783: INFO: Waiting for pod pod-9b5327eb-a982-11ea-978f-0242ac110018 to disappear Jun 8 12:21:46.796: INFO: Pod pod-9b5327eb-a982-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:21:46.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-nkfpx" for this suite. 
Jun 8 12:21:52.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:21:52.896: INFO: namespace: e2e-tests-emptydir-nkfpx, resource: bindings, ignored listing per whitelist Jun 8 12:21:52.950: INFO: namespace e2e-tests-emptydir-nkfpx deletion completed in 6.150912663s • [SLOW TEST:12.558 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:21:52.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 8 12:21:53.070: INFO: Waiting up to 5m0s for pod "pod-a2cd5a78-a982-11ea-978f-0242ac110018" in namespace "e2e-tests-emptydir-lmwlq" to be "success or failure" Jun 8 12:21:53.078: INFO: Pod "pod-a2cd5a78-a982-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 7.566932ms Jun 8 12:21:55.158: INFO: Pod "pod-a2cd5a78-a982-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.087764805s Jun 8 12:21:57.445: INFO: Pod "pod-a2cd5a78-a982-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.37506282s Jun 8 12:21:59.449: INFO: Pod "pod-a2cd5a78-a982-11ea-978f-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 6.379099017s Jun 8 12:22:01.627: INFO: Pod "pod-a2cd5a78-a982-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.55623434s STEP: Saw pod success Jun 8 12:22:01.627: INFO: Pod "pod-a2cd5a78-a982-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 12:22:01.629: INFO: Trying to get logs from node hunter-worker pod pod-a2cd5a78-a982-11ea-978f-0242ac110018 container test-container: STEP: delete the pod Jun 8 12:22:01.679: INFO: Waiting for pod pod-a2cd5a78-a982-11ea-978f-0242ac110018 to disappear Jun 8 12:22:01.835: INFO: Pod pod-a2cd5a78-a982-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:22:01.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-lmwlq" for this suite. 
Jun 8 12:22:07.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:22:07.969: INFO: namespace: e2e-tests-emptydir-lmwlq, resource: bindings, ignored listing per whitelist Jun 8 12:22:07.994: INFO: namespace e2e-tests-emptydir-lmwlq deletion completed in 6.15473548s • [SLOW TEST:15.044 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:22:07.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-cml8s STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 8 12:22:08.104: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 8 12:22:38.218: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.193 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-cml8s PodName:host-test-container-pod ContainerName:hostexec 
Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 8 12:22:38.218: INFO: >>> kubeConfig: /root/.kube/config I0608 12:22:38.247212 6 log.go:172] (0xc001b162c0) (0xc0024c14a0) Create stream I0608 12:22:38.247250 6 log.go:172] (0xc001b162c0) (0xc0024c14a0) Stream added, broadcasting: 1 I0608 12:22:38.248600 6 log.go:172] (0xc001b162c0) Reply frame received for 1 I0608 12:22:38.248644 6 log.go:172] (0xc001b162c0) (0xc002201f40) Create stream I0608 12:22:38.248662 6 log.go:172] (0xc001b162c0) (0xc002201f40) Stream added, broadcasting: 3 I0608 12:22:38.249660 6 log.go:172] (0xc001b162c0) Reply frame received for 3 I0608 12:22:38.249707 6 log.go:172] (0xc001b162c0) (0xc0024c1540) Create stream I0608 12:22:38.249719 6 log.go:172] (0xc001b162c0) (0xc0024c1540) Stream added, broadcasting: 5 I0608 12:22:38.251089 6 log.go:172] (0xc001b162c0) Reply frame received for 5 I0608 12:22:39.347842 6 log.go:172] (0xc001b162c0) Data frame received for 3 I0608 12:22:39.347872 6 log.go:172] (0xc002201f40) (3) Data frame handling I0608 12:22:39.347887 6 log.go:172] (0xc002201f40) (3) Data frame sent I0608 12:22:39.348217 6 log.go:172] (0xc001b162c0) Data frame received for 5 I0608 12:22:39.348261 6 log.go:172] (0xc0024c1540) (5) Data frame handling I0608 12:22:39.348477 6 log.go:172] (0xc001b162c0) Data frame received for 3 I0608 12:22:39.348510 6 log.go:172] (0xc002201f40) (3) Data frame handling I0608 12:22:39.350265 6 log.go:172] (0xc001b162c0) Data frame received for 1 I0608 12:22:39.350299 6 log.go:172] (0xc0024c14a0) (1) Data frame handling I0608 12:22:39.350317 6 log.go:172] (0xc0024c14a0) (1) Data frame sent I0608 12:22:39.350332 6 log.go:172] (0xc001b162c0) (0xc0024c14a0) Stream removed, broadcasting: 1 I0608 12:22:39.350371 6 log.go:172] (0xc001b162c0) Go away received I0608 12:22:39.350428 6 log.go:172] (0xc001b162c0) (0xc0024c14a0) Stream removed, broadcasting: 1 I0608 12:22:39.350452 6 log.go:172] (0xc001b162c0) (0xc002201f40) Stream removed, 
broadcasting: 3 I0608 12:22:39.350474 6 log.go:172] (0xc001b162c0) (0xc0024c1540) Stream removed, broadcasting: 5 Jun 8 12:22:39.350: INFO: Found all expected endpoints: [netserver-0] Jun 8 12:22:39.353: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.38 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-cml8s PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 8 12:22:39.353: INFO: >>> kubeConfig: /root/.kube/config I0608 12:22:39.378608 6 log.go:172] (0xc001b16790) (0xc0024c19a0) Create stream I0608 12:22:39.378635 6 log.go:172] (0xc001b16790) (0xc0024c19a0) Stream added, broadcasting: 1 I0608 12:22:39.380429 6 log.go:172] (0xc001b16790) Reply frame received for 1 I0608 12:22:39.380471 6 log.go:172] (0xc001b16790) (0xc001bb4000) Create stream I0608 12:22:39.380485 6 log.go:172] (0xc001b16790) (0xc001bb4000) Stream added, broadcasting: 3 I0608 12:22:39.381689 6 log.go:172] (0xc001b16790) Reply frame received for 3 I0608 12:22:39.381747 6 log.go:172] (0xc001b16790) (0xc002758500) Create stream I0608 12:22:39.381763 6 log.go:172] (0xc001b16790) (0xc002758500) Stream added, broadcasting: 5 I0608 12:22:39.382671 6 log.go:172] (0xc001b16790) Reply frame received for 5 I0608 12:22:40.477545 6 log.go:172] (0xc001b16790) Data frame received for 3 I0608 12:22:40.477608 6 log.go:172] (0xc001bb4000) (3) Data frame handling I0608 12:22:40.477655 6 log.go:172] (0xc001bb4000) (3) Data frame sent I0608 12:22:40.477992 6 log.go:172] (0xc001b16790) Data frame received for 5 I0608 12:22:40.478070 6 log.go:172] (0xc002758500) (5) Data frame handling I0608 12:22:40.478339 6 log.go:172] (0xc001b16790) Data frame received for 3 I0608 12:22:40.478380 6 log.go:172] (0xc001bb4000) (3) Data frame handling I0608 12:22:40.480268 6 log.go:172] (0xc001b16790) Data frame received for 1 I0608 12:22:40.480294 6 log.go:172] (0xc0024c19a0) (1) Data frame handling I0608 
12:22:40.480321 6 log.go:172] (0xc0024c19a0) (1) Data frame sent I0608 12:22:40.480346 6 log.go:172] (0xc001b16790) (0xc0024c19a0) Stream removed, broadcasting: 1 I0608 12:22:40.480372 6 log.go:172] (0xc001b16790) Go away received I0608 12:22:40.480584 6 log.go:172] (0xc001b16790) (0xc0024c19a0) Stream removed, broadcasting: 1 I0608 12:22:40.480621 6 log.go:172] (0xc001b16790) (0xc001bb4000) Stream removed, broadcasting: 3 I0608 12:22:40.480641 6 log.go:172] (0xc001b16790) (0xc002758500) Stream removed, broadcasting: 5 Jun 8 12:22:40.480: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:22:40.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-cml8s" for this suite. Jun 8 12:23:02.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:23:02.528: INFO: namespace: e2e-tests-pod-network-test-cml8s, resource: bindings, ignored listing per whitelist Jun 8 12:23:02.561: INFO: namespace e2e-tests-pod-network-test-cml8s deletion completed in 22.075784145s • [SLOW TEST:54.566 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] 
Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:23:02.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:23:09.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-jpfcj" for this suite. Jun 8 12:23:49.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:23:49.579: INFO: namespace: e2e-tests-kubelet-test-jpfcj, resource: bindings, ignored listing per whitelist Jun 8 12:23:49.611: INFO: namespace e2e-tests-kubelet-test-jpfcj deletion completed in 40.151395789s • [SLOW TEST:47.050 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:23:49.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 8 12:23:53.818: INFO: Waiting up to 5m0s for pod "client-envvars-eac7fa3e-a982-11ea-978f-0242ac110018" in namespace "e2e-tests-pods-ccxtv" to be "success or failure" Jun 8 12:23:53.829: INFO: Pod "client-envvars-eac7fa3e-a982-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.932862ms Jun 8 12:23:55.833: INFO: Pod "client-envvars-eac7fa3e-a982-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015242766s Jun 8 12:23:57.838: INFO: Pod "client-envvars-eac7fa3e-a982-11ea-978f-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.019910036s Jun 8 12:23:59.842: INFO: Pod "client-envvars-eac7fa3e-a982-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.024111904s STEP: Saw pod success Jun 8 12:23:59.842: INFO: Pod "client-envvars-eac7fa3e-a982-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 12:23:59.844: INFO: Trying to get logs from node hunter-worker2 pod client-envvars-eac7fa3e-a982-11ea-978f-0242ac110018 container env3cont: STEP: delete the pod Jun 8 12:23:59.879: INFO: Waiting for pod client-envvars-eac7fa3e-a982-11ea-978f-0242ac110018 to disappear Jun 8 12:23:59.951: INFO: Pod client-envvars-eac7fa3e-a982-11ea-978f-0242ac110018 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:23:59.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-ccxtv" for this suite. Jun 8 12:24:39.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:24:40.016: INFO: namespace: e2e-tests-pods-ccxtv, resource: bindings, ignored listing per whitelist Jun 8 12:24:40.069: INFO: namespace e2e-tests-pods-ccxtv deletion completed in 40.112151796s • [SLOW TEST:50.458 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:24:40.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-068c33f1-a983-11ea-978f-0242ac110018 STEP: Creating secret with name s-test-opt-upd-068c346b-a983-11ea-978f-0242ac110018 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-068c33f1-a983-11ea-978f-0242ac110018 STEP: Updating secret s-test-opt-upd-068c346b-a983-11ea-978f-0242ac110018 STEP: Creating secret with name s-test-opt-create-068c3495-a983-11ea-978f-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:24:50.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qg5rp" for this suite. 
Jun 8 12:25:12.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:25:12.682: INFO: namespace: e2e-tests-projected-qg5rp, resource: bindings, ignored listing per whitelist Jun 8 12:25:12.741: INFO: namespace e2e-tests-projected-qg5rp deletion completed in 22.099278043s • [SLOW TEST:32.671 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:25:12.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 8 12:25:12.866: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.044401ms) Jun 8 12:25:12.870: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.124218ms) Jun 8 12:25:12.899: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 28.855741ms) Jun 8 12:25:12.903: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.217061ms) Jun 8 12:25:12.906: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.153839ms) Jun 8 12:25:12.914: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 8.276ms) Jun 8 12:25:12.918: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.270569ms) Jun 8 12:25:12.922: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.66763ms) Jun 8 12:25:12.925: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.939317ms) Jun 8 12:25:12.929: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.299177ms) Jun 8 12:25:12.933: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.055293ms) Jun 8 12:25:12.936: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.180928ms) Jun 8 12:25:12.940: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.404064ms) Jun 8 12:25:12.943: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.435999ms) Jun 8 12:25:12.947: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.738515ms) Jun 8 12:25:12.950: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.219996ms) Jun 8 12:25:12.954: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.908606ms) Jun 8 12:25:12.958: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.796465ms) Jun 8 12:25:12.962: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.90923ms) Jun 8 12:25:12.965: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.522721ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:25:12.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-q5qr7" for this suite. Jun 8 12:25:19.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:25:19.051: INFO: namespace: e2e-tests-proxy-q5qr7, resource: bindings, ignored listing per whitelist Jun 8 12:25:19.086: INFO: namespace e2e-tests-proxy-q5qr7 deletion completed in 6.116964768s • [SLOW TEST:6.345 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:25:19.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jun 8 12:25:29.557: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 8 12:25:29.578: INFO: Pod pod-with-prestop-http-hook still exists Jun 8 12:25:31.578: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 8 12:25:31.583: INFO: Pod pod-with-prestop-http-hook still exists Jun 8 12:25:33.578: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 8 12:25:33.583: INFO: Pod pod-with-prestop-http-hook still exists Jun 8 12:25:35.578: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 8 12:25:35.583: INFO: Pod pod-with-prestop-http-hook still exists Jun 8 12:25:37.578: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 8 12:25:37.582: INFO: Pod pod-with-prestop-http-hook still exists Jun 8 12:25:39.578: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 8 12:25:39.583: INFO: Pod pod-with-prestop-http-hook still exists Jun 8 12:25:41.578: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 8 12:25:41.582: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:25:41.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-zp9hh" for this suite. 
Jun 8 12:26:03.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:26:03.649: INFO: namespace: e2e-tests-container-lifecycle-hook-zp9hh, resource: bindings, ignored listing per whitelist Jun 8 12:26:03.686: INFO: namespace e2e-tests-container-lifecycle-hook-zp9hh deletion completed in 22.091669372s • [SLOW TEST:44.599 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:26:03.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Jun 8 12:26:03.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jun 8 12:26:06.359: INFO: stderr: "" Jun 8 
12:26:06.359: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:26:06.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-gbdmz" for this suite. Jun 8 12:26:12.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:26:12.441: INFO: namespace: e2e-tests-kubectl-gbdmz, resource: bindings, ignored listing per whitelist Jun 8 12:26:12.464: INFO: namespace e2e-tests-kubectl-gbdmz deletion completed in 6.100606422s • [SLOW TEST:8.778 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:26:12.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 8 12:26:12.617: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 Jun 8 12:26:12.626: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-s8rw4/daemonsets","resourceVersion":"14875532"},"items":null} Jun 8 12:26:12.629: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-s8rw4/pods","resourceVersion":"14875532"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:26:12.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-s8rw4" for this suite. 
Jun 8 12:26:18.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:26:18.711: INFO: namespace: e2e-tests-daemonsets-s8rw4, resource: bindings, ignored listing per whitelist Jun 8 12:26:18.810: INFO: namespace e2e-tests-daemonsets-s8rw4 deletion completed in 6.169562672s S [SKIPPING] [6.346 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 8 12:26:12.617: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:26:18.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 8 12:26:18.918: INFO: Waiting up to 5m0s for pod "downwardapi-volume-41444563-a983-11ea-978f-0242ac110018" in namespace "e2e-tests-downward-api-wpxhz" to be "success or failure" 
Jun 8 12:26:18.922: INFO: Pod "downwardapi-volume-41444563-a983-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.700136ms Jun 8 12:26:20.925: INFO: Pod "downwardapi-volume-41444563-a983-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007454747s Jun 8 12:26:22.930: INFO: Pod "downwardapi-volume-41444563-a983-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011922392s STEP: Saw pod success Jun 8 12:26:22.930: INFO: Pod "downwardapi-volume-41444563-a983-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 12:26:22.933: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-41444563-a983-11ea-978f-0242ac110018 container client-container: STEP: delete the pod Jun 8 12:26:23.000: INFO: Waiting for pod downwardapi-volume-41444563-a983-11ea-978f-0242ac110018 to disappear Jun 8 12:26:23.003: INFO: Pod downwardapi-volume-41444563-a983-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:26:23.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-wpxhz" for this suite. 
Jun 8 12:26:31.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:26:31.055: INFO: namespace: e2e-tests-downward-api-wpxhz, resource: bindings, ignored listing per whitelist Jun 8 12:26:31.121: INFO: namespace e2e-tests-downward-api-wpxhz deletion completed in 8.114862107s • [SLOW TEST:12.311 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:26:31.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jun 8 12:26:37.776: INFO: Successfully updated pod "labelsupdate489db372-a983-11ea-978f-0242ac110018" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:26:39.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "e2e-tests-projected-57s9x" for this suite. Jun 8 12:27:01.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:27:01.929: INFO: namespace: e2e-tests-projected-57s9x, resource: bindings, ignored listing per whitelist Jun 8 12:27:01.942: INFO: namespace e2e-tests-projected-57s9x deletion completed in 22.124057411s • [SLOW TEST:30.821 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:27:01.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 
12:27:02.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-t84xn" for this suite. Jun 8 12:27:08.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:27:08.455: INFO: namespace: e2e-tests-kubelet-test-t84xn, resource: bindings, ignored listing per whitelist Jun 8 12:27:08.459: INFO: namespace e2e-tests-kubelet-test-t84xn deletion completed in 6.087082891s • [SLOW TEST:6.517 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:27:08.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jun 8 12:27:13.182: INFO: Successfully updated pod "labelsupdate5ee06fb3-a983-11ea-978f-0242ac110018" 
[AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:27:15.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-bvjfg" for this suite. Jun 8 12:27:37.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:27:37.294: INFO: namespace: e2e-tests-downward-api-bvjfg, resource: bindings, ignored listing per whitelist Jun 8 12:27:37.327: INFO: namespace e2e-tests-downward-api-bvjfg deletion completed in 22.098209626s • [SLOW TEST:28.868 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:27:37.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-700db2de-a983-11ea-978f-0242ac110018 STEP: Creating a pod to test consume secrets Jun 8 12:27:37.463: INFO: Waiting up to 5m0s for pod "pod-secrets-70130ea7-a983-11ea-978f-0242ac110018" in namespace 
"e2e-tests-secrets-hwzjd" to be "success or failure" Jun 8 12:27:37.486: INFO: Pod "pod-secrets-70130ea7-a983-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 23.016792ms Jun 8 12:27:39.490: INFO: Pod "pod-secrets-70130ea7-a983-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026749247s Jun 8 12:27:41.496: INFO: Pod "pod-secrets-70130ea7-a983-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032368103s STEP: Saw pod success Jun 8 12:27:41.496: INFO: Pod "pod-secrets-70130ea7-a983-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 12:27:41.498: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-70130ea7-a983-11ea-978f-0242ac110018 container secret-volume-test: STEP: delete the pod Jun 8 12:27:41.542: INFO: Waiting for pod pod-secrets-70130ea7-a983-11ea-978f-0242ac110018 to disappear Jun 8 12:27:41.549: INFO: Pod pod-secrets-70130ea7-a983-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:27:41.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-hwzjd" for this suite. 
Jun 8 12:27:47.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:27:47.610: INFO: namespace: e2e-tests-secrets-hwzjd, resource: bindings, ignored listing per whitelist Jun 8 12:27:47.639: INFO: namespace e2e-tests-secrets-hwzjd deletion completed in 6.087316204s • [SLOW TEST:10.311 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:27:47.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-763a3cad-a983-11ea-978f-0242ac110018 STEP: Creating a pod to test consume secrets Jun 8 12:27:47.791: INFO: Waiting up to 5m0s for pod "pod-secrets-763c8ed8-a983-11ea-978f-0242ac110018" in namespace "e2e-tests-secrets-nf4pb" to be "success or failure" Jun 8 12:27:47.820: INFO: Pod "pod-secrets-763c8ed8-a983-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 28.13828ms Jun 8 12:27:49.824: INFO: Pod "pod-secrets-763c8ed8-a983-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032380046s Jun 8 12:27:51.828: INFO: Pod "pod-secrets-763c8ed8-a983-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036174619s STEP: Saw pod success Jun 8 12:27:51.828: INFO: Pod "pod-secrets-763c8ed8-a983-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 12:27:51.831: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-763c8ed8-a983-11ea-978f-0242ac110018 container secret-volume-test: STEP: delete the pod Jun 8 12:27:51.923: INFO: Waiting for pod pod-secrets-763c8ed8-a983-11ea-978f-0242ac110018 to disappear Jun 8 12:27:51.932: INFO: Pod pod-secrets-763c8ed8-a983-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:27:51.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-nf4pb" for this suite. 
Jun 8 12:27:58.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 12:27:58.052: INFO: namespace: e2e-tests-secrets-nf4pb, resource: bindings, ignored listing per whitelist
Jun 8 12:27:58.081: INFO: namespace e2e-tests-secrets-nf4pb deletion completed in 6.144641388s
• [SLOW TEST:10.441 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 12:27:58.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-kj8n
STEP: Creating a pod to test atomic-volume-subpath
Jun 8 12:27:58.195: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-kj8n" in namespace "e2e-tests-subpath-b89th" to be "success or failure"
Jun 8 12:27:58.242: INFO: Pod "pod-subpath-test-configmap-kj8n": Phase="Pending", Reason="", readiness=false. Elapsed: 46.481321ms
Jun 8 12:28:00.246: INFO: Pod "pod-subpath-test-configmap-kj8n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050330442s
Jun 8 12:28:02.255: INFO: Pod "pod-subpath-test-configmap-kj8n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059418553s
Jun 8 12:28:04.260: INFO: Pod "pod-subpath-test-configmap-kj8n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064416847s
Jun 8 12:28:06.265: INFO: Pod "pod-subpath-test-configmap-kj8n": Phase="Running", Reason="", readiness=false. Elapsed: 8.069690716s
Jun 8 12:28:08.269: INFO: Pod "pod-subpath-test-configmap-kj8n": Phase="Running", Reason="", readiness=false. Elapsed: 10.073724841s
Jun 8 12:28:10.273: INFO: Pod "pod-subpath-test-configmap-kj8n": Phase="Running", Reason="", readiness=false. Elapsed: 12.07762715s
Jun 8 12:28:12.277: INFO: Pod "pod-subpath-test-configmap-kj8n": Phase="Running", Reason="", readiness=false. Elapsed: 14.082135437s
Jun 8 12:28:14.280: INFO: Pod "pod-subpath-test-configmap-kj8n": Phase="Running", Reason="", readiness=false. Elapsed: 16.08528905s
Jun 8 12:28:16.285: INFO: Pod "pod-subpath-test-configmap-kj8n": Phase="Running", Reason="", readiness=false. Elapsed: 18.090216075s
Jun 8 12:28:18.290: INFO: Pod "pod-subpath-test-configmap-kj8n": Phase="Running", Reason="", readiness=false. Elapsed: 20.094484652s
Jun 8 12:28:20.294: INFO: Pod "pod-subpath-test-configmap-kj8n": Phase="Running", Reason="", readiness=false. Elapsed: 22.098579182s
Jun 8 12:28:22.298: INFO: Pod "pod-subpath-test-configmap-kj8n": Phase="Running", Reason="", readiness=false. Elapsed: 24.10306068s
Jun 8 12:28:24.303: INFO: Pod "pod-subpath-test-configmap-kj8n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.107822933s
STEP: Saw pod success
Jun 8 12:28:24.303: INFO: Pod "pod-subpath-test-configmap-kj8n" satisfied condition "success or failure"
Jun 8 12:28:24.307: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-kj8n container test-container-subpath-configmap-kj8n:
STEP: delete the pod
Jun 8 12:28:24.348: INFO: Waiting for pod pod-subpath-test-configmap-kj8n to disappear
Jun 8 12:28:24.352: INFO: Pod pod-subpath-test-configmap-kj8n no longer exists
STEP: Deleting pod pod-subpath-test-configmap-kj8n
Jun 8 12:28:24.352: INFO: Deleting pod "pod-subpath-test-configmap-kj8n" in namespace "e2e-tests-subpath-b89th"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 12:28:24.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-b89th" for this suite.
Jun 8 12:28:30.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 12:28:30.418: INFO: namespace: e2e-tests-subpath-b89th, resource: bindings, ignored listing per whitelist
Jun 8 12:28:30.461: INFO: namespace e2e-tests-subpath-b89th deletion completed in 6.103771864s
• [SLOW TEST:32.380 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 12:28:30.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jun 8 12:28:30.588: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jun 8 12:28:35.593: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jun 8 12:28:35.594: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jun 8 12:28:37.598: INFO: Creating deployment "test-rollover-deployment"
Jun 8 12:28:37.608: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jun 8 12:28:39.649: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jun 8 12:28:39.655: INFO: Ensure that both replica sets have 1 created replica
Jun 8 12:28:39.660: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jun 8 12:28:39.666: INFO: Updating deployment test-rollover-deployment
Jun 8 12:28:39.666: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jun 8 12:28:41.697: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jun 8 12:28:41.703: INFO: Make sure deployment "test-rollover-deployment" is complete
Jun 8 12:28:41.708: INFO: all replica sets need to contain the pod-template-hash label
Jun 8 12:28:41.708: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1,
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216117, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216117, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216119, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216117, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 8 12:28:43.715: INFO: all replica sets need to contain the pod-template-hash label Jun 8 12:28:43.716: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216117, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216117, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216123, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216117, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 8 12:28:45.716: INFO: all replica sets need to contain the pod-template-hash label Jun 8 12:28:45.716: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, 
Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216117, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216117, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216123, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216117, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 8 12:28:47.715: INFO: all replica sets need to contain the pod-template-hash label Jun 8 12:28:47.715: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216117, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216117, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216123, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216117, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 8 12:28:49.717: INFO: all replica sets need to contain the pod-template-hash 
label Jun 8 12:28:49.717: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216117, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216117, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216123, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216117, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 8 12:28:51.716: INFO: all replica sets need to contain the pod-template-hash label Jun 8 12:28:51.717: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216117, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216117, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216123, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216117, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Jun 8 12:28:53.746: INFO: Jun 8 12:28:53.746: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216117, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216117, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216133, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216117, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 8 12:28:55.717: INFO: Jun 8 12:28:55.717: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jun 8 12:28:55.725: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-59rrf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-59rrf/deployments/test-rollover-deployment,UID:93efbd46-a983-11ea-99e8-0242ac110002,ResourceVersion:14876103,Generation:2,CreationTimestamp:2020-06-08 12:28:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-06-08 12:28:37 +0000 UTC 2020-06-08 12:28:37 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-06-08 12:28:53 +0000 UTC 2020-06-08 12:28:37 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jun 8 12:28:55.728: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-59rrf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-59rrf/replicasets/test-rollover-deployment-5b8479fdb6,UID:952b75c8-a983-11ea-99e8-0242ac110002,ResourceVersion:14876093,Generation:2,CreationTimestamp:2020-06-08 12:28:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 93efbd46-a983-11ea-99e8-0242ac110002 0xc0026ed6d7 0xc0026ed6d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 8 12:28:55.728: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jun 8 12:28:55.728: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-59rrf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-59rrf/replicasets/test-rollover-controller,UID:8fc02357-a983-11ea-99e8-0242ac110002,ResourceVersion:14876102,Generation:2,CreationTimestamp:2020-06-08 12:28:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 93efbd46-a983-11ea-99e8-0242ac110002 0xc0026ecf97 0xc0026ecf98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 8 12:28:55.729: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-59rrf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-59rrf/replicasets/test-rollover-deployment-58494b7559,UID:93f23928-a983-11ea-99e8-0242ac110002,ResourceVersion:14876057,Generation:2,CreationTimestamp:2020-06-08 12:28:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 93efbd46-a983-11ea-99e8-0242ac110002 0xc0026ed257 0xc0026ed258}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 8 12:28:55.732: INFO: Pod "test-rollover-deployment-5b8479fdb6-55jjc" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-55jjc,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-59rrf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-59rrf/pods/test-rollover-deployment-5b8479fdb6-55jjc,UID:9534e22d-a983-11ea-99e8-0242ac110002,ResourceVersion:14876071,Generation:0,CreationTimestamp:2020-06-08 12:28:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 952b75c8-a983-11ea-99e8-0242ac110002 0xc001e3abf7 0xc001e3abf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4wc59 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4wc59,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-4wc59 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e3ae10} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e3ae30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 12:28:39 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 12:28:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 12:28:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 12:28:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.48,StartTime:2020-06-08 12:28:39 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-06-08 12:28:42 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 
containerd://b461f36a14b15d2afba93ae41b41292695466845c23cf189ab4dde1ea5860e7a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 12:28:55.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-59rrf" for this suite.
Jun 8 12:29:01.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 12:29:01.927: INFO: namespace: e2e-tests-deployment-59rrf, resource: bindings, ignored listing per whitelist
Jun 8 12:29:01.995: INFO: namespace e2e-tests-deployment-59rrf deletion completed in 6.26018278s
• [SLOW TEST:31.534 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 12:29:01.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jun 8 12:29:02.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-q2qf4'
Jun 8 12:29:02.889: INFO: stderr: ""
Jun 8 12:29:02.889: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jun 8 12:29:02.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-q2qf4'
Jun 8 12:29:03.082: INFO: stderr: ""
Jun 8 12:29:03.082: INFO: stdout: "update-demo-nautilus-hg948 update-demo-nautilus-xwwtt "
Jun 8 12:29:03.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hg948 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q2qf4'
Jun 8 12:29:03.172: INFO: stderr: ""
Jun 8 12:29:03.172: INFO: stdout: ""
Jun 8 12:29:03.172: INFO: update-demo-nautilus-hg948 is created but not running
Jun 8 12:29:08.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-q2qf4'
Jun 8 12:29:08.278: INFO: stderr: ""
Jun 8 12:29:08.278: INFO: stdout: "update-demo-nautilus-hg948 update-demo-nautilus-xwwtt "
Jun 8 12:29:08.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hg948 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists .
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q2qf4' Jun 8 12:29:08.383: INFO: stderr: "" Jun 8 12:29:08.383: INFO: stdout: "true" Jun 8 12:29:08.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hg948 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q2qf4' Jun 8 12:29:08.489: INFO: stderr: "" Jun 8 12:29:08.490: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 8 12:29:08.490: INFO: validating pod update-demo-nautilus-hg948 Jun 8 12:29:08.494: INFO: got data: { "image": "nautilus.jpg" } Jun 8 12:29:08.494: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 8 12:29:08.494: INFO: update-demo-nautilus-hg948 is verified up and running Jun 8 12:29:08.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xwwtt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q2qf4' Jun 8 12:29:08.589: INFO: stderr: "" Jun 8 12:29:08.589: INFO: stdout: "true" Jun 8 12:29:08.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xwwtt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q2qf4' Jun 8 12:29:08.689: INFO: stderr: "" Jun 8 12:29:08.689: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 8 12:29:08.689: INFO: validating pod update-demo-nautilus-xwwtt Jun 8 12:29:08.693: INFO: got data: { "image": "nautilus.jpg" } Jun 8 12:29:08.693: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jun 8 12:29:08.693: INFO: update-demo-nautilus-xwwtt is verified up and running
STEP: rolling-update to new replication controller
Jun 8 12:29:08.695: INFO: scanned /root for discovery docs:
Jun 8 12:29:08.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-q2qf4'
Jun 8 12:29:31.798: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jun 8 12:29:31.798: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jun 8 12:29:31.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-q2qf4'
Jun 8 12:29:32.052: INFO: stderr: ""
Jun 8 12:29:32.052: INFO: stdout: "update-demo-kitten-hpqh4 update-demo-kitten-tzcll "
Jun 8 12:29:32.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hpqh4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists .
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q2qf4' Jun 8 12:29:32.182: INFO: stderr: "" Jun 8 12:29:32.182: INFO: stdout: "true" Jun 8 12:29:32.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hpqh4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q2qf4' Jun 8 12:29:32.283: INFO: stderr: "" Jun 8 12:29:32.283: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jun 8 12:29:32.284: INFO: validating pod update-demo-kitten-hpqh4 Jun 8 12:29:32.287: INFO: got data: { "image": "kitten.jpg" } Jun 8 12:29:32.287: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jun 8 12:29:32.287: INFO: update-demo-kitten-hpqh4 is verified up and running Jun 8 12:29:32.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tzcll -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q2qf4' Jun 8 12:29:32.392: INFO: stderr: "" Jun 8 12:29:32.392: INFO: stdout: "true" Jun 8 12:29:32.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tzcll -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q2qf4' Jun 8 12:29:32.481: INFO: stderr: "" Jun 8 12:29:32.481: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jun 8 12:29:32.481: INFO: validating pod update-demo-kitten-tzcll Jun 8 12:29:32.485: INFO: got data: { "image": "kitten.jpg" } Jun 8 12:29:32.485: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . 
Jun 8 12:29:32.485: INFO: update-demo-kitten-tzcll is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:29:32.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-q2qf4" for this suite. Jun 8 12:29:56.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:29:56.522: INFO: namespace: e2e-tests-kubectl-q2qf4, resource: bindings, ignored listing per whitelist Jun 8 12:29:56.580: INFO: namespace e2e-tests-kubectl-q2qf4 deletion completed in 24.092036095s • [SLOW TEST:54.585 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:29:56.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with 
name configmap-test-volume-c3176bd1-a983-11ea-978f-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 8 12:29:56.721: INFO: Waiting up to 5m0s for pod "pod-configmaps-c317f002-a983-11ea-978f-0242ac110018" in namespace "e2e-tests-configmap-zq6t4" to be "success or failure" Jun 8 12:29:56.740: INFO: Pod "pod-configmaps-c317f002-a983-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 19.388074ms Jun 8 12:29:58.744: INFO: Pod "pod-configmaps-c317f002-a983-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023021311s Jun 8 12:30:00.747: INFO: Pod "pod-configmaps-c317f002-a983-11ea-978f-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.026759222s Jun 8 12:30:02.751: INFO: Pod "pod-configmaps-c317f002-a983-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030607933s STEP: Saw pod success Jun 8 12:30:02.751: INFO: Pod "pod-configmaps-c317f002-a983-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 12:30:02.754: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-c317f002-a983-11ea-978f-0242ac110018 container configmap-volume-test: STEP: delete the pod Jun 8 12:30:02.801: INFO: Waiting for pod pod-configmaps-c317f002-a983-11ea-978f-0242ac110018 to disappear Jun 8 12:30:02.810: INFO: Pod pod-configmaps-c317f002-a983-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:30:02.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-zq6t4" for this suite. 
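The defaultMode test above creates a ConfigMap and mounts it into a short-lived pod whose container inspects the mounted file. A minimal sketch of an equivalent manifest (names, keys, and the mode value are illustrative, not taken from this run):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume        # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    # print the file's permission bits, then exit so the pod reaches Succeeded
    command: ["sh", "-c", "stat -c '%a' /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
      defaultMode: 0400              # files get mode 0400 instead of the 0644 default
```

With `restartPolicy: Never` the pod runs to completion, matching the "success or failure" condition the framework polls for above.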
Jun 8 12:30:08.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:30:08.903: INFO: namespace: e2e-tests-configmap-zq6t4, resource: bindings, ignored listing per whitelist Jun 8 12:30:08.903: INFO: namespace e2e-tests-configmap-zq6t4 deletion completed in 6.089505121s • [SLOW TEST:12.322 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:30:08.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Jun 8 12:30:08.993: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 8 12:30:09.034: INFO: Waiting for terminating namespaces to be deleted... 
Jun 8 12:30:09.037: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Jun 8 12:30:09.042: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) Jun 8 12:30:09.042: INFO: Container kube-proxy ready: true, restart count 0 Jun 8 12:30:09.042: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 8 12:30:09.042: INFO: Container kindnet-cni ready: true, restart count 0 Jun 8 12:30:09.042: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jun 8 12:30:09.042: INFO: Container coredns ready: true, restart count 0 Jun 8 12:30:09.042: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Jun 8 12:30:09.057: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 8 12:30:09.058: INFO: Container kube-proxy ready: true, restart count 0 Jun 8 12:30:09.058: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 8 12:30:09.058: INFO: Container kindnet-cni ready: true, restart count 0 Jun 8 12:30:09.058: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jun 8 12:30:09.058: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-worker STEP: verifying the node has the label node hunter-worker2 Jun 8 12:30:09.176: INFO: Pod coredns-54ff9cd656-4h7lb requesting resource cpu=100m on Node hunter-worker Jun 8 12:30:09.176: INFO: Pod coredns-54ff9cd656-8vrkk requesting resource cpu=100m on Node hunter-worker2 Jun 8 12:30:09.176: INFO: Pod kindnet-54h7m 
requesting resource cpu=100m on Node hunter-worker Jun 8 12:30:09.176: INFO: Pod kindnet-mtqrs requesting resource cpu=100m on Node hunter-worker2 Jun 8 12:30:09.176: INFO: Pod kube-proxy-s52ll requesting resource cpu=0m on Node hunter-worker2 Jun 8 12:30:09.176: INFO: Pod kube-proxy-szbng requesting resource cpu=0m on Node hunter-worker STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-ca853d53-a983-11ea-978f-0242ac110018.1616916b7ecd4758], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-f9rdp/filler-pod-ca853d53-a983-11ea-978f-0242ac110018 to hunter-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-ca853d53-a983-11ea-978f-0242ac110018.1616916bcc714772], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-ca853d53-a983-11ea-978f-0242ac110018.1616916c233e6f20], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-ca853d53-a983-11ea-978f-0242ac110018.1616916c3aa01072], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Normal], Name = [filler-pod-ca85f770-a983-11ea-978f-0242ac110018.1616916b8055a891], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-f9rdp/filler-pod-ca85f770-a983-11ea-978f-0242ac110018 to hunter-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-ca85f770-a983-11ea-978f-0242ac110018.1616916bdfcaa113], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-ca85f770-a983-11ea-978f-0242ac110018.1616916c2da0dc73], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = 
[filler-pod-ca85f770-a983-11ea-978f-0242ac110018.1616916c3cae6400], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.1616916c6fae07f3], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node hunter-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node hunter-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:30:14.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-f9rdp" for this suite. Jun 8 12:30:22.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:30:22.457: INFO: namespace: e2e-tests-sched-pred-f9rdp, resource: bindings, ignored listing per whitelist Jun 8 12:30:22.530: INFO: namespace e2e-tests-sched-pred-f9rdp deletion completed in 8.164411445s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:13.626 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:30:22.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jun 8 12:30:22.664: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:30:22.667: INFO: Number of nodes with available pods: 0 Jun 8 12:30:22.667: INFO: Node hunter-worker is running more than one daemon pod Jun 8 12:30:23.671: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:30:23.674: INFO: Number of nodes with available pods: 0 Jun 8 12:30:23.674: INFO: Node hunter-worker is running more than one daemon pod Jun 8 12:30:24.729: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:30:24.740: INFO: Number of nodes with available pods: 0 Jun 8 12:30:24.740: INFO: Node hunter-worker is running more than one daemon pod Jun 8 12:30:25.672: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:30:25.676: INFO: Number of nodes with available 
pods: 0 Jun 8 12:30:25.676: INFO: Node hunter-worker is running more than one daemon pod Jun 8 12:30:26.672: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:30:26.676: INFO: Number of nodes with available pods: 0 Jun 8 12:30:26.676: INFO: Node hunter-worker is running more than one daemon pod Jun 8 12:30:27.671: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:30:27.674: INFO: Number of nodes with available pods: 2 Jun 8 12:30:27.674: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Jun 8 12:30:27.707: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:30:27.710: INFO: Number of nodes with available pods: 1 Jun 8 12:30:27.710: INFO: Node hunter-worker2 is running more than one daemon pod Jun 8 12:30:28.716: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:30:28.720: INFO: Number of nodes with available pods: 1 Jun 8 12:30:28.720: INFO: Node hunter-worker2 is running more than one daemon pod Jun 8 12:30:29.715: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:30:29.719: INFO: Number of nodes with available pods: 1 Jun 8 12:30:29.719: INFO: Node hunter-worker2 is running more than one daemon pod Jun 8 12:30:30.716: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:30:30.719: INFO: Number of nodes with available pods: 1 Jun 8 12:30:30.719: INFO: Node hunter-worker2 is running more than one daemon pod Jun 8 12:30:31.715: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:30:31.718: INFO: Number of nodes with available pods: 1 Jun 8 12:30:31.718: INFO: Node hunter-worker2 is running more than one daemon pod Jun 8 12:30:32.716: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:30:32.720: INFO: Number of nodes with available pods: 1 Jun 8 12:30:32.720: INFO: Node hunter-worker2 is running more than one daemon pod Jun 8 12:30:33.715: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:30:33.720: INFO: Number of nodes with available pods: 1 Jun 8 12:30:33.720: INFO: Node hunter-worker2 is running more than one daemon pod Jun 8 12:30:34.716: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:30:34.720: INFO: Number of nodes with available pods: 1 Jun 8 12:30:34.720: INFO: Node hunter-worker2 is running more than one daemon pod Jun 8 12:30:35.715: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:30:35.718: INFO: Number of nodes with available pods: 1 Jun 8 12:30:35.718: INFO: Node hunter-worker2 is running more than one daemon pod Jun 8 12:30:36.715: INFO: DaemonSet pods can't tolerate node hunter-control-plane with 
taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:30:36.719: INFO: Number of nodes with available pods: 1 Jun 8 12:30:36.719: INFO: Node hunter-worker2 is running more than one daemon pod Jun 8 12:30:37.715: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:30:37.719: INFO: Number of nodes with available pods: 1 Jun 8 12:30:37.719: INFO: Node hunter-worker2 is running more than one daemon pod Jun 8 12:30:38.715: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:30:38.720: INFO: Number of nodes with available pods: 1 Jun 8 12:30:38.720: INFO: Node hunter-worker2 is running more than one daemon pod Jun 8 12:30:39.715: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:30:39.719: INFO: Number of nodes with available pods: 1 Jun 8 12:30:39.719: INFO: Node hunter-worker2 is running more than one daemon pod Jun 8 12:30:40.714: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:30:40.718: INFO: Number of nodes with available pods: 1 Jun 8 12:30:40.718: INFO: Node hunter-worker2 is running more than one daemon pod Jun 8 12:30:41.714: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:30:41.736: INFO: Number of nodes with available pods: 1 Jun 8 12:30:41.736: INFO: Node hunter-worker2 is running more than one daemon pod Jun 8 12:30:42.715: INFO: DaemonSet pods 
can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:30:42.719: INFO: Number of nodes with available pods: 1 Jun 8 12:30:42.719: INFO: Node hunter-worker2 is running more than one daemon pod Jun 8 12:30:43.714: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:30:43.718: INFO: Number of nodes with available pods: 1 Jun 8 12:30:43.718: INFO: Node hunter-worker2 is running more than one daemon pod Jun 8 12:30:44.716: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:30:44.720: INFO: Number of nodes with available pods: 1 Jun 8 12:30:44.720: INFO: Node hunter-worker2 is running more than one daemon pod Jun 8 12:30:45.716: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:30:45.719: INFO: Number of nodes with available pods: 2 Jun 8 12:30:45.719: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-hqgst, will wait for the garbage collector to delete the pods Jun 8 12:30:45.782: INFO: Deleting DaemonSet.extensions daemon-set took: 7.154646ms Jun 8 12:30:45.882: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.253666ms Jun 8 12:30:49.992: INFO: Number of nodes with available pods: 0 Jun 8 12:30:49.992: INFO: Number of running nodes: 0, number of available pods: 0 Jun 8 12:30:49.995: INFO: 
daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-hqgst/daemonsets","resourceVersion":"14876622"},"items":null} Jun 8 12:30:49.998: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-hqgst/pods","resourceVersion":"14876622"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:30:50.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-hqgst" for this suite. Jun 8 12:30:56.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:30:56.065: INFO: namespace: e2e-tests-daemonsets-hqgst, resource: bindings, ignored listing per whitelist Jun 8 12:30:56.096: INFO: namespace e2e-tests-daemonsets-hqgst deletion completed in 6.08391862s • [SLOW TEST:33.565 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:30:56.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer 
[NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jun 8 12:30:56.185: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:31:03.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-qgjl4" for this suite. Jun 8 12:31:09.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:31:10.040: INFO: namespace: e2e-tests-init-container-qgjl4, resource: bindings, ignored listing per whitelist Jun 8 12:31:10.072: INFO: namespace e2e-tests-init-container-qgjl4 deletion completed in 6.097336803s • [SLOW TEST:13.976 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:31:10.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be 
provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-eedf80f2-a983-11ea-978f-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 8 12:31:10.208: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-eee51d8c-a983-11ea-978f-0242ac110018" in namespace "e2e-tests-projected-nmshb" to be "success or failure" Jun 8 12:31:10.236: INFO: Pod "pod-projected-configmaps-eee51d8c-a983-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 28.055176ms Jun 8 12:31:12.240: INFO: Pod "pod-projected-configmaps-eee51d8c-a983-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032390537s Jun 8 12:31:14.244: INFO: Pod "pod-projected-configmaps-eee51d8c-a983-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036557016s STEP: Saw pod success Jun 8 12:31:14.244: INFO: Pod "pod-projected-configmaps-eee51d8c-a983-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 12:31:14.247: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-eee51d8c-a983-11ea-978f-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod Jun 8 12:31:14.288: INFO: Waiting for pod pod-projected-configmaps-eee51d8c-a983-11ea-978f-0242ac110018 to disappear Jun 8 12:31:14.364: INFO: Pod pod-projected-configmaps-eee51d8c-a983-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:31:14.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-nmshb" for this suite. 
Jun 8 12:31:20.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:31:20.472: INFO: namespace: e2e-tests-projected-nmshb, resource: bindings, ignored listing per whitelist Jun 8 12:31:20.538: INFO: namespace e2e-tests-projected-nmshb deletion completed in 6.169444494s • [SLOW TEST:10.466 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:31:20.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 8 12:31:20.630: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f51b47bf-a983-11ea-978f-0242ac110018" in namespace "e2e-tests-projected-82m6z" to be "success or failure" Jun 8 12:31:20.706: INFO: Pod "downwardapi-volume-f51b47bf-a983-11ea-978f-0242ac110018": 
Phase="Pending", Reason="", readiness=false. Elapsed: 75.821704ms Jun 8 12:31:22.709: INFO: Pod "downwardapi-volume-f51b47bf-a983-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079209406s Jun 8 12:31:24.713: INFO: Pod "downwardapi-volume-f51b47bf-a983-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.083756906s STEP: Saw pod success Jun 8 12:31:24.714: INFO: Pod "downwardapi-volume-f51b47bf-a983-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 12:31:24.716: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-f51b47bf-a983-11ea-978f-0242ac110018 container client-container: STEP: delete the pod Jun 8 12:31:24.755: INFO: Waiting for pod downwardapi-volume-f51b47bf-a983-11ea-978f-0242ac110018 to disappear Jun 8 12:31:24.761: INFO: Pod downwardapi-volume-f51b47bf-a983-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:31:24.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-82m6z" for this suite. 
Jun 8 12:31:30.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:31:30.818: INFO: namespace: e2e-tests-projected-82m6z, resource: bindings, ignored listing per whitelist Jun 8 12:31:30.861: INFO: namespace e2e-tests-projected-82m6z deletion completed in 6.096739841s • [SLOW TEST:10.323 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:31:30.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-988sj/configmap-test-fb42b7f0-a983-11ea-978f-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 8 12:31:31.031: INFO: Waiting up to 5m0s for pod "pod-configmaps-fb453579-a983-11ea-978f-0242ac110018" in namespace "e2e-tests-configmap-988sj" to be "success or failure" Jun 8 12:31:31.037: INFO: Pod "pod-configmaps-fb453579-a983-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.720252ms Jun 8 12:31:33.089: INFO: Pod "pod-configmaps-fb453579-a983-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058365017s Jun 8 12:31:35.093: INFO: Pod "pod-configmaps-fb453579-a983-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061911634s STEP: Saw pod success Jun 8 12:31:35.093: INFO: Pod "pod-configmaps-fb453579-a983-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 12:31:35.095: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-fb453579-a983-11ea-978f-0242ac110018 container env-test: STEP: delete the pod Jun 8 12:31:35.146: INFO: Waiting for pod pod-configmaps-fb453579-a983-11ea-978f-0242ac110018 to disappear Jun 8 12:31:35.185: INFO: Pod pod-configmaps-fb453579-a983-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:31:35.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-988sj" for this suite. 
Jun 8 12:31:41.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:31:41.227: INFO: namespace: e2e-tests-configmap-988sj, resource: bindings, ignored listing per whitelist Jun 8 12:31:41.286: INFO: namespace e2e-tests-configmap-988sj deletion completed in 6.097807098s • [SLOW TEST:10.425 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:31:41.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0608 12:31:51.425568 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 8 12:31:51.425: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:31:51.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-cs9s9" for this suite. 
Jun 8 12:31:57.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:31:57.596: INFO: namespace: e2e-tests-gc-cs9s9, resource: bindings, ignored listing per whitelist Jun 8 12:31:57.635: INFO: namespace e2e-tests-gc-cs9s9 deletion completed in 6.207979543s • [SLOW TEST:16.348 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:31:57.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-nn5fl STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-nn5fl STEP: Deleting pre-stop pod Jun 8 12:32:10.802: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:32:10.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-nn5fl" for this suite. Jun 8 12:32:48.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:32:48.838: INFO: namespace: e2e-tests-prestop-nn5fl, resource: bindings, ignored listing per whitelist Jun 8 12:32:48.898: INFO: namespace e2e-tests-prestop-nn5fl deletion completed in 38.082259454s • [SLOW TEST:51.262 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:32:48.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace 
e2e-tests-statefulset-k5nx6 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Jun 8 12:32:49.040: INFO: Found 0 stateful pods, waiting for 3 Jun 8 12:32:59.045: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 8 12:32:59.045: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 8 12:32:59.045: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jun 8 12:33:09.045: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 8 12:33:09.045: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 8 12:33:09.045: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jun 8 12:33:09.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k5nx6 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 8 12:33:09.346: INFO: stderr: "I0608 12:33:09.203090 3218 log.go:172] (0xc000138840) (0xc000772640) Create stream\nI0608 12:33:09.203151 3218 log.go:172] (0xc000138840) (0xc000772640) Stream added, broadcasting: 1\nI0608 12:33:09.204924 3218 log.go:172] (0xc000138840) Reply frame received for 1\nI0608 12:33:09.204962 3218 log.go:172] (0xc000138840) (0xc000682be0) Create stream\nI0608 12:33:09.204974 3218 log.go:172] (0xc000138840) (0xc000682be0) Stream added, broadcasting: 3\nI0608 12:33:09.206094 3218 log.go:172] (0xc000138840) Reply frame received for 3\nI0608 12:33:09.206114 3218 log.go:172] (0xc000138840) (0xc000682d20) Create stream\nI0608 12:33:09.206121 3218 log.go:172] (0xc000138840) (0xc000682d20) Stream added, broadcasting: 5\nI0608 12:33:09.207102 3218 log.go:172] (0xc000138840) Reply frame 
received for 5\nI0608 12:33:09.338244 3218 log.go:172] (0xc000138840) Data frame received for 3\nI0608 12:33:09.338269 3218 log.go:172] (0xc000682be0) (3) Data frame handling\nI0608 12:33:09.338288 3218 log.go:172] (0xc000682be0) (3) Data frame sent\nI0608 12:33:09.338299 3218 log.go:172] (0xc000138840) Data frame received for 3\nI0608 12:33:09.338306 3218 log.go:172] (0xc000682be0) (3) Data frame handling\nI0608 12:33:09.338453 3218 log.go:172] (0xc000138840) Data frame received for 5\nI0608 12:33:09.338490 3218 log.go:172] (0xc000682d20) (5) Data frame handling\nI0608 12:33:09.340236 3218 log.go:172] (0xc000138840) Data frame received for 1\nI0608 12:33:09.340265 3218 log.go:172] (0xc000772640) (1) Data frame handling\nI0608 12:33:09.340284 3218 log.go:172] (0xc000772640) (1) Data frame sent\nI0608 12:33:09.340379 3218 log.go:172] (0xc000138840) (0xc000772640) Stream removed, broadcasting: 1\nI0608 12:33:09.340444 3218 log.go:172] (0xc000138840) Go away received\nI0608 12:33:09.340515 3218 log.go:172] (0xc000138840) (0xc000772640) Stream removed, broadcasting: 1\nI0608 12:33:09.340533 3218 log.go:172] (0xc000138840) (0xc000682be0) Stream removed, broadcasting: 3\nI0608 12:33:09.340543 3218 log.go:172] (0xc000138840) (0xc000682d20) Stream removed, broadcasting: 5\n" Jun 8 12:33:09.347: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 8 12:33:09.347: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jun 8 12:33:19.379: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jun 8 12:33:29.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k5nx6 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || 
true' Jun 8 12:33:29.671: INFO: stderr: "I0608 12:33:29.577361 3241 log.go:172] (0xc000138840) (0xc00077e640) Create stream\nI0608 12:33:29.577413 3241 log.go:172] (0xc000138840) (0xc00077e640) Stream added, broadcasting: 1\nI0608 12:33:29.579449 3241 log.go:172] (0xc000138840) Reply frame received for 1\nI0608 12:33:29.579492 3241 log.go:172] (0xc000138840) (0xc00068ed20) Create stream\nI0608 12:33:29.579507 3241 log.go:172] (0xc000138840) (0xc00068ed20) Stream added, broadcasting: 3\nI0608 12:33:29.580629 3241 log.go:172] (0xc000138840) Reply frame received for 3\nI0608 12:33:29.580698 3241 log.go:172] (0xc000138840) (0xc00070c000) Create stream\nI0608 12:33:29.580724 3241 log.go:172] (0xc000138840) (0xc00070c000) Stream added, broadcasting: 5\nI0608 12:33:29.581896 3241 log.go:172] (0xc000138840) Reply frame received for 5\nI0608 12:33:29.662357 3241 log.go:172] (0xc000138840) Data frame received for 3\nI0608 12:33:29.662389 3241 log.go:172] (0xc00068ed20) (3) Data frame handling\nI0608 12:33:29.662413 3241 log.go:172] (0xc00068ed20) (3) Data frame sent\nI0608 12:33:29.662791 3241 log.go:172] (0xc000138840) Data frame received for 5\nI0608 12:33:29.662845 3241 log.go:172] (0xc00070c000) (5) Data frame handling\nI0608 12:33:29.662896 3241 log.go:172] (0xc000138840) Data frame received for 3\nI0608 12:33:29.662925 3241 log.go:172] (0xc00068ed20) (3) Data frame handling\nI0608 12:33:29.664001 3241 log.go:172] (0xc000138840) Data frame received for 1\nI0608 12:33:29.664023 3241 log.go:172] (0xc00077e640) (1) Data frame handling\nI0608 12:33:29.664037 3241 log.go:172] (0xc00077e640) (1) Data frame sent\nI0608 12:33:29.664047 3241 log.go:172] (0xc000138840) (0xc00077e640) Stream removed, broadcasting: 1\nI0608 12:33:29.664243 3241 log.go:172] (0xc000138840) Go away received\nI0608 12:33:29.664332 3241 log.go:172] (0xc000138840) (0xc00077e640) Stream removed, broadcasting: 1\nI0608 12:33:29.664358 3241 log.go:172] (0xc000138840) (0xc00068ed20) Stream removed, 
broadcasting: 3\nI0608 12:33:29.664378 3241 log.go:172] (0xc000138840) (0xc00070c000) Stream removed, broadcasting: 5\n" Jun 8 12:33:29.671: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 8 12:33:29.671: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 8 12:33:49.689: INFO: Waiting for StatefulSet e2e-tests-statefulset-k5nx6/ss2 to complete update Jun 8 12:33:49.689: INFO: Waiting for Pod e2e-tests-statefulset-k5nx6/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision Jun 8 12:33:59.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k5nx6 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 8 12:33:59.992: INFO: stderr: "I0608 12:33:59.844406 3264 log.go:172] (0xc0006d6370) (0xc0002f5540) Create stream\nI0608 12:33:59.844488 3264 log.go:172] (0xc0006d6370) (0xc0002f5540) Stream added, broadcasting: 1\nI0608 12:33:59.850286 3264 log.go:172] (0xc0006d6370) Reply frame received for 1\nI0608 12:33:59.850329 3264 log.go:172] (0xc0006d6370) (0xc0006c0000) Create stream\nI0608 12:33:59.850338 3264 log.go:172] (0xc0006d6370) (0xc0006c0000) Stream added, broadcasting: 3\nI0608 12:33:59.851141 3264 log.go:172] (0xc0006d6370) Reply frame received for 3\nI0608 12:33:59.851166 3264 log.go:172] (0xc0006d6370) (0xc0002f55e0) Create stream\nI0608 12:33:59.851174 3264 log.go:172] (0xc0006d6370) (0xc0002f55e0) Stream added, broadcasting: 5\nI0608 12:33:59.851802 3264 log.go:172] (0xc0006d6370) Reply frame received for 5\nI0608 12:33:59.984761 3264 log.go:172] (0xc0006d6370) Data frame received for 3\nI0608 12:33:59.984820 3264 log.go:172] (0xc0006c0000) (3) Data frame handling\nI0608 12:33:59.984867 3264 log.go:172] (0xc0006c0000) (3) Data frame sent\nI0608 12:33:59.984963 3264 log.go:172] (0xc0006d6370) Data frame 
received for 3\nI0608 12:33:59.984991 3264 log.go:172] (0xc0006c0000) (3) Data frame handling\nI0608 12:33:59.985421 3264 log.go:172] (0xc0006d6370) Data frame received for 5\nI0608 12:33:59.985448 3264 log.go:172] (0xc0002f55e0) (5) Data frame handling\nI0608 12:33:59.987208 3264 log.go:172] (0xc0006d6370) Data frame received for 1\nI0608 12:33:59.987238 3264 log.go:172] (0xc0002f5540) (1) Data frame handling\nI0608 12:33:59.987271 3264 log.go:172] (0xc0002f5540) (1) Data frame sent\nI0608 12:33:59.987314 3264 log.go:172] (0xc0006d6370) (0xc0002f5540) Stream removed, broadcasting: 1\nI0608 12:33:59.987525 3264 log.go:172] (0xc0006d6370) (0xc0002f5540) Stream removed, broadcasting: 1\nI0608 12:33:59.987545 3264 log.go:172] (0xc0006d6370) (0xc0006c0000) Stream removed, broadcasting: 3\nI0608 12:33:59.987567 3264 log.go:172] (0xc0006d6370) (0xc0002f55e0) Stream removed, broadcasting: 5\n" Jun 8 12:33:59.993: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 8 12:33:59.993: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 8 12:34:10.054: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jun 8 12:34:20.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k5nx6 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 8 12:34:20.326: INFO: stderr: "I0608 12:34:20.241655 3287 log.go:172] (0xc000138160) (0xc00063a640) Create stream\nI0608 12:34:20.241748 3287 log.go:172] (0xc000138160) (0xc00063a640) Stream added, broadcasting: 1\nI0608 12:34:20.244411 3287 log.go:172] (0xc000138160) Reply frame received for 1\nI0608 12:34:20.244442 3287 log.go:172] (0xc000138160) (0xc00063a6e0) Create stream\nI0608 12:34:20.244450 3287 log.go:172] (0xc000138160) (0xc00063a6e0) Stream added, broadcasting: 3\nI0608 12:34:20.245659 3287 log.go:172] (0xc000138160) 
Reply frame received for 3\nI0608 12:34:20.245696 3287 log.go:172] (0xc000138160) (0xc000658000) Create stream\nI0608 12:34:20.245709 3287 log.go:172] (0xc000138160) (0xc000658000) Stream added, broadcasting: 5\nI0608 12:34:20.246582 3287 log.go:172] (0xc000138160) Reply frame received for 5\nI0608 12:34:20.318847 3287 log.go:172] (0xc000138160) Data frame received for 3\nI0608 12:34:20.318885 3287 log.go:172] (0xc00063a6e0) (3) Data frame handling\nI0608 12:34:20.318901 3287 log.go:172] (0xc00063a6e0) (3) Data frame sent\nI0608 12:34:20.318909 3287 log.go:172] (0xc000138160) Data frame received for 3\nI0608 12:34:20.318915 3287 log.go:172] (0xc00063a6e0) (3) Data frame handling\nI0608 12:34:20.318945 3287 log.go:172] (0xc000138160) Data frame received for 5\nI0608 12:34:20.318953 3287 log.go:172] (0xc000658000) (5) Data frame handling\nI0608 12:34:20.320267 3287 log.go:172] (0xc000138160) Data frame received for 1\nI0608 12:34:20.320280 3287 log.go:172] (0xc00063a640) (1) Data frame handling\nI0608 12:34:20.320288 3287 log.go:172] (0xc00063a640) (1) Data frame sent\nI0608 12:34:20.320297 3287 log.go:172] (0xc000138160) (0xc00063a640) Stream removed, broadcasting: 1\nI0608 12:34:20.320446 3287 log.go:172] (0xc000138160) (0xc00063a640) Stream removed, broadcasting: 1\nI0608 12:34:20.320458 3287 log.go:172] (0xc000138160) (0xc00063a6e0) Stream removed, broadcasting: 3\nI0608 12:34:20.320570 3287 log.go:172] (0xc000138160) (0xc000658000) Stream removed, broadcasting: 5\n" Jun 8 12:34:20.326: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 8 12:34:20.326: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jun 8 12:34:40.348: INFO: Deleting all statefulset in ns e2e-tests-statefulset-k5nx6 Jun 8 
12:34:40.350: INFO: Scaling statefulset ss2 to 0 Jun 8 12:35:20.369: INFO: Waiting for statefulset status.replicas updated to 0 Jun 8 12:35:20.372: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:35:20.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-k5nx6" for this suite. Jun 8 12:35:28.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:35:28.328: INFO: namespace: e2e-tests-statefulset-k5nx6, resource: bindings, ignored listing per whitelist Jun 8 12:35:28.337: INFO: namespace e2e-tests-statefulset-k5nx6 deletion completed in 6.793128738s • [SLOW TEST:159.439 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:35:28.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 8 12:35:28.454: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jun 8 12:35:28.512: INFO: Pod name sample-pod: Found 0 pods out of 1 Jun 8 12:35:33.518: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 8 12:35:33.518: INFO: Creating deployment "test-rolling-update-deployment" Jun 8 12:35:33.522: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jun 8 12:35:33.529: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jun 8 12:35:35.538: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jun 8 12:35:35.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216533, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216533, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216533, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727216533, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 8 
12:35:37.633: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jun 8 12:35:37.643: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-f2mdb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-f2mdb/deployments/test-rolling-update-deployment,UID:8bd81ebd-a984-11ea-99e8-0242ac110002,ResourceVersion:14877789,Generation:1,CreationTimestamp:2020-06-08 12:35:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-06-08 12:35:33 +0000 UTC 2020-06-08 12:35:33 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-06-08 12:35:36 +0000 UTC 2020-06-08 12:35:33 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jun 8 12:35:37.646: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-f2mdb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-f2mdb/replicasets/test-rolling-update-deployment-75db98fb4c,UID:8bda8619-a984-11ea-99e8-0242ac110002,ResourceVersion:14877780,Generation:1,CreationTimestamp:2020-06-08 12:35:33 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 8bd81ebd-a984-11ea-99e8-0242ac110002 0xc0026ec6e7 0xc0026ec6e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 8 12:35:37.646: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jun 8 12:35:37.646: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-f2mdb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-f2mdb/replicasets/test-rolling-update-controller,UID:88d378e8-a984-11ea-99e8-0242ac110002,ResourceVersion:14877788,Generation:2,CreationTimestamp:2020-06-08 12:35:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 8bd81ebd-a984-11ea-99e8-0242ac110002 0xc0026ec437 0xc0026ec438}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 8 12:35:37.649: INFO: Pod "test-rolling-update-deployment-75db98fb4c-4gzxl" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-4gzxl,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-f2mdb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-f2mdb/pods/test-rolling-update-deployment-75db98fb4c-4gzxl,UID:8be4fbb8-a984-11ea-99e8-0242ac110002,ResourceVersion:14877779,Generation:0,CreationTimestamp:2020-06-08 12:35:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 8bda8619-a984-11ea-99e8-0242ac110002 0xc0026edc17 0xc0026edc18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pgrjf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pgrjf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-pgrjf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026edc90} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026edcb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 12:35:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 12:35:36 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 12:35:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-08 12:35:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.61,StartTime:2020-06-08 12:35:33 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-06-08 12:35:36 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://d0232df63724d8394a36c15e2d5f2245ad2023d5ba24807a6c0ea5f29c34095a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:35:37.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-f2mdb" for 
this suite. Jun 8 12:35:45.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:35:45.740: INFO: namespace: e2e-tests-deployment-f2mdb, resource: bindings, ignored listing per whitelist Jun 8 12:35:45.782: INFO: namespace e2e-tests-deployment-f2mdb deletion completed in 8.12956585s • [SLOW TEST:17.445 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:35:45.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 8 12:35:45.905: INFO: Waiting up to 5m0s for pod "pod-9337b298-a984-11ea-978f-0242ac110018" in namespace "e2e-tests-emptydir-z877b" to be "success or failure" Jun 8 12:35:45.924: INFO: Pod "pod-9337b298-a984-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 18.514052ms Jun 8 12:35:47.950: INFO: Pod "pod-9337b298-a984-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.044403057s Jun 8 12:35:49.953: INFO: Pod "pod-9337b298-a984-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047994929s STEP: Saw pod success Jun 8 12:35:49.953: INFO: Pod "pod-9337b298-a984-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 12:35:49.956: INFO: Trying to get logs from node hunter-worker2 pod pod-9337b298-a984-11ea-978f-0242ac110018 container test-container: STEP: delete the pod Jun 8 12:35:50.099: INFO: Waiting for pod pod-9337b298-a984-11ea-978f-0242ac110018 to disappear Jun 8 12:35:50.121: INFO: Pod pod-9337b298-a984-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:35:50.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-z877b" for this suite. Jun 8 12:35:56.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:35:56.190: INFO: namespace: e2e-tests-emptydir-z877b, resource: bindings, ignored listing per whitelist Jun 8 12:35:56.217: INFO: namespace e2e-tests-emptydir-z877b deletion completed in 6.093375548s • [SLOW TEST:10.435 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:35:56.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 8 12:36:04.591: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 8 12:36:04.606: INFO: Pod pod-with-poststart-http-hook still exists Jun 8 12:36:06.606: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 8 12:36:06.645: INFO: Pod pod-with-poststart-http-hook still exists Jun 8 12:36:08.606: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 8 12:36:08.615: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:36:08.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-xc6gp" for this suite. 
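The "Waiting for pod pod-with-poststart-http-hook to disappear ... still exists ... no longer exists" sequence above is a fixed-interval poll on pod existence. A minimal Python sketch of that pattern (the `pod_exists` callable, 2-second interval, and injectable `sleep` are illustrative assumptions, not the e2e framework's actual code):

```python
import time

def wait_for_pod_to_disappear(pod_exists, interval=2.0, timeout=60.0, sleep=time.sleep):
    """Poll pod_exists() until it returns False or the timeout elapses.

    Returns True if the pod disappeared within the timeout, False otherwise.
    Mirrors the "still exists" / "no longer exists" loop in the log above.
    """
    elapsed = 0.0
    while elapsed <= timeout:
        if not pod_exists():
            return True   # "Pod ... no longer exists"
        # "Pod ... still exists" -- wait one interval and retry
        sleep(interval)
        elapsed += interval
    return False

# Example: the pod vanishes on the third check (sleep stubbed out).
checks = iter([True, True, False])
assert wait_for_pod_to_disappear(lambda: next(checks), sleep=lambda _: None)
```

Injecting `sleep` keeps the sketch deterministic; the real framework polls against wall-clock time with a 3m0s-style deadline.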
Jun 8 12:36:30.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:36:30.656: INFO: namespace: e2e-tests-container-lifecycle-hook-xc6gp, resource: bindings, ignored listing per whitelist Jun 8 12:36:30.730: INFO: namespace e2e-tests-container-lifecycle-hook-xc6gp deletion completed in 22.111244731s • [SLOW TEST:34.513 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:36:30.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Jun 8 12:36:30.872: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 8 12:36:30.880: INFO: Waiting for terminating namespaces to be deleted... 
Jun 8 12:36:30.882: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Jun 8 12:36:30.888: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) Jun 8 12:36:30.889: INFO: Container kube-proxy ready: true, restart count 0 Jun 8 12:36:30.889: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 8 12:36:30.889: INFO: Container kindnet-cni ready: true, restart count 0 Jun 8 12:36:30.889: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jun 8 12:36:30.889: INFO: Container coredns ready: true, restart count 0 Jun 8 12:36:30.889: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Jun 8 12:36:30.894: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 8 12:36:30.894: INFO: Container kindnet-cni ready: true, restart count 0 Jun 8 12:36:30.894: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jun 8 12:36:30.894: INFO: Container coredns ready: true, restart count 0 Jun 8 12:36:30.894: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 8 12:36:30.894: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-b07abac0-a984-11ea-978f-0242ac110018 42 STEP: Trying to relaunch the pod, now with labels. 
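The NodeSelector predicate this test exercises reduces to a subset check: a node is feasible only if its labels contain every key/value pair in the pod's nodeSelector. A standalone sketch of that check (not the scheduler's actual implementation):

```python
def node_selector_matches(node_labels, node_selector):
    """True if every key/value pair the pod requires is present on the node."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

# The relaunched pod selects on the random label the test applied above.
node = {
    "kubernetes.io/e2e-b07abac0-a984-11ea-978f-0242ac110018": "42",
    "kubernetes.io/hostname": "hunter-worker",
}
assert node_selector_matches(
    node, {"kubernetes.io/e2e-b07abac0-a984-11ea-978f-0242ac110018": "42"})
assert not node_selector_matches(node, {"missing": "label"})
```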
STEP: removing the label kubernetes.io/e2e-b07abac0-a984-11ea-978f-0242ac110018 off the node hunter-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-b07abac0-a984-11ea-978f-0242ac110018 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:36:39.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-j6m59" for this suite. Jun 8 12:36:57.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:36:57.125: INFO: namespace: e2e-tests-sched-pred-j6m59, resource: bindings, ignored listing per whitelist Jun 8 12:36:57.159: INFO: namespace e2e-tests-sched-pred-j6m59 deletion completed in 18.091280295s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:26.428 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:36:57.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 8 12:36:57.400: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bdd3053b-a984-11ea-978f-0242ac110018" in namespace "e2e-tests-projected-qjk9k" to be "success or failure" Jun 8 12:36:57.446: INFO: Pod "downwardapi-volume-bdd3053b-a984-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 45.843512ms Jun 8 12:36:59.632: INFO: Pod "downwardapi-volume-bdd3053b-a984-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231805494s Jun 8 12:37:01.637: INFO: Pod "downwardapi-volume-bdd3053b-a984-11ea-978f-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.236841715s Jun 8 12:37:03.641: INFO: Pod "downwardapi-volume-bdd3053b-a984-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.240871978s STEP: Saw pod success Jun 8 12:37:03.641: INFO: Pod "downwardapi-volume-bdd3053b-a984-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 12:37:03.644: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-bdd3053b-a984-11ea-978f-0242ac110018 container client-container: STEP: delete the pod Jun 8 12:37:03.742: INFO: Waiting for pod downwardapi-volume-bdd3053b-a984-11ea-978f-0242ac110018 to disappear Jun 8 12:37:03.844: INFO: Pod downwardapi-volume-bdd3053b-a984-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:37:03.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qjk9k" for this suite. Jun 8 12:37:10.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:37:10.061: INFO: namespace: e2e-tests-projected-qjk9k, resource: bindings, ignored listing per whitelist Jun 8 12:37:10.121: INFO: namespace e2e-tests-projected-qjk9k deletion completed in 6.273783546s • [SLOW TEST:12.962 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:37:10.122: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-lwqlx/secret-test-c57d7fac-a984-11ea-978f-0242ac110018 STEP: Creating a pod to test consume secrets Jun 8 12:37:10.327: INFO: Waiting up to 5m0s for pod "pod-configmaps-c57ef715-a984-11ea-978f-0242ac110018" in namespace "e2e-tests-secrets-lwqlx" to be "success or failure" Jun 8 12:37:10.333: INFO: Pod "pod-configmaps-c57ef715-a984-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 5.781241ms Jun 8 12:37:12.337: INFO: Pod "pod-configmaps-c57ef715-a984-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009930371s Jun 8 12:37:14.341: INFO: Pod "pod-configmaps-c57ef715-a984-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013943474s STEP: Saw pod success Jun 8 12:37:14.342: INFO: Pod "pod-configmaps-c57ef715-a984-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 12:37:14.344: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-c57ef715-a984-11ea-978f-0242ac110018 container env-test: STEP: delete the pod Jun 8 12:37:14.381: INFO: Waiting for pod pod-configmaps-c57ef715-a984-11ea-978f-0242ac110018 to disappear Jun 8 12:37:14.387: INFO: Pod pod-configmaps-c57ef715-a984-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:37:14.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-lwqlx" for this suite. 
Jun 8 12:37:20.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:37:20.480: INFO: namespace: e2e-tests-secrets-lwqlx, resource: bindings, ignored listing per whitelist Jun 8 12:37:20.500: INFO: namespace e2e-tests-secrets-lwqlx deletion completed in 6.109774791s • [SLOW TEST:10.378 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:37:20.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 8 12:37:20.695: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cbb5c838-a984-11ea-978f-0242ac110018" in namespace "e2e-tests-projected-d2spd" to be "success or failure" Jun 8 12:37:20.699: INFO: Pod 
"downwardapi-volume-cbb5c838-a984-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.538633ms Jun 8 12:37:22.718: INFO: Pod "downwardapi-volume-cbb5c838-a984-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022700341s Jun 8 12:37:24.722: INFO: Pod "downwardapi-volume-cbb5c838-a984-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026945229s STEP: Saw pod success Jun 8 12:37:24.722: INFO: Pod "downwardapi-volume-cbb5c838-a984-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 12:37:24.725: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-cbb5c838-a984-11ea-978f-0242ac110018 container client-container: STEP: delete the pod Jun 8 12:37:24.740: INFO: Waiting for pod downwardapi-volume-cbb5c838-a984-11ea-978f-0242ac110018 to disappear Jun 8 12:37:24.745: INFO: Pod downwardapi-volume-cbb5c838-a984-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:37:24.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-d2spd" for this suite. 
Jun 8 12:37:30.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:37:30.864: INFO: namespace: e2e-tests-projected-d2spd, resource: bindings, ignored listing per whitelist Jun 8 12:37:30.930: INFO: namespace e2e-tests-projected-d2spd deletion completed in 6.181777455s • [SLOW TEST:10.430 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:37:30.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
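The "DaemonSet pods can't tolerate node hunter-control-plane with taints" lines that follow reflect the taint/toleration check: a node is skipped unless every taint on it is covered by a toleration on the pod. A simplified sketch of that decision (exact key/effect matching only; the real matcher also handles operators, values, and empty-key wildcards):

```python
def tolerates_all_taints(tolerations, taints):
    """True if every taint on the node is covered by some pod toleration.

    Simplified semantics: a toleration covers a taint when key and
    effect both match.
    """
    def covered(taint):
        return any(t["key"] == taint["key"] and t["effect"] == taint["effect"]
                   for t in tolerations)
    return all(covered(t) for t in taints)

master_taints = [{"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}]
# A plain test DaemonSet pod carries no matching toleration, so the
# control-plane node is skipped -- as logged above.
assert not tolerates_all_taints([], master_taints)
assert tolerates_all_taints(
    [{"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}],
    master_taints)
```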
Jun 8 12:37:31.078: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:37:31.080: INFO: Number of nodes with available pods: 0 Jun 8 12:37:31.080: INFO: Node hunter-worker is running more than one daemon pod Jun 8 12:37:32.085: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:37:32.089: INFO: Number of nodes with available pods: 0 Jun 8 12:37:32.089: INFO: Node hunter-worker is running more than one daemon pod Jun 8 12:37:33.084: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:37:33.087: INFO: Number of nodes with available pods: 0 Jun 8 12:37:33.087: INFO: Node hunter-worker is running more than one daemon pod Jun 8 12:37:34.186: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:37:34.191: INFO: Number of nodes with available pods: 0 Jun 8 12:37:34.191: INFO: Node hunter-worker is running more than one daemon pod Jun 8 12:37:35.096: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:37:35.100: INFO: Number of nodes with available pods: 2 Jun 8 12:37:35.100: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Jun 8 12:37:35.119: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 8 12:37:35.142: INFO: Number of nodes with available pods: 2 Jun 8 12:37:35.142: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-lc72f, will wait for the garbage collector to delete the pods Jun 8 12:37:36.225: INFO: Deleting DaemonSet.extensions daemon-set took: 7.105193ms Jun 8 12:37:36.426: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.270438ms Jun 8 12:37:41.832: INFO: Number of nodes with available pods: 0 Jun 8 12:37:41.832: INFO: Number of running nodes: 0, number of available pods: 0 Jun 8 12:37:41.835: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-lc72f/daemonsets","resourceVersion":"14878293"},"items":null} Jun 8 12:37:41.837: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-lc72f/pods","resourceVersion":"14878293"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:37:41.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-lc72f" for this suite. 
Jun 8 12:37:47.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:37:47.913: INFO: namespace: e2e-tests-daemonsets-lc72f, resource: bindings, ignored listing per whitelist Jun 8 12:37:47.931: INFO: namespace e2e-tests-daemonsets-lc72f deletion completed in 6.083846405s • [SLOW TEST:17.001 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:37:47.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jun 8 12:37:55.712: INFO: 0 pods remaining Jun 8 12:37:55.712: INFO: 0 pods has nil DeletionTimestamp Jun 8 12:37:55.712: INFO: Jun 8 12:37:56.796: INFO: 0 pods remaining Jun 8 12:37:56.796: INFO: 0 pods has nil DeletionTimestamp Jun 8 12:37:56.796: INFO: STEP: Gathering metrics W0608 12:37:58.701935 6 metrics_grabber.go:81] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 8 12:37:58.702: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:37:58.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-qwfws" for this suite. 
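The behavior this test verifies ("keep the rc around until all its pods are deleted if the deleteOptions says so") is foreground-style cascading deletion: dependents go first, and the owner is only removed once none remain. A toy simulation of that ordering invariant (not the garbage collector's actual code):

```python
def foreground_delete(owner, dependents):
    """Simulate foreground cascading deletion: remove dependents one by
    one, and only delete the owner once zero remain -- matching the
    "N pods remaining" countdown in the log above."""
    log = []
    while dependents:
        log.append(f"{len(dependents)} pods remaining")
        dependents.pop()
    log.append("0 pods remaining")
    log.append(f"deleted {owner}")
    return log

log = foreground_delete("rc", ["pod-a", "pod-b"])
assert log[-1] == "deleted rc"        # owner deleted last
assert "0 pods remaining" in log      # only after all pods are gone
```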
Jun 8 12:38:06.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:38:06.989: INFO: namespace: e2e-tests-gc-qwfws, resource: bindings, ignored listing per whitelist Jun 8 12:38:07.031: INFO: namespace e2e-tests-gc-qwfws deletion completed in 8.276425214s • [SLOW TEST:19.099 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:38:07.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:38:14.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-bn7d8" for this suite. 
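The adoption test above first creates an orphan pod, then a ReplicationController whose selector matches its `name` label. A minimal sketch of the two objects (the image and exact field values are illustrative assumptions, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption        # matches the RC selector below; no ownerReferences yet
spec:
  containers:
  - name: pod-adoption
    image: k8s.gcr.io/pause:3.1   # illustrative image
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption        # adopts the existing pod instead of creating a new one
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: k8s.gcr.io/pause:3.1
```

Because the orphan already satisfies the selector and the replica count, the controller sets itself as the pod's owner rather than starting a replacement.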
Jun 8 12:38:36.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:38:36.263: INFO: namespace: e2e-tests-replication-controller-bn7d8, resource: bindings, ignored listing per whitelist Jun 8 12:38:36.298: INFO: namespace e2e-tests-replication-controller-bn7d8 deletion completed in 22.122392267s • [SLOW TEST:29.267 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:38:36.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jun 8 12:38:36.440: INFO: Waiting up to 5m0s for pod "downward-api-f8dad7b4-a984-11ea-978f-0242ac110018" in namespace "e2e-tests-downward-api-5rvcp" to be "success or failure" Jun 8 12:38:36.448: INFO: Pod "downward-api-f8dad7b4-a984-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.18335ms Jun 8 12:38:38.452: INFO: Pod "downward-api-f8dad7b4-a984-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.012638914s Jun 8 12:38:40.456: INFO: Pod "downward-api-f8dad7b4-a984-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016539967s Jun 8 12:38:42.461: INFO: Pod "downward-api-f8dad7b4-a984-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020881151s STEP: Saw pod success Jun 8 12:38:42.461: INFO: Pod "downward-api-f8dad7b4-a984-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 12:38:42.464: INFO: Trying to get logs from node hunter-worker pod downward-api-f8dad7b4-a984-11ea-978f-0242ac110018 container dapi-container: STEP: delete the pod Jun 8 12:38:42.496: INFO: Waiting for pod downward-api-f8dad7b4-a984-11ea-978f-0242ac110018 to disappear Jun 8 12:38:42.508: INFO: Pod downward-api-f8dad7b4-a984-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:38:42.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-5rvcp" for this suite. 
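The Downward API test above injects the pod's UID into the container environment. A minimal sketch of the relevant pod-spec fragment (the image, command, and variable name are illustrative; `dapi-container` is the container name from the log):

```yaml
spec:
  containers:
  - name: dapi-container
    image: busybox                  # illustrative
    command: ["sh", "-c", "env"]    # print the injected variables and exit
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid   # downward API field reference
```

The test then reads the container logs and checks that the printed `POD_UID` matches the pod's actual UID.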
Jun 8 12:38:50.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:38:50.568: INFO: namespace: e2e-tests-downward-api-5rvcp, resource: bindings, ignored listing per whitelist Jun 8 12:38:50.598: INFO: namespace e2e-tests-downward-api-5rvcp deletion completed in 8.085475855s • [SLOW TEST:14.300 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:38:50.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-kg2kz STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 8 12:38:50.693: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 8 12:39:18.798: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.76:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-kg2kz 
PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 8 12:39:18.798: INFO: >>> kubeConfig: /root/.kube/config I0608 12:39:18.835829 6 log.go:172] (0xc000df73f0) (0xc002aea780) Create stream I0608 12:39:18.835867 6 log.go:172] (0xc000df73f0) (0xc002aea780) Stream added, broadcasting: 1 I0608 12:39:18.839563 6 log.go:172] (0xc000df73f0) Reply frame received for 1 I0608 12:39:18.839625 6 log.go:172] (0xc000df73f0) (0xc002ab65a0) Create stream I0608 12:39:18.839654 6 log.go:172] (0xc000df73f0) (0xc002ab65a0) Stream added, broadcasting: 3 I0608 12:39:18.841861 6 log.go:172] (0xc000df73f0) Reply frame received for 3 I0608 12:39:18.841892 6 log.go:172] (0xc000df73f0) (0xc002800320) Create stream I0608 12:39:18.841907 6 log.go:172] (0xc000df73f0) (0xc002800320) Stream added, broadcasting: 5 I0608 12:39:18.843094 6 log.go:172] (0xc000df73f0) Reply frame received for 5 I0608 12:39:18.914435 6 log.go:172] (0xc000df73f0) Data frame received for 3 I0608 12:39:18.914471 6 log.go:172] (0xc002ab65a0) (3) Data frame handling I0608 12:39:18.914490 6 log.go:172] (0xc002ab65a0) (3) Data frame sent I0608 12:39:18.914499 6 log.go:172] (0xc000df73f0) Data frame received for 3 I0608 12:39:18.914506 6 log.go:172] (0xc002ab65a0) (3) Data frame handling I0608 12:39:18.914570 6 log.go:172] (0xc000df73f0) Data frame received for 5 I0608 12:39:18.914590 6 log.go:172] (0xc002800320) (5) Data frame handling I0608 12:39:18.915983 6 log.go:172] (0xc000df73f0) Data frame received for 1 I0608 12:39:18.916003 6 log.go:172] (0xc002aea780) (1) Data frame handling I0608 12:39:18.916011 6 log.go:172] (0xc002aea780) (1) Data frame sent I0608 12:39:18.916020 6 log.go:172] (0xc000df73f0) (0xc002aea780) Stream removed, broadcasting: 1 I0608 12:39:18.916028 6 log.go:172] (0xc000df73f0) Go away received I0608 12:39:18.916279 6 log.go:172] (0xc000df73f0) (0xc002aea780) Stream removed, broadcasting: 1 I0608 12:39:18.916319 6 log.go:172] 
(0xc000df73f0) (0xc002ab65a0) Stream removed, broadcasting: 3 I0608 12:39:18.916344 6 log.go:172] (0xc000df73f0) (0xc002800320) Stream removed, broadcasting: 5 Jun 8 12:39:18.916: INFO: Found all expected endpoints: [netserver-0] Jun 8 12:39:18.919: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.226:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-kg2kz PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 8 12:39:18.919: INFO: >>> kubeConfig: /root/.kube/config I0608 12:39:18.943201 6 log.go:172] (0xc000e5c370) (0xc0022da320) Create stream I0608 12:39:18.943225 6 log.go:172] (0xc000e5c370) (0xc0022da320) Stream added, broadcasting: 1 I0608 12:39:18.946471 6 log.go:172] (0xc000e5c370) Reply frame received for 1 I0608 12:39:18.946528 6 log.go:172] (0xc000e5c370) (0xc0028003c0) Create stream I0608 12:39:18.946543 6 log.go:172] (0xc000e5c370) (0xc0028003c0) Stream added, broadcasting: 3 I0608 12:39:18.947545 6 log.go:172] (0xc000e5c370) Reply frame received for 3 I0608 12:39:18.947593 6 log.go:172] (0xc000e5c370) (0xc002800500) Create stream I0608 12:39:18.947609 6 log.go:172] (0xc000e5c370) (0xc002800500) Stream added, broadcasting: 5 I0608 12:39:18.948659 6 log.go:172] (0xc000e5c370) Reply frame received for 5 I0608 12:39:19.014628 6 log.go:172] (0xc000e5c370) Data frame received for 3 I0608 12:39:19.014657 6 log.go:172] (0xc0028003c0) (3) Data frame handling I0608 12:39:19.014676 6 log.go:172] (0xc0028003c0) (3) Data frame sent I0608 12:39:19.014744 6 log.go:172] (0xc000e5c370) Data frame received for 3 I0608 12:39:19.014752 6 log.go:172] (0xc0028003c0) (3) Data frame handling I0608 12:39:19.014799 6 log.go:172] (0xc000e5c370) Data frame received for 5 I0608 12:39:19.014840 6 log.go:172] (0xc002800500) (5) Data frame handling I0608 12:39:19.016420 6 log.go:172] (0xc000e5c370) Data frame received for 1 I0608 
12:39:19.016446 6 log.go:172] (0xc0022da320) (1) Data frame handling I0608 12:39:19.016467 6 log.go:172] (0xc0022da320) (1) Data frame sent I0608 12:39:19.016488 6 log.go:172] (0xc000e5c370) (0xc0022da320) Stream removed, broadcasting: 1 I0608 12:39:19.016534 6 log.go:172] (0xc000e5c370) Go away received I0608 12:39:19.016580 6 log.go:172] (0xc000e5c370) (0xc0022da320) Stream removed, broadcasting: 1 I0608 12:39:19.016592 6 log.go:172] (0xc000e5c370) (0xc0028003c0) Stream removed, broadcasting: 3 I0608 12:39:19.016600 6 log.go:172] (0xc000e5c370) (0xc002800500) Stream removed, broadcasting: 5 Jun 8 12:39:19.016: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:39:19.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-kg2kz" for this suite. Jun 8 12:39:43.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:39:43.090: INFO: namespace: e2e-tests-pod-network-test-kg2kz, resource: bindings, ignored listing per whitelist Jun 8 12:39:43.153: INFO: namespace e2e-tests-pod-network-test-kg2kz deletion completed in 24.132676026s • [SLOW TEST:52.554 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:39:43.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token Jun 8 12:39:43.881: INFO: Waiting up to 5m0s for pod "pod-service-account-21113f6a-a985-11ea-978f-0242ac110018-hxrcz" in namespace "e2e-tests-svcaccounts-9r4b2" to be "success or failure" Jun 8 12:39:43.887: INFO: Pod "pod-service-account-21113f6a-a985-11ea-978f-0242ac110018-hxrcz": Phase="Pending", Reason="", readiness=false. Elapsed: 5.893494ms Jun 8 12:39:46.140: INFO: Pod "pod-service-account-21113f6a-a985-11ea-978f-0242ac110018-hxrcz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.259013815s Jun 8 12:39:48.211: INFO: Pod "pod-service-account-21113f6a-a985-11ea-978f-0242ac110018-hxrcz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330328771s Jun 8 12:39:50.403: INFO: Pod "pod-service-account-21113f6a-a985-11ea-978f-0242ac110018-hxrcz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.521909099s Jun 8 12:39:52.407: INFO: Pod "pod-service-account-21113f6a-a985-11ea-978f-0242ac110018-hxrcz": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.526458284s STEP: Saw pod success Jun 8 12:39:52.407: INFO: Pod "pod-service-account-21113f6a-a985-11ea-978f-0242ac110018-hxrcz" satisfied condition "success or failure" Jun 8 12:39:52.411: INFO: Trying to get logs from node hunter-worker pod pod-service-account-21113f6a-a985-11ea-978f-0242ac110018-hxrcz container token-test: STEP: delete the pod Jun 8 12:39:52.482: INFO: Waiting for pod pod-service-account-21113f6a-a985-11ea-978f-0242ac110018-hxrcz to disappear Jun 8 12:39:52.487: INFO: Pod pod-service-account-21113f6a-a985-11ea-978f-0242ac110018-hxrcz no longer exists STEP: Creating a pod to test consume service account root CA Jun 8 12:39:52.491: INFO: Waiting up to 5m0s for pod "pod-service-account-21113f6a-a985-11ea-978f-0242ac110018-wdw9x" in namespace "e2e-tests-svcaccounts-9r4b2" to be "success or failure" Jun 8 12:39:52.504: INFO: Pod "pod-service-account-21113f6a-a985-11ea-978f-0242ac110018-wdw9x": Phase="Pending", Reason="", readiness=false. Elapsed: 13.274588ms Jun 8 12:39:54.509: INFO: Pod "pod-service-account-21113f6a-a985-11ea-978f-0242ac110018-wdw9x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018330332s Jun 8 12:39:56.514: INFO: Pod "pod-service-account-21113f6a-a985-11ea-978f-0242ac110018-wdw9x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022925653s Jun 8 12:39:58.519: INFO: Pod "pod-service-account-21113f6a-a985-11ea-978f-0242ac110018-wdw9x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027819422s Jun 8 12:40:00.523: INFO: Pod "pod-service-account-21113f6a-a985-11ea-978f-0242ac110018-wdw9x": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.031981646s STEP: Saw pod success Jun 8 12:40:00.523: INFO: Pod "pod-service-account-21113f6a-a985-11ea-978f-0242ac110018-wdw9x" satisfied condition "success or failure" Jun 8 12:40:00.526: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-21113f6a-a985-11ea-978f-0242ac110018-wdw9x container root-ca-test: STEP: delete the pod Jun 8 12:40:00.570: INFO: Waiting for pod pod-service-account-21113f6a-a985-11ea-978f-0242ac110018-wdw9x to disappear Jun 8 12:40:00.595: INFO: Pod pod-service-account-21113f6a-a985-11ea-978f-0242ac110018-wdw9x no longer exists STEP: Creating a pod to test consume service account namespace Jun 8 12:40:00.599: INFO: Waiting up to 5m0s for pod "pod-service-account-21113f6a-a985-11ea-978f-0242ac110018-9v6gs" in namespace "e2e-tests-svcaccounts-9r4b2" to be "success or failure" Jun 8 12:40:00.654: INFO: Pod "pod-service-account-21113f6a-a985-11ea-978f-0242ac110018-9v6gs": Phase="Pending", Reason="", readiness=false. Elapsed: 55.356553ms Jun 8 12:40:02.659: INFO: Pod "pod-service-account-21113f6a-a985-11ea-978f-0242ac110018-9v6gs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059961842s Jun 8 12:40:04.663: INFO: Pod "pod-service-account-21113f6a-a985-11ea-978f-0242ac110018-9v6gs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064035732s Jun 8 12:40:06.667: INFO: Pod "pod-service-account-21113f6a-a985-11ea-978f-0242ac110018-9v6gs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06792319s Jun 8 12:40:08.671: INFO: Pod "pod-service-account-21113f6a-a985-11ea-978f-0242ac110018-9v6gs": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.072201572s STEP: Saw pod success Jun 8 12:40:08.671: INFO: Pod "pod-service-account-21113f6a-a985-11ea-978f-0242ac110018-9v6gs" satisfied condition "success or failure" Jun 8 12:40:08.674: INFO: Trying to get logs from node hunter-worker pod pod-service-account-21113f6a-a985-11ea-978f-0242ac110018-9v6gs container namespace-test: STEP: delete the pod Jun 8 12:40:08.703: INFO: Waiting for pod pod-service-account-21113f6a-a985-11ea-978f-0242ac110018-9v6gs to disappear Jun 8 12:40:08.714: INFO: Pod pod-service-account-21113f6a-a985-11ea-978f-0242ac110018-9v6gs no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:40:08.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-9r4b2" for this suite. Jun 8 12:40:16.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:40:16.760: INFO: namespace: e2e-tests-svcaccounts-9r4b2, resource: bindings, ignored listing per whitelist Jun 8 12:40:16.825: INFO: namespace e2e-tests-svcaccounts-9r4b2 deletion completed in 8.108813087s • [SLOW TEST:33.672 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 
12:40:16.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jun 8 12:40:17.010: INFO: Waiting up to 5m0s for pod "downward-api-34d085a5-a985-11ea-978f-0242ac110018" in namespace "e2e-tests-downward-api-srrj4" to be "success or failure" Jun 8 12:40:17.020: INFO: Pod "downward-api-34d085a5-a985-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 9.032012ms Jun 8 12:40:19.023: INFO: Pod "downward-api-34d085a5-a985-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012578213s Jun 8 12:40:21.028: INFO: Pod "downward-api-34d085a5-a985-11ea-978f-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.017136238s Jun 8 12:40:23.032: INFO: Pod "downward-api-34d085a5-a985-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.021824423s STEP: Saw pod success Jun 8 12:40:23.032: INFO: Pod "downward-api-34d085a5-a985-11ea-978f-0242ac110018" satisfied condition "success or failure" Jun 8 12:40:23.036: INFO: Trying to get logs from node hunter-worker2 pod downward-api-34d085a5-a985-11ea-978f-0242ac110018 container dapi-container: STEP: delete the pod Jun 8 12:40:23.063: INFO: Waiting for pod downward-api-34d085a5-a985-11ea-978f-0242ac110018 to disappear Jun 8 12:40:23.069: INFO: Pod downward-api-34d085a5-a985-11ea-978f-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:40:23.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-srrj4" for this suite. Jun 8 12:40:29.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:40:29.189: INFO: namespace: e2e-tests-downward-api-srrj4, resource: bindings, ignored listing per whitelist Jun 8 12:40:29.190: INFO: namespace e2e-tests-downward-api-srrj4 deletion completed in 6.117026376s • [SLOW TEST:12.365 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client Jun 8 12:40:29.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jun 8 12:40:37.406: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 8 12:40:37.415: INFO: Pod pod-with-prestop-exec-hook still exists Jun 8 12:40:39.415: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 8 12:40:39.418: INFO: Pod pod-with-prestop-exec-hook still exists Jun 8 12:40:41.415: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 8 12:40:41.419: INFO: Pod pod-with-prestop-exec-hook still exists Jun 8 12:40:43.415: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 8 12:40:43.419: INFO: Pod pod-with-prestop-exec-hook still exists Jun 8 12:40:45.415: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 8 12:40:45.419: INFO: Pod pod-with-prestop-exec-hook still exists Jun 8 12:40:47.415: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 8 12:40:47.419: INFO: Pod pod-with-prestop-exec-hook still exists Jun 8 12:40:49.415: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 8 12:40:49.418: INFO: Pod pod-with-prestop-exec-hook still exists Jun 8 12:40:51.415: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 8 12:40:51.434: INFO: Pod pod-with-prestop-exec-hook still exists Jun 8 12:40:53.415: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear 
Jun 8 12:40:53.419: INFO: Pod pod-with-prestop-exec-hook still exists Jun 8 12:40:55.415: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 8 12:40:55.419: INFO: Pod pod-with-prestop-exec-hook still exists Jun 8 12:40:57.415: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 8 12:40:57.419: INFO: Pod pod-with-prestop-exec-hook still exists Jun 8 12:40:59.415: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 8 12:40:59.428: INFO: Pod pod-with-prestop-exec-hook still exists Jun 8 12:41:01.415: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 8 12:41:01.419: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 8 12:41:01.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-f8rdg" for this suite. 
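The preStop hook exercised above is declared on the container's lifecycle; the long run of "still exists" polling reflects the kubelet executing the hook before the container is killed. A minimal sketch (the image and handler command are illustrative assumptions):

```yaml
spec:
  containers:
  - name: pod-with-prestop-exec-hook
    image: busybox                           # illustrative
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 10"]  # illustrative handler; runs before SIGTERM
```

The handler must finish (or the grace period must expire) before the pod can leave the `Terminating` state, which is why deletion here takes well over twenty seconds.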
Jun 8 12:41:23.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 8 12:41:23.481: INFO: namespace: e2e-tests-container-lifecycle-hook-f8rdg, resource: bindings, ignored listing per whitelist Jun 8 12:41:23.542: INFO: namespace e2e-tests-container-lifecycle-hook-f8rdg deletion completed in 22.111015884s • [SLOW TEST:54.352 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 8 12:41:23.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Jun 8 12:41:23.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
create -f - --namespace=e2e-tests-kubectl-jwvb4' Jun 8 12:41:26.356: INFO: stderr: "" Jun 8 12:41:26.356: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 8 12:41:26.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jwvb4' Jun 8 12:41:26.516: INFO: stderr: "" Jun 8 12:41:26.516: INFO: stdout: "update-demo-nautilus-h49dc update-demo-nautilus-wpxrr " Jun 8 12:41:26.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h49dc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jwvb4' Jun 8 12:41:26.615: INFO: stderr: "" Jun 8 12:41:26.615: INFO: stdout: "" Jun 8 12:41:26.615: INFO: update-demo-nautilus-h49dc is created but not running Jun 8 12:41:31.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jwvb4' Jun 8 12:41:31.735: INFO: stderr: "" Jun 8 12:41:31.736: INFO: stdout: "update-demo-nautilus-h49dc update-demo-nautilus-wpxrr " Jun 8 12:41:31.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h49dc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jwvb4' Jun 8 12:41:31.831: INFO: stderr: "" Jun 8 12:41:31.831: INFO: stdout: "true" Jun 8 12:41:31.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h49dc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jwvb4' Jun 8 12:41:31.923: INFO: stderr: "" Jun 8 12:41:31.923: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 8 12:41:31.923: INFO: validating pod update-demo-nautilus-h49dc Jun 8 12:41:31.927: INFO: got data: { "image": "nautilus.jpg" } Jun 8 12:41:31.927: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 8 12:41:31.927: INFO: update-demo-nautilus-h49dc is verified up and running Jun 8 12:41:31.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wpxrr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jwvb4' Jun 8 12:41:32.032: INFO: stderr: "" Jun 8 12:41:32.032: INFO: stdout: "true" Jun 8 12:41:32.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wpxrr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jwvb4' Jun 8 12:41:32.127: INFO: stderr: "" Jun 8 12:41:32.127: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 8 12:41:32.127: INFO: validating pod update-demo-nautilus-wpxrr Jun 8 12:41:32.131: INFO: got data: { "image": "nautilus.jpg" } Jun 8 12:41:32.131: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jun 8 12:41:32.131: INFO: update-demo-nautilus-wpxrr is verified up and running
STEP: scaling down the replication controller
Jun 8 12:41:32.133: INFO: scanned /root for discovery docs:
Jun 8 12:41:32.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-jwvb4'
Jun 8 12:41:33.268: INFO: stderr: ""
Jun 8 12:41:33.269: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jun 8 12:41:33.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jwvb4'
Jun 8 12:41:33.372: INFO: stderr: ""
Jun 8 12:41:33.372: INFO: stdout: "update-demo-nautilus-h49dc update-demo-nautilus-wpxrr "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jun 8 12:41:38.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jwvb4'
Jun 8 12:41:38.489: INFO: stderr: ""
Jun 8 12:41:38.489: INFO: stdout: "update-demo-nautilus-h49dc update-demo-nautilus-wpxrr "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jun 8 12:41:43.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jwvb4'
Jun 8 12:41:43.601: INFO: stderr: ""
Jun 8 12:41:43.601: INFO: stdout: "update-demo-nautilus-wpxrr "
Jun 8 12:41:43.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wpxrr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jwvb4'
Jun 8 12:41:43.717: INFO: stderr: ""
Jun 8 12:41:43.717: INFO: stdout: "true"
Jun 8 12:41:43.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wpxrr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jwvb4'
Jun 8 12:41:43.825: INFO: stderr: ""
Jun 8 12:41:43.825: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 8 12:41:43.825: INFO: validating pod update-demo-nautilus-wpxrr
Jun 8 12:41:43.828: INFO: got data: { "image": "nautilus.jpg" }
Jun 8 12:41:43.828: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 8 12:41:43.828: INFO: update-demo-nautilus-wpxrr is verified up and running
STEP: scaling up the replication controller
Jun 8 12:41:43.830: INFO: scanned /root for discovery docs:
Jun 8 12:41:43.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-jwvb4'
Jun 8 12:41:44.966: INFO: stderr: ""
Jun 8 12:41:44.966: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jun 8 12:41:44.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jwvb4'
Jun 8 12:41:45.066: INFO: stderr: ""
Jun 8 12:41:45.066: INFO: stdout: "update-demo-nautilus-5zk5w update-demo-nautilus-wpxrr "
Jun 8 12:41:45.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5zk5w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jwvb4'
Jun 8 12:41:45.184: INFO: stderr: ""
Jun 8 12:41:45.184: INFO: stdout: ""
Jun 8 12:41:45.184: INFO: update-demo-nautilus-5zk5w is created but not running
Jun 8 12:41:50.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jwvb4'
Jun 8 12:41:50.282: INFO: stderr: ""
Jun 8 12:41:50.282: INFO: stdout: "update-demo-nautilus-5zk5w update-demo-nautilus-wpxrr "
Jun 8 12:41:50.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5zk5w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jwvb4'
Jun 8 12:41:50.387: INFO: stderr: ""
Jun 8 12:41:50.387: INFO: stdout: "true"
Jun 8 12:41:50.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5zk5w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jwvb4'
Jun 8 12:41:50.486: INFO: stderr: ""
Jun 8 12:41:50.486: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 8 12:41:50.486: INFO: validating pod update-demo-nautilus-5zk5w
Jun 8 12:41:50.491: INFO: got data: { "image": "nautilus.jpg" }
Jun 8 12:41:50.491: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 8 12:41:50.491: INFO: update-demo-nautilus-5zk5w is verified up and running
Jun 8 12:41:50.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wpxrr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jwvb4'
Jun 8 12:41:50.581: INFO: stderr: ""
Jun 8 12:41:50.581: INFO: stdout: "true"
Jun 8 12:41:50.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wpxrr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jwvb4'
Jun 8 12:41:50.679: INFO: stderr: ""
Jun 8 12:41:50.679: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 8 12:41:50.679: INFO: validating pod update-demo-nautilus-wpxrr
Jun 8 12:41:50.683: INFO: got data: { "image": "nautilus.jpg" }
Jun 8 12:41:50.683: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 8 12:41:50.683: INFO: update-demo-nautilus-wpxrr is verified up and running
STEP: using delete to clean up resources
Jun 8 12:41:50.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-jwvb4'
Jun 8 12:41:50.792: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 8 12:41:50.792: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jun 8 12:41:50.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-jwvb4'
Jun 8 12:41:50.894: INFO: stderr: "No resources found.\n"
Jun 8 12:41:50.894: INFO: stdout: ""
Jun 8 12:41:50.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-jwvb4 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jun 8 12:41:51.089: INFO: stderr: ""
Jun 8 12:41:51.090: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 12:41:51.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jwvb4" for this suite.
Jun 8 12:42:13.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 12:42:13.157: INFO: namespace: e2e-tests-kubectl-jwvb4, resource: bindings, ignored listing per whitelist
Jun 8 12:42:13.214: INFO: namespace e2e-tests-kubectl-jwvb4 deletion completed in 22.120465743s

• [SLOW TEST:49.672 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 8 12:42:13.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Jun 8 12:42:13.332: INFO: Waiting up to 5m0s for pod "var-expansion-7a254ed9-a985-11ea-978f-0242ac110018" in namespace "e2e-tests-var-expansion-5btzh" to be "success or failure"
Jun 8 12:42:13.335: INFO: Pod "var-expansion-7a254ed9-a985-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.003051ms
Jun 8 12:42:15.429: INFO: Pod "var-expansion-7a254ed9-a985-11ea-978f-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096684822s
Jun 8 12:42:17.432: INFO: Pod "var-expansion-7a254ed9-a985-11ea-978f-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.099842406s
STEP: Saw pod success
Jun 8 12:42:17.432: INFO: Pod "var-expansion-7a254ed9-a985-11ea-978f-0242ac110018" satisfied condition "success or failure"
Jun 8 12:42:17.435: INFO: Trying to get logs from node hunter-worker pod var-expansion-7a254ed9-a985-11ea-978f-0242ac110018 container dapi-container:
STEP: delete the pod
Jun 8 12:42:17.454: INFO: Waiting for pod var-expansion-7a254ed9-a985-11ea-978f-0242ac110018 to disappear
Jun 8 12:42:17.458: INFO: Pod var-expansion-7a254ed9-a985-11ea-978f-0242ac110018 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 8 12:42:17.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-5btzh" for this suite.
Jun 8 12:42:23.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 8 12:42:23.581: INFO: namespace: e2e-tests-var-expansion-5btzh, resource: bindings, ignored listing per whitelist
Jun 8 12:42:23.588: INFO: namespace e2e-tests-var-expansion-5btzh deletion completed in 6.127205503s

• [SLOW TEST:10.373 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
Jun 8 12:42:23.588: INFO: Running AfterSuite actions on all nodes
Jun 8 12:42:23.588: INFO: Running AfterSuite actions on node 1
Jun 8 12:42:23.588: INFO: Skipping dumping logs from cluster

Ran 200 of 2164 Specs in 6465.076 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS