I0501 14:53:52.859051 6 e2e.go:224] Starting e2e run "9296d13e-8bbb-11ea-acf7-0242ac110017" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1588344832 - Will randomize all specs
Will run 201 of 2164 specs
May 1 14:53:53.059: INFO: >>> kubeConfig: /root/.kube/config
May 1 14:53:53.063: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 1 14:53:53.078: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 1 14:53:53.111: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 1 14:53:53.111: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 1 14:53:53.111: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 1 14:53:53.119: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 1 14:53:53.119: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 1 14:53:53.119: INFO: e2e test version: v1.13.12
May 1 14:53:53.120: INFO: kube-apiserver version: v1.13.12
SS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 14:53:53.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
May 1 14:53:53.321: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 1 14:53:53.330: INFO: Waiting up to 5m0s for pod "downwardapi-volume-93373230-8bbb-11ea-acf7-0242ac110017" in namespace "e2e-tests-downward-api-pfvhd" to be "success or failure"
May 1 14:53:53.348: INFO: Pod "downwardapi-volume-93373230-8bbb-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 18.053511ms
May 1 14:53:55.352: INFO: Pod "downwardapi-volume-93373230-8bbb-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021635843s
May 1 14:53:57.355: INFO: Pod "downwardapi-volume-93373230-8bbb-11ea-acf7-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.024707168s
May 1 14:53:59.359: INFO: Pod "downwardapi-volume-93373230-8bbb-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028767003s
STEP: Saw pod success
May 1 14:53:59.359: INFO: Pod "downwardapi-volume-93373230-8bbb-11ea-acf7-0242ac110017" satisfied condition "success or failure"
May 1 14:53:59.362: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-93373230-8bbb-11ea-acf7-0242ac110017 container client-container:
STEP: delete the pod
May 1 14:53:59.440: INFO: Waiting for pod downwardapi-volume-93373230-8bbb-11ea-acf7-0242ac110017 to disappear
May 1 14:53:59.495: INFO: Pod downwardapi-volume-93373230-8bbb-11ea-acf7-0242ac110017 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 14:53:59.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-pfvhd" for this suite.
May 1 14:54:05.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 14:54:05.527: INFO: namespace: e2e-tests-downward-api-pfvhd, resource: bindings, ignored listing per whitelist
May 1 14:54:05.594: INFO: namespace e2e-tests-downward-api-pfvhd deletion completed in 6.095296716s
• [SLOW TEST:12.474 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 14:54:05.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-9a98308d-8bbb-11ea-acf7-0242ac110017
STEP: Creating secret with name s-test-opt-upd-9a9830e9-8bbb-11ea-acf7-0242ac110017
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-9a98308d-8bbb-11ea-acf7-0242ac110017
STEP: Updating secret s-test-opt-upd-9a9830e9-8bbb-11ea-acf7-0242ac110017
STEP: Creating secret with name s-test-opt-create-9a983109-8bbb-11ea-acf7-0242ac110017
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 14:55:55.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tl65t" for this suite.
May 1 14:56:23.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 14:56:23.656: INFO: namespace: e2e-tests-projected-tl65t, resource: bindings, ignored listing per whitelist May 1 14:56:23.658: INFO: namespace e2e-tests-projected-tl65t deletion completed in 28.12242881s • [SLOW TEST:138.063 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 14:56:23.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-d8sm6 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-d8sm6 STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-d8sm6 May 1 14:56:23.807: INFO: Found 0 stateful pods, waiting for 1 May 1 14:56:33.811: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 1 14:56:33.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-d8sm6 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 1 14:56:34.088: INFO: stderr: "I0501 14:56:33.955572 36 log.go:172] (0xc00015c790) (0xc0005cf360) Create stream\nI0501 14:56:33.955640 36 log.go:172] (0xc00015c790) (0xc0005cf360) Stream added, broadcasting: 1\nI0501 14:56:33.958344 36 log.go:172] (0xc00015c790) Reply frame received for 1\nI0501 14:56:33.958379 36 log.go:172] (0xc00015c790) (0xc0005cf400) Create stream\nI0501 14:56:33.958390 36 log.go:172] (0xc00015c790) (0xc0005cf400) Stream added, broadcasting: 3\nI0501 14:56:33.959162 36 log.go:172] (0xc00015c790) Reply frame received for 3\nI0501 14:56:33.959191 36 log.go:172] (0xc00015c790) (0xc0006ba000) Create stream\nI0501 14:56:33.959200 36 log.go:172] (0xc00015c790) (0xc0006ba000) Stream added, broadcasting: 5\nI0501 14:56:33.960011 36 log.go:172] (0xc00015c790) Reply frame received for 5\nI0501 14:56:34.078314 36 log.go:172] (0xc00015c790) Data frame received for 3\nI0501 
14:56:34.078348 36 log.go:172] (0xc0005cf400) (3) Data frame handling\nI0501 14:56:34.078371 36 log.go:172] (0xc0005cf400) (3) Data frame sent\nI0501 14:56:34.082103 36 log.go:172] (0xc00015c790) Data frame received for 3\nI0501 14:56:34.082125 36 log.go:172] (0xc0005cf400) (3) Data frame handling\nI0501 14:56:34.082150 36 log.go:172] (0xc00015c790) Data frame received for 5\nI0501 14:56:34.082159 36 log.go:172] (0xc0006ba000) (5) Data frame handling\nI0501 14:56:34.083852 36 log.go:172] (0xc00015c790) Data frame received for 1\nI0501 14:56:34.083891 36 log.go:172] (0xc0005cf360) (1) Data frame handling\nI0501 14:56:34.083939 36 log.go:172] (0xc0005cf360) (1) Data frame sent\nI0501 14:56:34.083990 36 log.go:172] (0xc00015c790) (0xc0005cf360) Stream removed, broadcasting: 1\nI0501 14:56:34.084015 36 log.go:172] (0xc00015c790) Go away received\nI0501 14:56:34.084237 36 log.go:172] (0xc00015c790) (0xc0005cf360) Stream removed, broadcasting: 1\nI0501 14:56:34.084271 36 log.go:172] (0xc00015c790) (0xc0005cf400) Stream removed, broadcasting: 3\nI0501 14:56:34.084287 36 log.go:172] (0xc00015c790) (0xc0006ba000) Stream removed, broadcasting: 5\n" May 1 14:56:34.088: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 1 14:56:34.088: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 1 14:56:34.092: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 1 14:56:44.097: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 1 14:56:44.097: INFO: Waiting for statefulset status.replicas updated to 0 May 1 14:56:44.349: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999481s May 1 14:56:45.353: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.759325527s May 1 14:56:46.358: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.754900377s May 1 14:56:47.363: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.75016449s May 1 14:56:48.368: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.745452185s May 1 14:56:49.373: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.740540457s May 1 14:56:50.377: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.735060089s May 1 14:56:51.382: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.730826089s May 1 14:56:52.387: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.726256142s May 1 14:56:53.392: INFO: Verifying statefulset ss doesn't scale past 1 for another 721.571044ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-d8sm6 May 1 14:56:54.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-d8sm6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 1 14:56:54.612: INFO: stderr: "I0501 14:56:54.527561 59 log.go:172] (0xc0008282c0) (0xc000738640) Create stream\nI0501 14:56:54.527625 59 log.go:172] (0xc0008282c0) (0xc000738640) Stream added, broadcasting: 1\nI0501 14:56:54.530245 59 log.go:172] (0xc0008282c0) Reply frame received for 1\nI0501 14:56:54.530282 59 log.go:172] (0xc0008282c0) (0xc00061ec80) Create stream\nI0501 14:56:54.530295 59 log.go:172] (0xc0008282c0) (0xc00061ec80) Stream added, broadcasting: 3\nI0501 14:56:54.531214 59 log.go:172] (0xc0008282c0) Reply frame 
received for 3\nI0501 14:56:54.531253 59 log.go:172] (0xc0008282c0) (0xc0007386e0) Create stream\nI0501 14:56:54.531266 59 log.go:172] (0xc0008282c0) (0xc0007386e0) Stream added, broadcasting: 5\nI0501 14:56:54.532374 59 log.go:172] (0xc0008282c0) Reply frame received for 5\nI0501 14:56:54.606749 59 log.go:172] (0xc0008282c0) Data frame received for 5\nI0501 14:56:54.606788 59 log.go:172] (0xc0007386e0) (5) Data frame handling\nI0501 14:56:54.606839 59 log.go:172] (0xc0008282c0) Data frame received for 3\nI0501 14:56:54.606875 59 log.go:172] (0xc00061ec80) (3) Data frame handling\nI0501 14:56:54.606890 59 log.go:172] (0xc00061ec80) (3) Data frame sent\nI0501 14:56:54.606900 59 log.go:172] (0xc0008282c0) Data frame received for 3\nI0501 14:56:54.606909 59 log.go:172] (0xc00061ec80) (3) Data frame handling\nI0501 14:56:54.608345 59 log.go:172] (0xc0008282c0) Data frame received for 1\nI0501 14:56:54.608362 59 log.go:172] (0xc000738640) (1) Data frame handling\nI0501 14:56:54.608369 59 log.go:172] (0xc000738640) (1) Data frame sent\nI0501 14:56:54.608386 59 log.go:172] (0xc0008282c0) (0xc000738640) Stream removed, broadcasting: 1\nI0501 14:56:54.608398 59 log.go:172] (0xc0008282c0) Go away received\nI0501 14:56:54.608660 59 log.go:172] (0xc0008282c0) (0xc000738640) Stream removed, broadcasting: 1\nI0501 14:56:54.608686 59 log.go:172] (0xc0008282c0) (0xc00061ec80) Stream removed, broadcasting: 3\nI0501 14:56:54.608698 59 log.go:172] (0xc0008282c0) (0xc0007386e0) Stream removed, broadcasting: 5\n" May 1 14:56:54.612: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 1 14:56:54.612: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 1 14:56:54.626: INFO: Found 1 stateful pods, waiting for 3 May 1 14:57:04.631: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 1 14:57:04.631: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 1 14:57:04.631: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 1 14:57:04.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-d8sm6 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 1 14:57:04.870: INFO: stderr: "I0501 14:57:04.770700 81 log.go:172] (0xc0008322c0) (0xc00072c640) Create stream\nI0501 14:57:04.770759 81 log.go:172] (0xc0008322c0) (0xc00072c640) Stream added, broadcasting: 1\nI0501 14:57:04.772967 81 log.go:172] (0xc0008322c0) Reply frame received for 1\nI0501 14:57:04.773002 81 log.go:172] (0xc0008322c0) (0xc00072c6e0) Create stream\nI0501 14:57:04.773013 81 log.go:172] (0xc0008322c0) (0xc00072c6e0) Stream added, broadcasting: 3\nI0501 14:57:04.773874 81 log.go:172] (0xc0008322c0) Reply frame received for 3\nI0501 14:57:04.773904 81 log.go:172] (0xc0008322c0) (0xc0005a2dc0) Create stream\nI0501 14:57:04.773914 81 log.go:172] (0xc0008322c0) (0xc0005a2dc0) Stream added, broadcasting: 5\nI0501 14:57:04.774670 81 log.go:172] (0xc0008322c0) Reply frame received for 5\nI0501 14:57:04.865511 81 log.go:172] (0xc0008322c0) Data frame received for 5\nI0501 14:57:04.865561 81 log.go:172] (0xc0005a2dc0) (5) Data frame handling\nI0501 14:57:04.865589 81 log.go:172] (0xc0008322c0) Data frame received for 3\nI0501 14:57:04.865601 81 
log.go:172] (0xc00072c6e0) (3) Data frame handling\nI0501 14:57:04.865616 81 log.go:172] (0xc00072c6e0) (3) Data frame sent\nI0501 14:57:04.865629 81 log.go:172] (0xc0008322c0) Data frame received for 3\nI0501 14:57:04.865649 81 log.go:172] (0xc00072c6e0) (3) Data frame handling\nI0501 14:57:04.866592 81 log.go:172] (0xc0008322c0) Data frame received for 1\nI0501 14:57:04.866610 81 log.go:172] (0xc00072c640) (1) Data frame handling\nI0501 14:57:04.866623 81 log.go:172] (0xc00072c640) (1) Data frame sent\nI0501 14:57:04.866636 81 log.go:172] (0xc0008322c0) (0xc00072c640) Stream removed, broadcasting: 1\nI0501 14:57:04.866649 81 log.go:172] (0xc0008322c0) Go away received\nI0501 14:57:04.866862 81 log.go:172] (0xc0008322c0) (0xc00072c640) Stream removed, broadcasting: 1\nI0501 14:57:04.866880 81 log.go:172] (0xc0008322c0) (0xc00072c6e0) Stream removed, broadcasting: 3\nI0501 14:57:04.866893 81 log.go:172] (0xc0008322c0) (0xc0005a2dc0) Stream removed, broadcasting: 5\n" May 1 14:57:04.870: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 1 14:57:04.870: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 1 14:57:04.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-d8sm6 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 1 14:57:05.110: INFO: stderr: "I0501 14:57:04.998887 103 log.go:172] (0xc0003a02c0) (0xc00020d540) Create stream\nI0501 14:57:04.998966 103 log.go:172] (0xc0003a02c0) (0xc00020d540) Stream added, broadcasting: 1\nI0501 14:57:05.001501 103 log.go:172] (0xc0003a02c0) Reply frame received for 1\nI0501 14:57:05.001589 103 log.go:172] (0xc0003a02c0) (0xc0008d2000) Create stream\nI0501 14:57:05.001621 103 log.go:172] (0xc0003a02c0) (0xc0008d2000) Stream added, broadcasting: 3\nI0501 14:57:05.002606 103 log.go:172] (0xc0003a02c0) Reply frame received for 3\nI0501 14:57:05.002671 103 log.go:172] (0xc0003a02c0) (0xc00020d5e0) Create stream\nI0501 14:57:05.002700 103 log.go:172] (0xc0003a02c0) (0xc00020d5e0) Stream added, broadcasting: 5\nI0501 14:57:05.003633 103 log.go:172] (0xc0003a02c0) Reply frame received for 5\nI0501 14:57:05.104129 103 log.go:172] (0xc0003a02c0) Data frame received for 3\nI0501 14:57:05.104156 103 log.go:172] (0xc0008d2000) (3) Data frame handling\nI0501 14:57:05.104187 103 log.go:172] (0xc0008d2000) (3) Data frame sent\nI0501 14:57:05.105008 103 log.go:172] (0xc0003a02c0) Data frame received for 3\nI0501 14:57:05.105058 103 log.go:172] (0xc0008d2000) (3) Data frame handling\nI0501 14:57:05.105095 103 log.go:172] (0xc0003a02c0) Data frame received for 5\nI0501 14:57:05.105327 103 log.go:172] (0xc00020d5e0) (5) Data frame handling\nI0501 14:57:05.106749 103 log.go:172] (0xc0003a02c0) Data frame received for 1\nI0501 14:57:05.106770 103 log.go:172] (0xc00020d540) (1) Data frame handling\nI0501 14:57:05.106789 103 log.go:172] (0xc00020d540) (1) Data frame sent\nI0501 14:57:05.106816 103 log.go:172] (0xc0003a02c0) (0xc00020d540) Stream removed, broadcasting: 1\nI0501 14:57:05.106955 103 log.go:172] (0xc0003a02c0) Go away received\nI0501 14:57:05.107005 103 log.go:172] (0xc0003a02c0) (0xc00020d540) Stream removed, broadcasting: 1\nI0501 14:57:05.107041 103 log.go:172] (0xc0003a02c0) (0xc0008d2000) Stream removed, broadcasting: 3\nI0501 14:57:05.107057 103 log.go:172] (0xc0003a02c0) (0xc00020d5e0) Stream removed, broadcasting: 5\n" May 1 14:57:05.110: INFO: stdout: 
"'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 1 14:57:05.110: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 1 14:57:05.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-d8sm6 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 1 14:57:05.444: INFO: stderr: "I0501 14:57:05.340679 126 log.go:172] (0xc00015c580) (0xc00071e5a0) Create stream\nI0501 14:57:05.340808 126 log.go:172] (0xc00015c580) (0xc00071e5a0) Stream added, broadcasting: 1\nI0501 14:57:05.343711 126 log.go:172] (0xc00015c580) Reply frame received for 1\nI0501 14:57:05.343763 126 log.go:172] (0xc00015c580) (0xc0005ecc80) Create stream\nI0501 14:57:05.343775 126 log.go:172] (0xc00015c580) (0xc0005ecc80) Stream added, broadcasting: 3\nI0501 14:57:05.348715 126 log.go:172] (0xc00015c580) Reply frame received for 3\nI0501 14:57:05.348765 126 log.go:172] (0xc00015c580) (0xc0006c0000) Create stream\nI0501 14:57:05.348778 126 log.go:172] (0xc00015c580) (0xc0006c0000) Stream added, broadcasting: 5\nI0501 14:57:05.350101 126 log.go:172] (0xc00015c580) Reply frame received for 5\nI0501 14:57:05.438312 126 log.go:172] (0xc00015c580) Data frame received for 3\nI0501 14:57:05.438354 126 log.go:172] (0xc0005ecc80) (3) Data frame handling\nI0501 14:57:05.438371 126 log.go:172] (0xc0005ecc80) (3) Data frame sent\nI0501 14:57:05.438449 126 log.go:172] (0xc00015c580) Data frame received for 3\nI0501 14:57:05.438473 126 log.go:172] (0xc0005ecc80) (3) Data frame handling\nI0501 14:57:05.438491 126 log.go:172] (0xc00015c580) Data frame received for 5\nI0501 14:57:05.438508 126 log.go:172] (0xc0006c0000) (5) Data frame handling\nI0501 14:57:05.440323 126 log.go:172] (0xc00015c580) Data frame received for 1\nI0501 14:57:05.440347 126 log.go:172] (0xc00071e5a0) (1) Data frame handling\nI0501 14:57:05.440364 126 log.go:172] (0xc00071e5a0) (1) Data frame sent\nI0501 14:57:05.440382 126 log.go:172] (0xc00015c580) (0xc00071e5a0) Stream removed, broadcasting: 1\nI0501 14:57:05.440415 126 log.go:172] (0xc00015c580) Go away received\nI0501 14:57:05.440587 126 log.go:172] (0xc00015c580) (0xc00071e5a0) Stream removed, broadcasting: 1\nI0501 14:57:05.440614 126 log.go:172] (0xc00015c580) (0xc0005ecc80) Stream removed, broadcasting: 3\nI0501 14:57:05.440628 126 log.go:172] (0xc00015c580) (0xc0006c0000) Stream removed, broadcasting: 5\n" May 1 14:57:05.444: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 1 14:57:05.444: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 1 14:57:05.444: INFO: Waiting for statefulset status.replicas updated to 0 May 1 14:57:05.448: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 1 14:57:15.503: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 1 14:57:15.503: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 1 14:57:15.503: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 1 14:57:15.547: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999645s May 1 14:57:16.552: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.962259361s May 1 14:57:17.558: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.956802699s May 
1 14:57:18.571: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.950966859s May 1 14:57:19.734: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.938233901s May 1 14:57:20.750: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.775466555s May 1 14:57:21.756: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.758688538s May 1 14:57:22.760: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.752770837s May 1 14:57:23.766: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.749181835s May 1 14:57:24.847: INFO: Verifying statefulset ss doesn't scale past 3 for another 743.112931ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-d8sm6 May 1 14:57:25.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-d8sm6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 1 14:57:26.171: INFO: stderr: "I0501 14:57:26.109565 149 log.go:172] (0xc0007a00b0) (0xc000366820) Create stream\nI0501 14:57:26.109655 149 log.go:172] (0xc0007a00b0) (0xc000366820) Stream added, broadcasting: 1\nI0501 14:57:26.111849 149 log.go:172] (0xc0007a00b0) Reply frame received for 1\nI0501 14:57:26.111892 149 log.go:172] (0xc0007a00b0) (0xc0006a0000) Create stream\nI0501 14:57:26.111903 149 log.go:172] (0xc0007a00b0) (0xc0006a0000) Stream added, broadcasting: 3\nI0501 14:57:26.112741 149 log.go:172] (0xc0007a00b0) Reply frame received for 3\nI0501 14:57:26.112775 149 log.go:172] (0xc0007a00b0) (0xc00037ad20) Create stream\nI0501 14:57:26.112791 149 log.go:172] (0xc0007a00b0) (0xc00037ad20) Stream added, broadcasting: 5\nI0501 14:57:26.113757 149 log.go:172] (0xc0007a00b0) Reply frame received for 5\nI0501 14:57:26.165608 149 log.go:172] (0xc0007a00b0) Data frame received for 3\nI0501 14:57:26.165646 149 log.go:172] (0xc0006a0000) (3) Data frame handling\nI0501 14:57:26.165668 149 log.go:172] (0xc0006a0000) (3) Data frame sent\nI0501 14:57:26.165679 149 log.go:172] (0xc0007a00b0) Data frame received for 3\nI0501 14:57:26.165690 149 log.go:172] (0xc0006a0000) (3) Data frame handling\nI0501 14:57:26.165707 149 log.go:172] (0xc0007a00b0) Data frame received for 5\nI0501 14:57:26.165720 149 log.go:172] (0xc00037ad20) (5) Data frame handling\nI0501 14:57:26.167063 149 log.go:172] (0xc0007a00b0) Data frame received for 1\nI0501 14:57:26.167093 149 log.go:172] (0xc000366820) (1) Data frame handling\nI0501 14:57:26.167109 149 log.go:172] (0xc000366820) (1) Data frame sent\nI0501 14:57:26.167122 149 log.go:172] (0xc0007a00b0) (0xc000366820) Stream removed, broadcasting: 1\nI0501 14:57:26.167138 149 log.go:172] (0xc0007a00b0) Go away received\nI0501 14:57:26.167399 149 log.go:172] (0xc0007a00b0) (0xc000366820) Stream removed, broadcasting: 1\nI0501 14:57:26.167418 149 log.go:172] (0xc0007a00b0) (0xc0006a0000) Stream removed, broadcasting: 3\nI0501 14:57:26.167427 149 log.go:172] (0xc0007a00b0) (0xc00037ad20) Stream removed, broadcasting: 5\n" May 1 14:57:26.171: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 1 14:57:26.171: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 1 14:57:26.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-d8sm6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 1 
14:57:26.400: INFO: stderr: "I0501 14:57:26.331031 171 log.go:172] (0xc000138790) (0xc000738640) Create stream\nI0501 14:57:26.331084 171 log.go:172] (0xc000138790) (0xc000738640) Stream added, broadcasting: 1\nI0501 14:57:26.332719 171 log.go:172] (0xc000138790) Reply frame received for 1\nI0501 14:57:26.332778 171 log.go:172] (0xc000138790) (0xc000606c80) Create stream\nI0501 14:57:26.332794 171 log.go:172] (0xc000138790) (0xc000606c80) Stream added, broadcasting: 3\nI0501 14:57:26.333649 171 log.go:172] (0xc000138790) Reply frame received for 3\nI0501 14:57:26.333677 171 log.go:172] (0xc000138790) (0xc0007386e0) Create stream\nI0501 14:57:26.333684 171 log.go:172] (0xc000138790) (0xc0007386e0) Stream added, broadcasting: 5\nI0501 14:57:26.334264 171 log.go:172] (0xc000138790) Reply frame received for 5\nI0501 14:57:26.394819 171 log.go:172] (0xc000138790) Data frame received for 5\nI0501 14:57:26.394843 171 log.go:172] (0xc0007386e0) (5) Data frame handling\nI0501 14:57:26.394867 171 log.go:172] (0xc000138790) Data frame received for 3\nI0501 14:57:26.394887 171 log.go:172] (0xc000606c80) (3) Data frame handling\nI0501 14:57:26.394905 171 log.go:172] (0xc000606c80) (3) Data frame sent\nI0501 14:57:26.394913 171 log.go:172] (0xc000138790) Data frame received for 3\nI0501 14:57:26.394921 171 log.go:172] (0xc000606c80) (3) Data frame handling\nI0501 14:57:26.396191 171 log.go:172] (0xc000138790) Data frame received for 1\nI0501 14:57:26.396207 171 log.go:172] (0xc000738640) (1) Data frame handling\nI0501 14:57:26.396283 171 log.go:172] (0xc000738640) (1) Data frame sent\nI0501 14:57:26.396302 171 log.go:172] (0xc000138790) (0xc000738640) Stream removed, broadcasting: 1\nI0501 14:57:26.396320 171 log.go:172] (0xc000138790) Go away received\nI0501 14:57:26.396487 171 log.go:172] (0xc000138790) (0xc000738640) Stream removed, broadcasting: 1\nI0501 14:57:26.396506 171 log.go:172] (0xc000138790) (0xc000606c80) Stream removed, broadcasting: 3\nI0501 14:57:26.396516 171 log.go:172] (0xc000138790) (0xc0007386e0) Stream removed, broadcasting: 5\n" May 1 14:57:26.400: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 1 14:57:26.400: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 1 14:57:26.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-d8sm6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 1 14:57:26.581: INFO: stderr: "I0501 14:57:26.514352 193 log.go:172] (0xc000770160) (0xc0006de640) Create stream\nI0501 14:57:26.514401 193 log.go:172] (0xc000770160) (0xc0006de640) Stream added, broadcasting: 1\nI0501 14:57:26.516153 193 log.go:172] (0xc000770160) Reply frame received for 1\nI0501 14:57:26.516191 193 log.go:172] (0xc000770160) (0xc000350e60) Create stream\nI0501 14:57:26.516201 193 log.go:172] (0xc000770160) (0xc000350e60) Stream added, broadcasting: 3\nI0501 14:57:26.516811 193 log.go:172] (0xc000770160) Reply frame received for 3\nI0501 14:57:26.516867 193 log.go:172] (0xc000770160) (0xc0006de6e0) Create stream\nI0501 14:57:26.516883 193 log.go:172] (0xc000770160) (0xc0006de6e0) Stream added, broadcasting: 5\nI0501 14:57:26.517738 193 log.go:172] (0xc000770160) Reply frame received for 5\nI0501 14:57:26.575487 193 log.go:172] (0xc000770160) Data frame received for 5\nI0501 14:57:26.575547 193 log.go:172] (0xc0006de6e0) (5) Data frame handling\nI0501 14:57:26.575593 193 log.go:172] 
(0xc000770160) Data frame received for 3\nI0501 14:57:26.575618 193 log.go:172] (0xc000350e60) (3) Data frame handling\nI0501 14:57:26.575645 193 log.go:172] (0xc000350e60) (3) Data frame sent\nI0501 14:57:26.575688 193 log.go:172] (0xc000770160) Data frame received for 3\nI0501 14:57:26.575714 193 log.go:172] (0xc000350e60) (3) Data frame handling\nI0501 14:57:26.576818 193 log.go:172] (0xc000770160) Data frame received for 1\nI0501 14:57:26.576865 193 log.go:172] (0xc0006de640) (1) Data frame handling\nI0501 14:57:26.576891 193 log.go:172] (0xc0006de640) (1) Data frame sent\nI0501 14:57:26.576921 193 log.go:172] (0xc000770160) (0xc0006de640) Stream removed, broadcasting: 1\nI0501 14:57:26.576948 193 log.go:172] (0xc000770160) Go away received\nI0501 14:57:26.577308 193 log.go:172] (0xc000770160) (0xc0006de640) Stream removed, broadcasting: 1\nI0501 14:57:26.577339 193 log.go:172] (0xc000770160) (0xc000350e60) Stream removed, broadcasting: 3\nI0501 14:57:26.577351 193 log.go:172] (0xc000770160) (0xc0006de6e0) Stream removed, broadcasting: 5\n" May 1 14:57:26.581: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 1 14:57:26.581: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 1 14:57:26.581: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 1 14:57:56.595: INFO: Deleting all statefulset in ns e2e-tests-statefulset-d8sm6 May 1 14:57:56.599: INFO: Scaling statefulset ss to 0 May 1 14:57:56.608: INFO: Waiting for statefulset status.replicas updated to 0 May 1 14:57:56.611: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 14:57:56.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-d8sm6" for this suite. 
May 1 14:58:04.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 14:58:04.807: INFO: namespace: e2e-tests-statefulset-d8sm6, resource: bindings, ignored listing per whitelist May 1 14:58:04.807: INFO: namespace e2e-tests-statefulset-d8sm6 deletion completed in 8.151520045s • [SLOW TEST:101.149 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 14:58:04.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-fp4hn.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-fp4hn.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-fp4hn.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-fp4hn.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-fp4hn.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-fp4hn.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 1 14:58:13.188: INFO: DNS probes using e2e-tests-dns-fp4hn/dns-test-2930b9d0-8bbc-11ea-acf7-0242ac110017 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 14:58:13.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-fp4hn" for this suite. 
May 1 14:58:19.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 14:58:19.944: INFO: namespace: e2e-tests-dns-fp4hn, resource: bindings, ignored listing per whitelist
May 1 14:58:19.998: INFO: namespace e2e-tests-dns-fp4hn deletion completed in 6.379304079s
• [SLOW TEST:15.192 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 14:58:19.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 1 14:58:20.136: INFO: Waiting up to 5m0s for pod "pod-323d1491-8bbc-11ea-acf7-0242ac110017" in namespace "e2e-tests-emptydir-sl5n8" to be "success or failure"
May 1 14:58:20.141: INFO: Pod "pod-323d1491-8bbc-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.939672ms
May 1 14:58:22.144: INFO: Pod "pod-323d1491-8bbc-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008269737s
May 1 14:58:24.148: INFO: Pod "pod-323d1491-8bbc-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012337979s
STEP: Saw pod success
May 1 14:58:24.148: INFO: Pod "pod-323d1491-8bbc-11ea-acf7-0242ac110017" satisfied condition "success or failure"
May 1 14:58:24.151: INFO: Trying to get logs from node hunter-worker pod pod-323d1491-8bbc-11ea-acf7-0242ac110017 container test-container:
STEP: delete the pod
May 1 14:58:24.186: INFO: Waiting for pod pod-323d1491-8bbc-11ea-acf7-0242ac110017 to disappear
May 1 14:58:24.194: INFO: Pod pod-323d1491-8bbc-11ea-acf7-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 14:58:24.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-sl5n8" for this suite.
May 1 14:58:30.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 14:58:30.247: INFO: namespace: e2e-tests-emptydir-sl5n8, resource: bindings, ignored listing per whitelist May 1 14:58:30.291: INFO: namespace e2e-tests-emptydir-sl5n8 deletion completed in 6.092778977s • [SLOW TEST:10.292 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 14:58:30.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 1 14:58:30.683: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-hrzz6,SelfLink:/api/v1/namespaces/e2e-tests-watch-hrzz6/configmaps/e2e-watch-test-label-changed,UID:3864a76e-8bbc-11ea-99e8-0242ac110002,ResourceVersion:8186882,Generation:0,CreationTimestamp:2020-05-01 14:58:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 1 14:58:30.683: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-hrzz6,SelfLink:/api/v1/namespaces/e2e-tests-watch-hrzz6/configmaps/e2e-watch-test-label-changed,UID:3864a76e-8bbc-11ea-99e8-0242ac110002,ResourceVersion:8186883,Generation:0,CreationTimestamp:2020-05-01 14:58:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 1 14:58:30.683: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-hrzz6,SelfLink:/api/v1/namespaces/e2e-tests-watch-hrzz6/configmaps/e2e-watch-test-label-changed,UID:3864a76e-8bbc-11ea-99e8-0242ac110002,ResourceVersion:8186885,Generation:0,CreationTimestamp:2020-05-01 14:58:30 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 1 14:58:40.796: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-hrzz6,SelfLink:/api/v1/namespaces/e2e-tests-watch-hrzz6/configmaps/e2e-watch-test-label-changed,UID:3864a76e-8bbc-11ea-99e8-0242ac110002,ResourceVersion:8186906,Generation:0,CreationTimestamp:2020-05-01 14:58:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 1 14:58:40.796: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-hrzz6,SelfLink:/api/v1/namespaces/e2e-tests-watch-hrzz6/configmaps/e2e-watch-test-label-changed,UID:3864a76e-8bbc-11ea-99e8-0242ac110002,ResourceVersion:8186907,Generation:0,CreationTimestamp:2020-05-01 14:58:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 1 14:58:40.796: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-hrzz6,SelfLink:/api/v1/namespaces/e2e-tests-watch-hrzz6/configmaps/e2e-watch-test-label-changed,UID:3864a76e-8bbc-11ea-99e8-0242ac110002,ResourceVersion:8186908,Generation:0,CreationTimestamp:2020-05-01 14:58:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 14:58:40.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-hrzz6" for this suite. 
May 1 14:58:46.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 14:58:46.943: INFO: namespace: e2e-tests-watch-hrzz6, resource: bindings, ignored listing per whitelist
May 1 14:58:46.998: INFO: namespace e2e-tests-watch-hrzz6 deletion completed in 6.173667649s
• [SLOW TEST:16.707 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job
  should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 14:58:46.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
May 1 14:58:47.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-vqg79'
May 1 14:58:49.719: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 1 14:58:49.719: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
May 1 14:58:49.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-vqg79'
May 1 14:58:49.863: INFO: stderr: ""
May 1 14:58:49.863: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 14:58:49.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vqg79" for this suite.
May 1 14:59:11.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 14:59:11.983: INFO: namespace: e2e-tests-kubectl-vqg79, resource: bindings, ignored listing per whitelist May 1 14:59:12.033: INFO: namespace e2e-tests-kubectl-vqg79 deletion completed in 22.167117326s • [SLOW TEST:25.035 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 14:59:12.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-hmkg9 STEP: creating a selector STEP: Creating the service pods in kubernetes May 1 14:59:12.167: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 1 14:59:42.451: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.204:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-hmkg9 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 14:59:42.451: INFO: >>> kubeConfig: /root/.kube/config I0501 14:59:42.482766 6 log.go:172] (0xc0009256b0) (0xc001ba35e0) Create stream I0501 14:59:42.482794 6 log.go:172] (0xc0009256b0) (0xc001ba35e0) Stream added, broadcasting: 1 I0501 14:59:42.484822 6 log.go:172] (0xc0009256b0) Reply frame received for 1 I0501 14:59:42.484877 6 log.go:172] (0xc0009256b0) (0xc001acfc20) Create stream I0501 14:59:42.484893 6 log.go:172] (0xc0009256b0) (0xc001acfc20) Stream added, broadcasting: 3 I0501 14:59:42.486225 6 log.go:172] (0xc0009256b0) Reply frame received for 3 I0501 14:59:42.486273 6 log.go:172] (0xc0009256b0) (0xc001ba3680) Create stream I0501 14:59:42.486290 6 log.go:172] (0xc0009256b0) (0xc001ba3680) Stream added, broadcasting: 5 I0501 14:59:42.487355 6 log.go:172] (0xc0009256b0) Reply frame received for 5 I0501 14:59:42.616402 6 log.go:172] (0xc0009256b0) Data frame received for 3 I0501 14:59:42.616429 6 log.go:172] (0xc001acfc20) (3) Data frame handling I0501 14:59:42.616444 6 log.go:172] (0xc001acfc20) (3) Data frame sent I0501 14:59:42.616452 6 log.go:172] (0xc0009256b0) Data frame received for 3 I0501 14:59:42.616460 6 log.go:172] (0xc001acfc20) (3) Data frame handling I0501 14:59:42.616511 6 log.go:172] (0xc0009256b0) Data frame received for 5 I0501 14:59:42.616526 6 log.go:172] (0xc001ba3680) (5) 
Data frame handling I0501 14:59:42.618277 6 log.go:172] (0xc0009256b0) Data frame received for 1 I0501 14:59:42.618293 6 log.go:172] (0xc001ba35e0) (1) Data frame handling I0501 14:59:42.618307 6 log.go:172] (0xc001ba35e0) (1) Data frame sent I0501 14:59:42.618417 6 log.go:172] (0xc0009256b0) (0xc001ba35e0) Stream removed, broadcasting: 1 I0501 14:59:42.618448 6 log.go:172] (0xc0009256b0) Go away received I0501 14:59:42.618644 6 log.go:172] (0xc0009256b0) (0xc001ba35e0) Stream removed, broadcasting: 1 I0501 14:59:42.618666 6 log.go:172] (0xc0009256b0) (0xc001acfc20) Stream removed, broadcasting: 3 I0501 14:59:42.618685 6 log.go:172] (0xc0009256b0) (0xc001ba3680) Stream removed, broadcasting: 5 May 1 14:59:42.618: INFO: Found all expected endpoints: [netserver-0] May 1 14:59:42.621: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.238:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-hmkg9 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 14:59:42.621: INFO: >>> kubeConfig: /root/.kube/config I0501 14:59:42.653488 6 log.go:172] (0xc000925ce0) (0xc001ba3a40) Create stream I0501 14:59:42.653520 6 log.go:172] (0xc000925ce0) (0xc001ba3a40) Stream added, broadcasting: 1 I0501 14:59:42.655470 6 log.go:172] (0xc000925ce0) Reply frame received for 1 I0501 14:59:42.655517 6 log.go:172] (0xc000925ce0) (0xc001acfcc0) Create stream I0501 14:59:42.655531 6 log.go:172] (0xc000925ce0) (0xc001acfcc0) Stream added, broadcasting: 3 I0501 14:59:42.656428 6 log.go:172] (0xc000925ce0) Reply frame received for 3 I0501 14:59:42.656462 6 log.go:172] (0xc000925ce0) (0xc001acfd60) Create stream I0501 14:59:42.656476 6 log.go:172] (0xc000925ce0) (0xc001acfd60) Stream added, broadcasting: 5 I0501 14:59:42.657529 6 log.go:172] (0xc000925ce0) Reply frame received for 5 I0501 14:59:42.730208 6 log.go:172] (0xc000925ce0) Data frame received for 3 I0501 14:59:42.730253 6 log.go:172] (0xc001acfcc0) (3) Data frame handling I0501 14:59:42.730270 6 log.go:172] (0xc001acfcc0) (3) Data frame sent I0501 14:59:42.730293 6 log.go:172] (0xc000925ce0) Data frame received for 5 I0501 14:59:42.730303 6 log.go:172] (0xc001acfd60) (5) Data frame handling I0501 14:59:42.730919 6 log.go:172] (0xc000925ce0) Data frame received for 3 I0501 14:59:42.730945 6 log.go:172] (0xc001acfcc0) (3) Data frame handling I0501 14:59:42.731728 6 log.go:172] (0xc000925ce0) Data frame received for 1 I0501 14:59:42.731749 6 log.go:172] (0xc001ba3a40) (1) Data frame handling I0501 14:59:42.731761 6 log.go:172] (0xc001ba3a40) (1) Data frame sent I0501 14:59:42.731774 6 log.go:172] (0xc000925ce0) (0xc001ba3a40) Stream removed, broadcasting: 1 I0501 14:59:42.731847 6 log.go:172] (0xc000925ce0) (0xc001ba3a40) Stream removed, broadcasting: 1 I0501 14:59:42.731855 6 log.go:172] (0xc000925ce0) (0xc001acfcc0) Stream removed, broadcasting: 3 I0501 14:59:42.731860 6 log.go:172] (0xc000925ce0) (0xc001acfd60) Stream removed, broadcasting: 5 May 1 14:59:42.731: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 14:59:42.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0501 14:59:42.732209 6 log.go:172] (0xc000925ce0) Go away received STEP: Destroying namespace "e2e-tests-pod-network-test-hmkg9" for this suite. 
May 1 15:00:04.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:00:04.932: INFO: namespace: e2e-tests-pod-network-test-hmkg9, resource: bindings, ignored listing per whitelist May 1 15:00:05.070: INFO: namespace e2e-tests-pod-network-test-hmkg9 deletion completed in 22.33494254s • [SLOW TEST:53.037 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:00:05.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 1 15:00:16.391: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 15:00:16.417: INFO: Pod pod-with-prestop-exec-hook still exists May 1 15:00:18.417: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 15:00:18.421: INFO: Pod pod-with-prestop-exec-hook still exists May 1 15:00:20.417: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 15:00:20.421: INFO: Pod pod-with-prestop-exec-hook still exists May 1 15:00:22.418: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 15:00:22.422: INFO: Pod pod-with-prestop-exec-hook still exists May 1 15:00:24.418: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 15:00:24.422: INFO: Pod pod-with-prestop-exec-hook still exists May 1 15:00:26.418: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 15:00:26.422: INFO: Pod pod-with-prestop-exec-hook still exists May 1 15:00:28.418: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 15:00:28.423: INFO: Pod pod-with-prestop-exec-hook still exists May 1 15:00:30.418: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 15:00:30.422: INFO: Pod pod-with-prestop-exec-hook still exists May 1 15:00:32.418: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 15:00:32.422: INFO: Pod pod-with-prestop-exec-hook still exists May 1 15:00:34.417: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 15:00:34.421: INFO: Pod pod-with-prestop-exec-hook still exists May 1 15:00:36.418: INFO: 
Waiting for pod pod-with-prestop-exec-hook to disappear May 1 15:00:36.421: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:00:36.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-rfwt4" for this suite. May 1 15:00:58.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:00:58.492: INFO: namespace: e2e-tests-container-lifecycle-hook-rfwt4, resource: bindings, ignored listing per whitelist May 1 15:00:58.519: INFO: namespace e2e-tests-container-lifecycle-hook-rfwt4 deletion completed in 22.08867226s • [SLOW TEST:53.448 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:00:58.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium May 1 15:00:58.717: INFO: Waiting up to 5m0s for pod "pod-90c1db0a-8bbc-11ea-acf7-0242ac110017" in namespace "e2e-tests-emptydir-bgxdt" to be "success or failure" May 1 15:00:58.738: INFO: Pod "pod-90c1db0a-8bbc-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 21.387255ms May 1 15:01:00.742: INFO: Pod "pod-90c1db0a-8bbc-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025568237s May 1 15:01:02.746: INFO: Pod "pod-90c1db0a-8bbc-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029073141s May 1 15:01:04.749: INFO: Pod "pod-90c1db0a-8bbc-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.032581699s STEP: Saw pod success May 1 15:01:04.750: INFO: Pod "pod-90c1db0a-8bbc-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:01:04.752: INFO: Trying to get logs from node hunter-worker pod pod-90c1db0a-8bbc-11ea-acf7-0242ac110017 container test-container: STEP: delete the pod May 1 15:01:04.800: INFO: Waiting for pod pod-90c1db0a-8bbc-11ea-acf7-0242ac110017 to disappear May 1 15:01:04.803: INFO: Pod pod-90c1db0a-8bbc-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:01:04.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-bgxdt" for this suite. May 1 15:01:10.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:01:11.031: INFO: namespace: e2e-tests-emptydir-bgxdt, resource: bindings, ignored listing per whitelist May 1 15:01:11.044: INFO: namespace e2e-tests-emptydir-bgxdt deletion completed in 6.092990398s • [SLOW TEST:12.525 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:01:11.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 1 15:01:11.188: INFO: Waiting up to 5m0s for pod "downwardapi-volume-982f7e95-8bbc-11ea-acf7-0242ac110017" in namespace "e2e-tests-downward-api-668v8" to be "success or failure" May 1 15:01:11.228: INFO: Pod "downwardapi-volume-982f7e95-8bbc-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 39.623126ms May 1 15:01:13.231: INFO: Pod "downwardapi-volume-982f7e95-8bbc-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042695796s May 1 15:01:15.234: INFO: Pod "downwardapi-volume-982f7e95-8bbc-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.046025358s STEP: Saw pod success May 1 15:01:15.234: INFO: Pod "downwardapi-volume-982f7e95-8bbc-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:01:15.237: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-982f7e95-8bbc-11ea-acf7-0242ac110017 container client-container: STEP: delete the pod May 1 15:01:15.564: INFO: Waiting for pod downwardapi-volume-982f7e95-8bbc-11ea-acf7-0242ac110017 to disappear May 1 15:01:15.647: INFO: Pod downwardapi-volume-982f7e95-8bbc-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:01:15.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-668v8" for this suite. May 1 15:01:21.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:01:21.804: INFO: namespace: e2e-tests-downward-api-668v8, resource: bindings, ignored listing per whitelist May 1 15:01:21.876: INFO: namespace e2e-tests-downward-api-668v8 deletion completed in 6.224916262s • [SLOW TEST:10.831 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:01:21.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 1 15:01:22.027: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9ea918b4-8bbc-11ea-acf7-0242ac110017" in namespace "e2e-tests-projected-b7pdf" to be "success or failure" May 1 15:01:22.043: INFO: Pod "downwardapi-volume-9ea918b4-8bbc-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 15.029751ms May 1 15:01:24.189: INFO: Pod "downwardapi-volume-9ea918b4-8bbc-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161369941s May 1 15:01:26.192: INFO: Pod "downwardapi-volume-9ea918b4-8bbc-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.164867177s STEP: Saw pod success May 1 15:01:26.192: INFO: Pod "downwardapi-volume-9ea918b4-8bbc-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:01:26.196: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-9ea918b4-8bbc-11ea-acf7-0242ac110017 container client-container: STEP: delete the pod May 1 15:01:26.282: INFO: Waiting for pod downwardapi-volume-9ea918b4-8bbc-11ea-acf7-0242ac110017 to disappear May 1 15:01:26.356: INFO: Pod downwardapi-volume-9ea918b4-8bbc-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:01:26.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-b7pdf" for this suite. May 1 15:01:32.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:01:32.806: INFO: namespace: e2e-tests-projected-b7pdf, resource: bindings, ignored listing per whitelist May 1 15:01:32.864: INFO: namespace e2e-tests-projected-b7pdf deletion completed in 6.50296211s • [SLOW TEST:10.987 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:01:32.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 15:01:33.028: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"a5322843-8bbc-11ea-99e8-0242ac110002", Controller:(*bool)(0xc001893bd6), BlockOwnerDeletion:(*bool)(0xc001893bd7)}} May 1 15:01:33.060: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"a530ef23-8bbc-11ea-99e8-0242ac110002", Controller:(*bool)(0xc0018d96ba), BlockOwnerDeletion:(*bool)(0xc0018d96bb)}} May 1 15:01:33.103: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"a5317cd1-8bbc-11ea-99e8-0242ac110002", Controller:(*bool)(0xc001893e22), BlockOwnerDeletion:(*bool)(0xc001893e23)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:01:38.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-bhrnj" for this suite. 
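The dependency-circle test above wires three pods into a loop of owner references: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2, exactly as the OwnerReferences dumps show. A rough Go sketch of that setup, with placeholder UIDs (the test fills them in from the pods it has already created):

    // Sketch of the circular ownership the garbage-collector test sets up,
    // mirroring the OwnerReference fields printed in the log above.
    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
    )

    func ownerRef(name string, uid types.UID) metav1.OwnerReference {
        ctrl := true
        block := true
        return metav1.OwnerReference{
            APIVersion:         "v1",
            Kind:               "Pod",
            Name:               name,
            UID:                uid,
            Controller:         &ctrl,
            BlockOwnerDeletion: &block,
        }
    }

    func podWithOwner(name string, owner metav1.OwnerReference) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:            name,
                OwnerReferences: []metav1.OwnerReference{owner},
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{Name: "c", Image: "k8s.gcr.io/pause:3.1"}},
            },
        }
    }

    func main() {
        // pod1 <- pod3 <- pod2 <- pod1 forms the dependency circle; UIDs here
        // are placeholders for the real pod UIDs.
        _ = podWithOwner("pod1", ownerRef("pod3", "uid-of-pod3"))
        _ = podWithOwner("pod2", ownerRef("pod1", "uid-of-pod1"))
        _ = podWithOwner("pod3", ownerRef("pod2", "uid-of-pod2"))
    }

The point of the test is that the garbage collector keeps making progress, and the namespace can be torn down, even though the ownership graph contains a cycle.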
May 1 15:01:44.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:01:44.233: INFO: namespace: e2e-tests-gc-bhrnj, resource: bindings, ignored listing per whitelist May 1 15:01:44.281: INFO: namespace e2e-tests-gc-bhrnj deletion completed in 6.11754852s • [SLOW TEST:11.418 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:01:44.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 1 15:01:44.374: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 1 15:01:44.382: INFO: Waiting for terminating namespaces to be deleted... May 1 15:01:44.384: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 1 15:01:44.389: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 1 15:01:44.389: INFO: Container kube-proxy ready: true, restart count 0 May 1 15:01:44.389: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 1 15:01:44.389: INFO: Container kindnet-cni ready: true, restart count 0 May 1 15:01:44.389: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 1 15:01:44.389: INFO: Container coredns ready: true, restart count 0 May 1 15:01:44.389: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 1 15:01:44.396: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 1 15:01:44.396: INFO: Container kindnet-cni ready: true, restart count 0 May 1 15:01:44.397: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 1 15:01:44.397: INFO: Container coredns ready: true, restart count 0 May 1 15:01:44.397: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 1 15:01:44.397: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160aefa392188b3c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
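The FailedScheduling event above is the expected outcome of this predicate test: the pod asks for a node label that none of the three nodes carries, so it can never schedule. A sketch of that kind of pod spec, with a placeholder selector:

    // Sketch of a pod whose NodeSelector matches no node, producing the
    // "0/3 nodes are available: 3 node(s) didn't match node selector" event
    // seen in the log. The label key/value are placeholders.
    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func restrictedPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
            Spec: corev1.PodSpec{
                // No node in the cluster carries this label, so the pod stays Pending.
                NodeSelector: map[string]string{"some-label": "nonempty-value"},
                Containers: []corev1.Container{{
                    Name:  "restricted",
                    Image: "k8s.gcr.io/pause:3.1",
                }},
            },
        }
    }

    func main() { _ = restrictedPod() }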
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:01:45.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-4gfd2" for this suite. May 1 15:01:51.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:01:51.522: INFO: namespace: e2e-tests-sched-pred-4gfd2, resource: bindings, ignored listing per whitelist May 1 15:01:51.540: INFO: namespace e2e-tests-sched-pred-4gfd2 deletion completed in 6.119372528s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.258 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:01:51.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 1 15:01:51.664: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b0514789-8bbc-11ea-acf7-0242ac110017" in namespace "e2e-tests-projected-b84xx" to be "success or failure" May 1 15:01:51.667: INFO: Pod "downwardapi-volume-b0514789-8bbc-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.599058ms May 1 15:01:53.676: INFO: Pod "downwardapi-volume-b0514789-8bbc-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012280618s May 1 15:01:55.681: INFO: Pod "downwardapi-volume-b0514789-8bbc-11ea-acf7-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.017069486s May 1 15:01:57.686: INFO: Pod "downwardapi-volume-b0514789-8bbc-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.021716566s STEP: Saw pod success May 1 15:01:57.686: INFO: Pod "downwardapi-volume-b0514789-8bbc-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:01:57.689: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-b0514789-8bbc-11ea-acf7-0242ac110017 container client-container: STEP: delete the pod May 1 15:01:57.711: INFO: Waiting for pod downwardapi-volume-b0514789-8bbc-11ea-acf7-0242ac110017 to disappear May 1 15:01:57.715: INFO: Pod downwardapi-volume-b0514789-8bbc-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:01:57.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-b84xx" for this suite. May 1 15:02:03.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:02:03.799: INFO: namespace: e2e-tests-projected-b84xx, resource: bindings, ignored listing per whitelist May 1 15:02:03.849: INFO: namespace e2e-tests-projected-b84xx deletion completed in 6.130301555s • [SLOW TEST:12.309 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:02:03.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 1 15:02:03.992: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b7ac257c-8bbc-11ea-acf7-0242ac110017" in namespace "e2e-tests-projected-mvqfq" to be "success or failure" May 1 15:02:04.032: INFO: Pod "downwardapi-volume-b7ac257c-8bbc-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 39.892481ms May 1 15:02:06.037: INFO: Pod "downwardapi-volume-b7ac257c-8bbc-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044461048s May 1 15:02:08.041: INFO: Pod "downwardapi-volume-b7ac257c-8bbc-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048490951s May 1 15:02:10.045: INFO: Pod "downwardapi-volume-b7ac257c-8bbc-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.052772529s STEP: Saw pod success May 1 15:02:10.045: INFO: Pod "downwardapi-volume-b7ac257c-8bbc-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:02:10.048: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-b7ac257c-8bbc-11ea-acf7-0242ac110017 container client-container: STEP: delete the pod May 1 15:02:10.069: INFO: Waiting for pod downwardapi-volume-b7ac257c-8bbc-11ea-acf7-0242ac110017 to disappear May 1 15:02:10.120: INFO: Pod downwardapi-volume-b7ac257c-8bbc-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:02:10.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mvqfq" for this suite. May 1 15:02:16.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:02:16.189: INFO: namespace: e2e-tests-projected-mvqfq, resource: bindings, ignored listing per whitelist May 1 15:02:16.210: INFO: namespace e2e-tests-projected-mvqfq deletion completed in 6.087028367s • [SLOW TEST:12.361 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:02:16.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-v8bs9 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-v8bs9 STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-v8bs9 STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-v8bs9 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-v8bs9 May 1 15:02:20.457: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-v8bs9, name: ss-0, uid: bf41820e-8bbc-11ea-99e8-0242ac110002, status phase: Pending. Waiting for statefulset controller to delete. May 1 15:02:21.245: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-v8bs9, name: ss-0, uid: bf41820e-8bbc-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. 
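What the StatefulSet test is provoking here is a hostPort collision: a bare pod is pinned to a node with a given hostPort, and the StatefulSet's single replica requests the same hostPort on the same node, so ss-0 keeps failing until the bare pod is removed. A rough Go sketch of the two conflicting specs, with placeholder names, port, and node:

    // Sketch of the conflicting pod and StatefulSet the test creates; the
    // port value, node name, and labels below are placeholders.
    package main

    import (
        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    const conflictPort = 21017

    func conflictingPod(node string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
            Spec: corev1.PodSpec{
                NodeName: node,
                Containers: []corev1.Container{{
                    Name:  "conflict",
                    Image: "k8s.gcr.io/pause:3.1",
                    Ports: []corev1.ContainerPort{{HostPort: conflictPort, ContainerPort: conflictPort}},
                }},
            },
        }
    }

    func statefulSet(node string) *appsv1.StatefulSet {
        labels := map[string]string{"app": "ss"}
        replicas := int32(1)
        return &appsv1.StatefulSet{
            ObjectMeta: metav1.ObjectMeta{Name: "ss"},
            Spec: appsv1.StatefulSetSpec{
                Replicas:    &replicas,
                ServiceName: "test",
                Selector:    &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        NodeName: node,
                        Containers: []corev1.Container{{
                            Name:  "webserver",
                            Image: "k8s.gcr.io/pause:3.1",
                            // Same hostPort on the same node as the bare pod above,
                            // so ss-0 cannot start until that pod is deleted.
                            Ports: []corev1.ContainerPort{{HostPort: conflictPort, ContainerPort: conflictPort}},
                        }},
                    },
                },
            },
        }
    }

    func main() {
        _ = conflictingPod("hunter-worker")
        _ = statefulSet("hunter-worker")
    }

Once the conflicting pod is removed, the controller recreates ss-0 and the log below shows it reaching the running state before the suite scales the set back to zero.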
May 1 15:02:21.295: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-v8bs9, name: ss-0, uid: bf41820e-8bbc-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. May 1 15:02:21.356: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-v8bs9 STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-v8bs9 STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-v8bs9 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 1 15:02:30.187: INFO: Deleting all statefulset in ns e2e-tests-statefulset-v8bs9 May 1 15:02:30.190: INFO: Scaling statefulset ss to 0 May 1 15:02:40.264: INFO: Waiting for statefulset status.replicas updated to 0 May 1 15:02:40.267: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:02:40.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-v8bs9" for this suite. May 1 15:02:46.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:02:46.408: INFO: namespace: e2e-tests-statefulset-v8bs9, resource: bindings, ignored listing per whitelist May 1 15:02:46.434: INFO: namespace e2e-tests-statefulset-v8bs9 deletion completed in 6.151263674s • [SLOW TEST:30.224 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:02:46.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 1 15:02:46.547: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:02:55.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-jvrgm" for this suite. 
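The RestartNever init-container test builds a pod in which both init containers must run to completion, in order, before the app container is started. A sketch of that pod shape, following the busybox/pause pattern that the later pod dump in this log also uses:

    // Sketch of a pod with two init containers that must both succeed before
    // the app container runs; restartPolicy Never matches the test name.
    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func initContainerPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo", Labels: map[string]string{"name": "foo"}},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                InitContainers: []corev1.Container{
                    {Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
                    {Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
                },
                Containers: []corev1.Container{
                    {Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
                },
            },
        }
    }

    func main() { _ = initContainerPod() }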
May 1 15:03:01.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:03:01.673: INFO: namespace: e2e-tests-init-container-jvrgm, resource: bindings, ignored listing per whitelist May 1 15:03:01.711: INFO: namespace e2e-tests-init-container-jvrgm deletion completed in 6.137405914s • [SLOW TEST:15.276 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:03:01.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 1 15:03:01.817: INFO: Waiting up to 5m0s for pod "downward-api-da23cdad-8bbc-11ea-acf7-0242ac110017" in namespace "e2e-tests-downward-api-q7mqf" to be "success or failure" May 1 15:03:01.820: INFO: Pod "downward-api-da23cdad-8bbc-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.585784ms May 1 15:03:04.181: INFO: Pod "downward-api-da23cdad-8bbc-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.363771834s May 1 15:03:06.185: INFO: Pod "downward-api-da23cdad-8bbc-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.368295429s May 1 15:03:08.189: INFO: Pod "downward-api-da23cdad-8bbc-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.37228001s STEP: Saw pod success May 1 15:03:08.189: INFO: Pod "downward-api-da23cdad-8bbc-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:03:08.193: INFO: Trying to get logs from node hunter-worker2 pod downward-api-da23cdad-8bbc-11ea-acf7-0242ac110017 container dapi-container: STEP: delete the pod May 1 15:03:08.269: INFO: Waiting for pod downward-api-da23cdad-8bbc-11ea-acf7-0242ac110017 to disappear May 1 15:03:08.456: INFO: Pod downward-api-da23cdad-8bbc-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:03:08.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-q7mqf" for this suite. 
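The downward-API test above leaves the container's CPU and memory limits unset and exposes limits.cpu and limits.memory as environment variables via resourceFieldRef; with no explicit limits, those variables resolve to the node's allocatable values. A sketch of the relevant wiring, with placeholder pod and variable names:

    // Sketch of downward-API env vars backed by resourceFieldRef. Because the
    // container declares no limits, the reported values default to node
    // allocatable, which is what the test asserts.
    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func downwardAPIEnvPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"sh", "-c", "env"},
                    Env: []corev1.EnvVar{
                        {
                            Name: "CPU_LIMIT",
                            ValueFrom: &corev1.EnvVarSource{
                                ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
                            },
                        },
                        {
                            Name: "MEMORY_LIMIT",
                            ValueFrom: &corev1.EnvVarSource{
                                ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
                            },
                        },
                    },
                }},
            },
        }
    }

    func main() { _ = downwardAPIEnvPod() }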
May 1 15:03:14.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:03:14.590: INFO: namespace: e2e-tests-downward-api-q7mqf, resource: bindings, ignored listing per whitelist May 1 15:03:14.642: INFO: namespace e2e-tests-downward-api-q7mqf deletion completed in 6.182095615s • [SLOW TEST:12.931 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:03:14.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 15:03:14.767: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:03:15.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-m4s46" for this suite. 
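The CRD test simply registers a throwaway custom resource definition and deletes it again. A sketch of such a definition using the apiextensions v1beta1 API that matches this cluster's version; the group, kind, and plural below are placeholders:

    // Sketch of a minimal namespaced CustomResourceDefinition of the kind the
    // test creates and deletes. All names here are illustrative.
    package main

    import (
        apiextensionsv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func testCRD() *apiextensionsv1beta1.CustomResourceDefinition {
        return &apiextensionsv1beta1.CustomResourceDefinition{
            // The object name must be <plural>.<group>.
            ObjectMeta: metav1.ObjectMeta{Name: "testcrds.example.com"},
            Spec: apiextensionsv1beta1.CustomResourceDefinitionSpec{
                Group:   "example.com",
                Version: "v1",
                Scope:   apiextensionsv1beta1.NamespaceScoped,
                Names: apiextensionsv1beta1.CustomResourceDefinitionNames{
                    Plural:   "testcrds",
                    Singular: "testcrd",
                    Kind:     "TestCrd",
                    ListKind: "TestCrdList",
                },
            },
        }
    }

    func main() { _ = testCRD() }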
May 1 15:03:21.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:03:21.939: INFO: namespace: e2e-tests-custom-resource-definition-m4s46, resource: bindings, ignored listing per whitelist May 1 15:03:21.970: INFO: namespace e2e-tests-custom-resource-definition-m4s46 deletion completed in 6.108358278s • [SLOW TEST:7.328 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:03:21.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-e6387c75-8bbc-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume secrets May 1 15:03:22.091: INFO: Waiting up to 5m0s for pod "pod-secrets-e63929c4-8bbc-11ea-acf7-0242ac110017" in namespace "e2e-tests-secrets-72227" to be "success or failure" May 1 15:03:22.145: INFO: Pod "pod-secrets-e63929c4-8bbc-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 53.349737ms May 1 15:03:24.150: INFO: Pod "pod-secrets-e63929c4-8bbc-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058009314s May 1 15:03:26.154: INFO: Pod "pod-secrets-e63929c4-8bbc-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062272642s STEP: Saw pod success May 1 15:03:26.154: INFO: Pod "pod-secrets-e63929c4-8bbc-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:03:26.156: INFO: Trying to get logs from node hunter-worker pod pod-secrets-e63929c4-8bbc-11ea-acf7-0242ac110017 container secret-volume-test: STEP: delete the pod May 1 15:03:26.189: INFO: Waiting for pod pod-secrets-e63929c4-8bbc-11ea-acf7-0242ac110017 to disappear May 1 15:03:26.203: INFO: Pod pod-secrets-e63929c4-8bbc-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:03:26.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-72227" for this suite. 
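The secret test above mounts a secret as a volume while remapping a key to a different file name and giving that item an explicit mode, then reads the file back from the test container. A sketch of the secret and pod involved; the key, path, mode, and mount path here are illustrative:

    // Sketch of a secret volume with an item mapping (Key -> Path) and a
    // per-item file mode, as exercised by the test above.
    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func secretAndPod() (*corev1.Secret, *corev1.Pod) {
        mode := int32(0400)
        secret := &corev1.Secret{
            ObjectMeta: metav1.ObjectMeta{Name: "secret-test-map"},
            Data:       map[string][]byte{"data-1": []byte("value-1")},
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{
                            SecretName: secret.Name,
                            Items: []corev1.KeyToPath{{
                                Key:  "data-1",
                                Path: "new-path-data-1", // remapped file name inside the volume
                                Mode: &mode,
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "secret-volume-test",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "secret-volume",
                        MountPath: "/etc/secret-volume",
                        ReadOnly:  true,
                    }},
                }},
            },
        }
        return secret, pod
    }

    func main() { _, _ = secretAndPod() }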
May 1 15:03:34.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:03:34.317: INFO: namespace: e2e-tests-secrets-72227, resource: bindings, ignored listing per whitelist May 1 15:03:34.342: INFO: namespace e2e-tests-secrets-72227 deletion completed in 8.135069615s • [SLOW TEST:12.372 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:03:34.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-h8sjx STEP: creating a selector STEP: Creating the service pods in kubernetes May 1 15:03:34.621: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 1 15:04:01.177: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.217:8080/dial?request=hostName&protocol=http&host=10.244.1.216&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-h8sjx PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 15:04:01.177: INFO: >>> kubeConfig: /root/.kube/config I0501 15:04:01.205321 6 log.go:172] (0xc0020fc2c0) (0xc002059c20) Create stream I0501 15:04:01.205351 6 log.go:172] (0xc0020fc2c0) (0xc002059c20) Stream added, broadcasting: 1 I0501 15:04:01.207784 6 log.go:172] (0xc0020fc2c0) Reply frame received for 1 I0501 15:04:01.207848 6 log.go:172] (0xc0020fc2c0) (0xc000865c20) Create stream I0501 15:04:01.207865 6 log.go:172] (0xc0020fc2c0) (0xc000865c20) Stream added, broadcasting: 3 I0501 15:04:01.208942 6 log.go:172] (0xc0020fc2c0) Reply frame received for 3 I0501 15:04:01.208970 6 log.go:172] (0xc0020fc2c0) (0xc0021628c0) Create stream I0501 15:04:01.208983 6 log.go:172] (0xc0020fc2c0) (0xc0021628c0) Stream added, broadcasting: 5 I0501 15:04:01.210593 6 log.go:172] (0xc0020fc2c0) Reply frame received for 5 I0501 15:04:01.313957 6 log.go:172] (0xc0020fc2c0) Data frame received for 3 I0501 15:04:01.313983 6 log.go:172] (0xc000865c20) (3) Data frame handling I0501 15:04:01.314008 6 log.go:172] (0xc000865c20) (3) Data frame sent I0501 15:04:01.314019 6 log.go:172] (0xc0020fc2c0) Data frame received for 3 I0501 15:04:01.314038 6 log.go:172] (0xc0020fc2c0) Data frame received for 5 I0501 15:04:01.314058 6 log.go:172] (0xc0021628c0) (5) Data frame handling I0501 15:04:01.314083 6 log.go:172] (0xc000865c20) (3) Data frame handling I0501 15:04:01.315868 6 log.go:172] 
(0xc0020fc2c0) Data frame received for 1 I0501 15:04:01.315886 6 log.go:172] (0xc002059c20) (1) Data frame handling I0501 15:04:01.315897 6 log.go:172] (0xc002059c20) (1) Data frame sent I0501 15:04:01.315909 6 log.go:172] (0xc0020fc2c0) (0xc002059c20) Stream removed, broadcasting: 1 I0501 15:04:01.315919 6 log.go:172] (0xc0020fc2c0) Go away received I0501 15:04:01.316117 6 log.go:172] (0xc0020fc2c0) (0xc002059c20) Stream removed, broadcasting: 1 I0501 15:04:01.316154 6 log.go:172] (0xc0020fc2c0) (0xc000865c20) Stream removed, broadcasting: 3 I0501 15:04:01.316168 6 log.go:172] (0xc0020fc2c0) (0xc0021628c0) Stream removed, broadcasting: 5 May 1 15:04:01.316: INFO: Waiting for endpoints: map[] May 1 15:04:01.319: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.217:8080/dial?request=hostName&protocol=http&host=10.244.2.243&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-h8sjx PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 15:04:01.319: INFO: >>> kubeConfig: /root/.kube/config I0501 15:04:01.351948 6 log.go:172] (0xc0016980b0) (0xc00181a0a0) Create stream I0501 15:04:01.351977 6 log.go:172] (0xc0016980b0) (0xc00181a0a0) Stream added, broadcasting: 1 I0501 15:04:01.354281 6 log.go:172] (0xc0016980b0) Reply frame received for 1 I0501 15:04:01.354322 6 log.go:172] (0xc0016980b0) (0xc000940000) Create stream I0501 15:04:01.354338 6 log.go:172] (0xc0016980b0) (0xc000940000) Stream added, broadcasting: 3 I0501 15:04:01.355630 6 log.go:172] (0xc0016980b0) Reply frame received for 3 I0501 15:04:01.355662 6 log.go:172] (0xc0016980b0) (0xc0009400a0) Create stream I0501 15:04:01.355674 6 log.go:172] (0xc0016980b0) (0xc0009400a0) Stream added, broadcasting: 5 I0501 15:04:01.356513 6 log.go:172] (0xc0016980b0) Reply frame received for 5 I0501 15:04:01.432455 6 log.go:172] (0xc0016980b0) Data frame received for 3 I0501 15:04:01.432497 6 log.go:172] (0xc000940000) (3) Data frame handling I0501 15:04:01.432520 6 log.go:172] (0xc000940000) (3) Data frame sent I0501 15:04:01.432808 6 log.go:172] (0xc0016980b0) Data frame received for 5 I0501 15:04:01.432832 6 log.go:172] (0xc0016980b0) Data frame received for 3 I0501 15:04:01.432876 6 log.go:172] (0xc000940000) (3) Data frame handling I0501 15:04:01.432910 6 log.go:172] (0xc0009400a0) (5) Data frame handling I0501 15:04:01.434560 6 log.go:172] (0xc0016980b0) Data frame received for 1 I0501 15:04:01.434576 6 log.go:172] (0xc00181a0a0) (1) Data frame handling I0501 15:04:01.434584 6 log.go:172] (0xc00181a0a0) (1) Data frame sent I0501 15:04:01.434601 6 log.go:172] (0xc0016980b0) (0xc00181a0a0) Stream removed, broadcasting: 1 I0501 15:04:01.434629 6 log.go:172] (0xc0016980b0) Go away received I0501 15:04:01.434677 6 log.go:172] (0xc0016980b0) (0xc00181a0a0) Stream removed, broadcasting: 1 I0501 15:04:01.434691 6 log.go:172] (0xc0016980b0) (0xc000940000) Stream removed, broadcasting: 3 I0501 15:04:01.434697 6 log.go:172] (0xc0016980b0) (0xc0009400a0) Stream removed, broadcasting: 5 May 1 15:04:01.434: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:04:01.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-h8sjx" for this suite. 
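Unlike the node-pod variant earlier, the intra-pod check goes through the netserver's /dial endpoint: the probing pod is asked, over HTTP, to contact the target pod from inside the pod network and report which host names answered. A Go sketch of the same request, with the IPs and port taken from the curl commands above:

    // Sketch of the /dial probe used for intra-pod communication checks: ask
    // the prober pod to reach the target pod over HTTP and return the result.
    // IP addresses and port mirror the log; everything else is illustrative.
    package main

    import (
        "fmt"
        "io/ioutil"
        "net/http"
        "net/url"
        "time"
    )

    func dial(proberIP, targetIP string, port int) (string, error) {
        q := url.Values{}
        q.Set("request", "hostName")
        q.Set("protocol", "http")
        q.Set("host", targetIP)
        q.Set("port", fmt.Sprint(port))
        q.Set("tries", "1")
        u := fmt.Sprintf("http://%s:%d/dial?%s", proberIP, port, q.Encode())

        client := &http.Client{Timeout: 15 * time.Second}
        resp, err := client.Get(u)
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        body, err := ioutil.ReadAll(resp.Body)
        return string(body), err
    }

    func main() {
        // test-container-pod (prober) and netserver (target) IPs from the log.
        out, err := dial("10.244.1.217", "10.244.1.216", 8080)
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        fmt.Println("dial response:", out)
    }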
May 1 15:04:25.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:04:25.471: INFO: namespace: e2e-tests-pod-network-test-h8sjx, resource: bindings, ignored listing per whitelist May 1 15:04:25.524: INFO: namespace e2e-tests-pod-network-test-h8sjx deletion completed in 24.086064819s • [SLOW TEST:51.182 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:04:25.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0501 15:05:06.085424 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 1 15:05:06.085: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:05:06.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-zdhw5" for this suite. 
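The orphaning test deletes the ReplicationController with the Orphan propagation policy and then waits 30 seconds to confirm the garbage collector leaves the RC's pods alone. A sketch of the delete options that drive that behaviour (only the options are shown, since the exact client-go delete signature varies by release):

    // Sketch of the delete options behind "orphan pods created by rc if delete
    // options say so": with Orphan propagation, dependents keep running and
    // only lose their ownerReferences instead of being cascaded away.
    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func orphanDeleteOptions() *metav1.DeleteOptions {
        policy := metav1.DeletePropagationOrphan
        return &metav1.DeleteOptions{
            PropagationPolicy: &policy,
        }
    }

    func main() {
        opts := orphanDeleteOptions()
        // These options would be passed to the ReplicationController delete call.
        fmt.Println("propagationPolicy:", *opts.PropagationPolicy)
    }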
May 1 15:05:14.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:05:14.144: INFO: namespace: e2e-tests-gc-zdhw5, resource: bindings, ignored listing per whitelist May 1 15:05:14.195: INFO: namespace e2e-tests-gc-zdhw5 deletion completed in 8.106803534s • [SLOW TEST:48.670 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:05:14.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 1 15:05:14.544: INFO: PodSpec: initContainers in spec.initContainers May 1 15:06:09.986: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-29414276-8bbd-11ea-acf7-0242ac110017", GenerateName:"", Namespace:"e2e-tests-init-container-jq5qm", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-jq5qm/pods/pod-init-29414276-8bbd-11ea-acf7-0242ac110017", UID:"2941bebe-8bbd-11ea-99e8-0242ac110002", ResourceVersion:"8188562", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63723942314, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"time":"544402943", "name":"foo"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-79kns", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001b32000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), 
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-79kns", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-79kns", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-79kns", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001e212b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001fe78c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001e21440)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001e21460)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001e21468), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001e2146c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723942315, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723942315, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723942315, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723942314, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"10.244.2.249", StartTime:(*v1.Time)(0xc0017c80a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0017c8120), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00054f5e0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://fa57ff4238620e848a6b5dbc239bc1374ba825b17764df6d587a30563ceb601a"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0017c8140), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0017c80e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:06:09.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-jq5qm" for this suite. May 1 15:06:32.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:06:32.165: INFO: namespace: e2e-tests-init-container-jq5qm, resource: bindings, ignored listing per whitelist May 1 15:06:32.169: INFO: namespace e2e-tests-init-container-jq5qm deletion completed in 22.151073066s • [SLOW TEST:77.973 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:06:32.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info May 1 15:06:32.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 1 15:06:32.667: INFO: stderr: "" May 1 15:06:32.667: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:06:32.667: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-mbscw" for this suite. May 1 15:06:38.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:06:38.755: INFO: namespace: e2e-tests-kubectl-mbscw, resource: bindings, ignored listing per whitelist May 1 15:06:38.762: INFO: namespace e2e-tests-kubectl-mbscw deletion completed in 6.091020972s • [SLOW TEST:6.593 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:06:38.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-65vxr May 1 15:06:42.866: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-65vxr STEP: checking the pod's current state and verifying that restartCount is present May 1 15:06:42.869: INFO: Initial restart count of pod liveness-http is 0 May 1 15:07:05.186: INFO: Restart count of pod e2e-tests-container-probe-65vxr/liveness-http is now 1 (22.316961583s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:07:05.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-65vxr" for this suite. 
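
The container-probe test just above starts a pod named liveness-http whose /healthz HTTP liveness probe eventually fails, and then waits for restartCount to move from 0 to 1, which it does after about 22 seconds. As a rough sketch (not the suite's actual code), a pod of that shape can be written with the same k8s.io/api types that are dumped elsewhere in this log; the image, port, and probe timings below are illustrative assumptions, only the pod name comes from the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Liveness probe against /healthz; once it fails FailureThreshold times the
	// kubelet kills and restarts the container, incrementing restartCount.
	probe := corev1.Probe{InitialDelaySeconds: 5, PeriodSeconds: 3, FailureThreshold: 1}
	// HTTPGet is promoted from the embedded handler struct, so this assignment
	// compiles regardless of the Handler/ProbeHandler naming in the k8s.io/api version.
	probe.HTTPGet = &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:          "liveness",
				Image:         "example.invalid/healthz-demo:latest", // hypothetical image
				LivenessProbe: &probe,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
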
May 1 15:07:11.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:07:11.301: INFO: namespace: e2e-tests-container-probe-65vxr, resource: bindings, ignored listing per whitelist May 1 15:07:11.315: INFO: namespace e2e-tests-container-probe-65vxr deletion completed in 6.086648148s • [SLOW TEST:32.553 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:07:11.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-6ef3b409-8bbd-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume secrets May 1 15:07:11.555: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6ef709a5-8bbd-11ea-acf7-0242ac110017" in namespace "e2e-tests-projected-jjxq6" to be "success or failure" May 1 15:07:11.559: INFO: Pod "pod-projected-secrets-6ef709a5-8bbd-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.286001ms May 1 15:07:13.562: INFO: Pod "pod-projected-secrets-6ef709a5-8bbd-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007233008s May 1 15:07:15.565: INFO: Pod "pod-projected-secrets-6ef709a5-8bbd-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009977263s STEP: Saw pod success May 1 15:07:15.565: INFO: Pod "pod-projected-secrets-6ef709a5-8bbd-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:07:15.567: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-6ef709a5-8bbd-11ea-acf7-0242ac110017 container projected-secret-volume-test: STEP: delete the pod May 1 15:07:15.731: INFO: Waiting for pod pod-projected-secrets-6ef709a5-8bbd-11ea-acf7-0242ac110017 to disappear May 1 15:07:15.981: INFO: Pod pod-projected-secrets-6ef709a5-8bbd-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:07:15.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-jjxq6" for this suite. 
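
The projected-secret test above mounts a secret through a projected volume, remapping a key to a new path and giving the item an explicit file mode, then reads the file back from the pod. A minimal sketch of that volume wiring follows; the secret name prefix comes from the log, but the key, target path, mode, mount path, and image are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // per-item file mode (assumed value)

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
								// Key and path are illustrative; the mapping is what the test exercises.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-secret-volume-test",
				Image:        "example.invalid/mounttest:latest", // hypothetical image
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-secret-volume", MountPath: "/etc/projected-secret-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
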
May 1 15:07:22.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:07:22.075: INFO: namespace: e2e-tests-projected-jjxq6, resource: bindings, ignored listing per whitelist May 1 15:07:22.091: INFO: namespace e2e-tests-projected-jjxq6 deletion completed in 6.105284044s • [SLOW TEST:10.776 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:07:22.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 1 15:07:22.206: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7556e52d-8bbd-11ea-acf7-0242ac110017" in namespace "e2e-tests-projected-c4m55" to be "success or failure" May 1 15:07:22.216: INFO: Pod "downwardapi-volume-7556e52d-8bbd-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 9.839918ms May 1 15:07:24.220: INFO: Pod "downwardapi-volume-7556e52d-8bbd-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014009284s May 1 15:07:26.223: INFO: Pod "downwardapi-volume-7556e52d-8bbd-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017069809s May 1 15:07:28.227: INFO: Pod "downwardapi-volume-7556e52d-8bbd-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021173288s STEP: Saw pod success May 1 15:07:28.227: INFO: Pod "downwardapi-volume-7556e52d-8bbd-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:07:28.230: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-7556e52d-8bbd-11ea-acf7-0242ac110017 container client-container: STEP: delete the pod May 1 15:07:28.297: INFO: Waiting for pod downwardapi-volume-7556e52d-8bbd-11ea-acf7-0242ac110017 to disappear May 1 15:07:28.499: INFO: Pod downwardapi-volume-7556e52d-8bbd-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:07:28.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-c4m55" for this suite. 
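
The projected downwardAPI test above exposes the container's own memory limit as a file inside a projected volume and then verifies the file contents from the pod's logs. The wiring looks roughly like the sketch below; the container name matches the one the log tries to fetch logs from, while the file path, mount path, image, and the 64Mi limit are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									// The named container's limits.memory is written into the file.
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "example.invalid/mounttest:latest", // hypothetical image
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
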
May 1 15:07:34.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:07:34.663: INFO: namespace: e2e-tests-projected-c4m55, resource: bindings, ignored listing per whitelist May 1 15:07:34.728: INFO: namespace e2e-tests-projected-c4m55 deletion completed in 6.225289452s • [SLOW TEST:12.637 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:07:34.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-7ce338c5-8bbd-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume configMaps May 1 15:07:34.876: INFO: Waiting up to 5m0s for pod "pod-configmaps-7ce529a1-8bbd-11ea-acf7-0242ac110017" in namespace "e2e-tests-configmap-pz4gm" to be "success or failure" May 1 15:07:34.881: INFO: Pod "pod-configmaps-7ce529a1-8bbd-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.793633ms May 1 15:07:37.071: INFO: Pod "pod-configmaps-7ce529a1-8bbd-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194941256s May 1 15:07:39.075: INFO: Pod "pod-configmaps-7ce529a1-8bbd-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.198790036s May 1 15:07:41.260: INFO: Pod "pod-configmaps-7ce529a1-8bbd-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.384348073s STEP: Saw pod success May 1 15:07:41.261: INFO: Pod "pod-configmaps-7ce529a1-8bbd-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:07:41.265: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-7ce529a1-8bbd-11ea-acf7-0242ac110017 container configmap-volume-test: STEP: delete the pod May 1 15:07:41.309: INFO: Waiting for pod pod-configmaps-7ce529a1-8bbd-11ea-acf7-0242ac110017 to disappear May 1 15:07:41.319: INFO: Pod pod-configmaps-7ce529a1-8bbd-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:07:41.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-pz4gm" for this suite. 
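
The ConfigMap test above mounts a ConfigMap-backed volume and runs the consuming container as a non-root user, checking that the projected data is still readable. A sketch of that pod shape; the ConfigMap name prefix comes from the log, while the UID, mount path, and image are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000) // assumed non-root UID

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-nonroot"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:            "configmap-volume-test",
				Image:           "example.invalid/mounttest:latest", // hypothetical image
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
				VolumeMounts:    []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
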
May 1 15:07:47.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:07:47.363: INFO: namespace: e2e-tests-configmap-pz4gm, resource: bindings, ignored listing per whitelist May 1 15:07:47.414: INFO: namespace e2e-tests-configmap-pz4gm deletion completed in 6.091663376s • [SLOW TEST:12.685 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:07:47.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 1 15:07:47.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-9hn8m' May 1 15:07:47.602: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 1 15:07:47.602: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created May 1 15:07:47.644: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 May 1 15:07:47.771: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 1 15:07:47.812: INFO: scanned /root for discovery docs: May 1 15:07:47.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-9hn8m' May 1 15:08:03.893: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 1 15:08:03.893: INFO: stdout: "Created e2e-test-nginx-rc-2ce3b5c614c1f9107d0f575adb5f55fd\nScaling up e2e-test-nginx-rc-2ce3b5c614c1f9107d0f575adb5f55fd from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-2ce3b5c614c1f9107d0f575adb5f55fd up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-2ce3b5c614c1f9107d0f575adb5f55fd to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" May 1 15:08:03.893: INFO: stdout: "Created e2e-test-nginx-rc-2ce3b5c614c1f9107d0f575adb5f55fd\nScaling up e2e-test-nginx-rc-2ce3b5c614c1f9107d0f575adb5f55fd from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-2ce3b5c614c1f9107d0f575adb5f55fd up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-2ce3b5c614c1f9107d0f575adb5f55fd to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. May 1 15:08:03.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-9hn8m' May 1 15:08:03.988: INFO: stderr: "" May 1 15:08:03.988: INFO: stdout: "e2e-test-nginx-rc-2ce3b5c614c1f9107d0f575adb5f55fd-pkr7x " May 1 15:08:03.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-2ce3b5c614c1f9107d0f575adb5f55fd-pkr7x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9hn8m' May 1 15:08:04.097: INFO: stderr: "" May 1 15:08:04.097: INFO: stdout: "true" May 1 15:08:04.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-2ce3b5c614c1f9107d0f575adb5f55fd-pkr7x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9hn8m' May 1 15:08:04.203: INFO: stderr: "" May 1 15:08:04.203: INFO: stdout: "docker.io/library/nginx:1.14-alpine" May 1 15:08:04.203: INFO: e2e-test-nginx-rc-2ce3b5c614c1f9107d0f575adb5f55fd-pkr7x is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 May 1 15:08:04.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-9hn8m' May 1 15:08:04.324: INFO: stderr: "" May 1 15:08:04.324: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:08:04.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-9hn8m" for this suite. 
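
The rolling-update test above first creates a bare ReplicationController with the deprecated `kubectl run --generator=run/v1`, then runs `kubectl rolling-update` against it with the same image; as the captured stdout shows, a temporary controller is scaled up while the original scales down, after which it is renamed back. The object being manipulated is roughly the one sketched below; the name and image come from the log, the label key and replica count are assumptions. Both `--generator=run/v1` and `rolling-update` are flagged as deprecated in the stderr above and were removed from kubectl in later releases, with `kubectl rollout` against a Deployment as the replacement.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"run": "e2e-test-nginx-rc"} // label key assumed

	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-rc"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "e2e-test-nginx-rc",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(rc, "", "  ")
	fmt.Println(string(out))
}
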
May 1 15:08:16.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:08:16.465: INFO: namespace: e2e-tests-kubectl-9hn8m, resource: bindings, ignored listing per whitelist May 1 15:08:16.500: INFO: namespace e2e-tests-kubectl-9hn8m deletion completed in 12.171061361s • [SLOW TEST:29.086 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:08:16.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:09:16.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-n87px" for this suite. 
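
The probe test above runs a container whose readiness probe always fails and then simply observes it for about a minute: the pod stays Running but never reports Ready, and, unlike the failing liveness probe earlier in this log, the kubelet never restarts it. A sketch of such a pod, using an assumed busybox image and an exec probe rather than whatever the suite actually uses:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A readiness probe that always fails only keeps the pod out of the Ready
	// condition; it never triggers a container restart, so restartCount stays 0.
	readiness := corev1.Probe{PeriodSeconds: 5}
	readiness.Exec = &corev1.ExecAction{Command: []string{"/bin/false"}} // promoted field, version-tolerant

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "never-ready"}, // name assumed
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:           "busybox",
				Image:          "docker.io/library/busybox:1.29",
				Command:        []string{"sleep", "3600"},
				ReadinessProbe: &readiness,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
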
May 1 15:09:38.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:09:38.954: INFO: namespace: e2e-tests-container-probe-n87px, resource: bindings, ignored listing per whitelist May 1 15:09:38.963: INFO: namespace e2e-tests-container-probe-n87px deletion completed in 22.130579446s • [SLOW TEST:82.463 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:09:38.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 1 15:09:39.060: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 1 15:09:39.090: INFO: Waiting for terminating namespaces to be deleted... May 1 15:09:39.092: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 1 15:09:39.096: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 1 15:09:39.096: INFO: Container kindnet-cni ready: true, restart count 0 May 1 15:09:39.096: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 1 15:09:39.096: INFO: Container coredns ready: true, restart count 0 May 1 15:09:39.096: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 1 15:09:39.096: INFO: Container kube-proxy ready: true, restart count 0 May 1 15:09:39.096: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 1 15:09:39.102: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 1 15:09:39.102: INFO: Container kube-proxy ready: true, restart count 0 May 1 15:09:39.102: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 1 15:09:39.102: INFO: Container kindnet-cni ready: true, restart count 0 May 1 15:09:39.102: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 1 15:09:39.102: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-worker STEP: verifying the node has the label node hunter-worker2 May 1 15:09:39.440: INFO: Pod coredns-54ff9cd656-4h7lb requesting resource cpu=100m on Node 
hunter-worker May 1 15:09:39.440: INFO: Pod coredns-54ff9cd656-8vrkk requesting resource cpu=100m on Node hunter-worker2 May 1 15:09:39.440: INFO: Pod kindnet-54h7m requesting resource cpu=100m on Node hunter-worker May 1 15:09:39.440: INFO: Pod kindnet-mtqrs requesting resource cpu=100m on Node hunter-worker2 May 1 15:09:39.440: INFO: Pod kube-proxy-s52ll requesting resource cpu=0m on Node hunter-worker2 May 1 15:09:39.440: INFO: Pod kube-proxy-szbng requesting resource cpu=0m on Node hunter-worker STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-c7254306-8bbd-11ea-acf7-0242ac110017.160af012301110c5], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-dlq5f/filler-pod-c7254306-8bbd-11ea-acf7-0242ac110017 to hunter-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-c7254306-8bbd-11ea-acf7-0242ac110017.160af01291c94c1e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-c7254306-8bbd-11ea-acf7-0242ac110017.160af01335cd521d], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-c7254306-8bbd-11ea-acf7-0242ac110017.160af013500320c0], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Normal], Name = [filler-pod-c72634a0-8bbd-11ea-acf7-0242ac110017.160af012380fbd9c], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-dlq5f/filler-pod-c72634a0-8bbd-11ea-acf7-0242ac110017 to hunter-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-c72634a0-8bbd-11ea-acf7-0242ac110017.160af012b85675ea], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-c72634a0-8bbd-11ea-acf7-0242ac110017.160af0134c0c0f44], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-c72634a0-8bbd-11ea-acf7-0242ac110017.160af0135d0ccf87], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.160af0139ea296a7], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node hunter-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node hunter-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:09:46.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-dlq5f" for this suite. 
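
The scheduler-predicates test above adds up the CPU already requested by kube-system pods on each worker, fills the remaining allocatable CPU with "filler" pause pods, and then submits one more pod that cannot fit; the expected result is exactly the FailedScheduling event captured above (two nodes with Insufficient cpu, plus the tainted control-plane node). The same event can be provoked directly with a single pod whose CPU request exceeds any node's allocatable capacity; the pod name and image below come from the events in the log, while the 1000-core request is just an arbitrarily large assumption.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					// A request no node can satisfy leaves the pod Pending with a
					// FailedScheduling / Insufficient cpu event.
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("1000")},
					Limits:   corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("1000")},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
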
May 1 15:09:54.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:09:54.970: INFO: namespace: e2e-tests-sched-pred-dlq5f, resource: bindings, ignored listing per whitelist May 1 15:09:54.995: INFO: namespace e2e-tests-sched-pred-dlq5f deletion completed in 8.130041467s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:16.031 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:09:54.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 1 15:09:59.663: INFO: Successfully updated pod "pod-update-activedeadlineseconds-d0774c2a-8bbd-11ea-acf7-0242ac110017" May 1 15:09:59.663: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-d0774c2a-8bbd-11ea-acf7-0242ac110017" in namespace "e2e-tests-pods-pw2pn" to be "terminated due to deadline exceeded" May 1 15:09:59.674: INFO: Pod "pod-update-activedeadlineseconds-d0774c2a-8bbd-11ea-acf7-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 11.135107ms May 1 15:10:01.678: INFO: Pod "pod-update-activedeadlineseconds-d0774c2a-8bbd-11ea-acf7-0242ac110017": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.015122759s May 1 15:10:01.678: INFO: Pod "pod-update-activedeadlineseconds-d0774c2a-8bbd-11ea-acf7-0242ac110017" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:10:01.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-pw2pn" for this suite. 
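
The pods test above creates a long-running pod and then updates spec.activeDeadlineSeconds to a small value; within a couple of seconds the kubelet marks it Failed with reason DeadlineExceeded, which is the "terminated due to deadline exceeded" condition the log waits for. activeDeadlineSeconds is one of the few pod-spec fields that may be changed on a running pod, and only downward. A sketch with an assumed 5-second deadline and busybox image:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	deadline := int64(5) // seconds; assumed value

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-update-activedeadlineseconds"},
		Spec: corev1.PodSpec{
			// Once the pod has been active this long, the kubelet fails it:
			// Phase=Failed, Reason=DeadlineExceeded.
			ActiveDeadlineSeconds: &deadline,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
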
May 1 15:10:07.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:10:07.741: INFO: namespace: e2e-tests-pods-pw2pn, resource: bindings, ignored listing per whitelist May 1 15:10:07.766: INFO: namespace e2e-tests-pods-pw2pn deletion completed in 6.08409573s • [SLOW TEST:12.771 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:10:07.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 1 15:10:12.962: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:10:14.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-889qh" for this suite. 
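
The ReplicaSet test above first creates a bare pod labelled name=pod-adoption-release and then a ReplicaSet whose selector matches it; the controller adopts the orphan (adds an ownerReference) instead of creating a new pod, and when the pod's label is later changed the ReplicaSet releases it and spins up a replacement. A sketch of such a ReplicaSet; the label and name come from the log, the replica count and image are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "pod-adoption-release"}

	rs := appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-release"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			// Any existing pod matching this selector and not owned by another
			// controller is adopted; relabelling the pod releases it again.
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "pod-adoption-release",
						Image: "docker.io/library/nginx:1.14-alpine", // assumed image
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(rs, "", "  ")
	fmt.Println(string(out))
}
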
May 1 15:10:38.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:10:38.149: INFO: namespace: e2e-tests-replicaset-889qh, resource: bindings, ignored listing per whitelist May 1 15:10:38.201: INFO: namespace e2e-tests-replicaset-889qh deletion completed in 24.14396982s • [SLOW TEST:30.435 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:10:38.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 15:10:38.328: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 1 15:10:43.343: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 1 15:10:43.343: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 1 15:10:43.364: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-p55j8,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-p55j8/deployments/test-cleanup-deployment,UID:ed3ce086-8bbd-11ea-99e8-0242ac110002,ResourceVersion:8189472,Generation:1,CreationTimestamp:2020-05-01 15:10:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} May 1 15:10:43.370: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. May 1 15:10:43.370: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 1 15:10:43.371: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-p55j8,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-p55j8/replicasets/test-cleanup-controller,UID:ea3bee8a-8bbd-11ea-99e8-0242ac110002,ResourceVersion:8189473,Generation:1,CreationTimestamp:2020-05-01 15:10:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment ed3ce086-8bbd-11ea-99e8-0242ac110002 0xc0018920a7 0xc0018920a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 1 15:10:43.377: INFO: Pod "test-cleanup-controller-sp99l" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-sp99l,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-p55j8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p55j8/pods/test-cleanup-controller-sp99l,UID:ea3f7e0d-8bbd-11ea-99e8-0242ac110002,ResourceVersion:8189467,Generation:0,CreationTimestamp:2020-05-01 15:10:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller ea3bee8a-8bbd-11ea-99e8-0242ac110002 0xc001b20917 0xc001b20918}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-d9wgz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-d9wgz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-d9wgz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b209a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b209c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:10:38 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:10:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:10:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:10:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.230,StartTime:2020-05-01 15:10:38 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-01 15:10:40 +0000 UTC,} nil} {nil nil nil} true 0 
docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://2dfac71d9c29a18f27917335a1d649e852ddd1f671b36d401f05c4e04b728948}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:10:43.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-p55j8" for this suite. May 1 15:10:51.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:10:51.566: INFO: namespace: e2e-tests-deployment-p55j8, resource: bindings, ignored listing per whitelist May 1 15:10:51.593: INFO: namespace e2e-tests-deployment-p55j8 deletion completed in 8.166192583s • [SLOW TEST:13.392 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:10:51.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs May 1 15:10:51.685: INFO: Waiting up to 5m0s for pod "pod-f23283c4-8bbd-11ea-acf7-0242ac110017" in namespace "e2e-tests-emptydir-rh8n4" to be "success or failure" May 1 15:10:51.688: INFO: Pod "pod-f23283c4-8bbd-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.016892ms May 1 15:10:53.692: INFO: Pod "pod-f23283c4-8bbd-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006839839s May 1 15:10:55.696: INFO: Pod "pod-f23283c4-8bbd-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010834789s STEP: Saw pod success May 1 15:10:55.696: INFO: Pod "pod-f23283c4-8bbd-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:10:55.699: INFO: Trying to get logs from node hunter-worker pod pod-f23283c4-8bbd-11ea-acf7-0242ac110017 container test-container: STEP: delete the pod May 1 15:10:55.750: INFO: Waiting for pod pod-f23283c4-8bbd-11ea-acf7-0242ac110017 to disappear May 1 15:10:55.766: INFO: Pod pod-f23283c4-8bbd-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:10:55.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-rh8n4" for this suite. 
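
The emptyDir test above mounts a memory-backed (tmpfs) emptyDir volume and has a non-root container create a 0777-mode file in it, then checks the mode and contents. The volume and security-context wiring look roughly like this; the UID, mount path, image, and command are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000) // assumed non-root UID

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs instead of node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:            "test-container",
				Image:           "docker.io/library/busybox:1.29",
				Command:         []string{"sh", "-c", "touch /test-volume/file && chmod 0777 /test-volume/file && ls -l /test-volume"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
				VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
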
May 1 15:11:01.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:11:01.854: INFO: namespace: e2e-tests-emptydir-rh8n4, resource: bindings, ignored listing per whitelist May 1 15:11:01.894: INFO: namespace e2e-tests-emptydir-rh8n4 deletion completed in 6.09504318s • [SLOW TEST:10.301 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:11:01.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 15:11:02.027: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 1 15:11:02.036: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:02.038: INFO: Number of nodes with available pods: 0 May 1 15:11:02.038: INFO: Node hunter-worker is running more than one daemon pod May 1 15:11:03.043: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:03.047: INFO: Number of nodes with available pods: 0 May 1 15:11:03.047: INFO: Node hunter-worker is running more than one daemon pod May 1 15:11:04.218: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:04.221: INFO: Number of nodes with available pods: 0 May 1 15:11:04.221: INFO: Node hunter-worker is running more than one daemon pod May 1 15:11:05.128: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:05.131: INFO: Number of nodes with available pods: 0 May 1 15:11:05.131: INFO: Node hunter-worker is running more than one daemon pod May 1 15:11:06.050: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:06.054: INFO: Number of nodes with available pods: 1 May 1 15:11:06.054: INFO: Node hunter-worker2 is running more than one daemon pod May 1 15:11:07.042: INFO: DaemonSet pods can't tolerate node hunter-control-plane with 
taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:07.045: INFO: Number of nodes with available pods: 2 May 1 15:11:07.045: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 1 15:11:07.091: INFO: Wrong image for pod: daemon-set-5l8hz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:07.091: INFO: Wrong image for pod: daemon-set-rxhz7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:07.109: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:08.113: INFO: Wrong image for pod: daemon-set-5l8hz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:08.113: INFO: Wrong image for pod: daemon-set-rxhz7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:08.116: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:09.116: INFO: Wrong image for pod: daemon-set-5l8hz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:09.116: INFO: Wrong image for pod: daemon-set-rxhz7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:09.119: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:10.114: INFO: Wrong image for pod: daemon-set-5l8hz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:10.114: INFO: Wrong image for pod: daemon-set-rxhz7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:10.119: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:11.116: INFO: Wrong image for pod: daemon-set-5l8hz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:11.116: INFO: Pod daemon-set-5l8hz is not available May 1 15:11:11.116: INFO: Wrong image for pod: daemon-set-rxhz7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:11.120: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:12.113: INFO: Wrong image for pod: daemon-set-5l8hz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:12.113: INFO: Pod daemon-set-5l8hz is not available May 1 15:11:12.113: INFO: Wrong image for pod: daemon-set-rxhz7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 1 15:11:12.119: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:13.113: INFO: Wrong image for pod: daemon-set-5l8hz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:13.113: INFO: Pod daemon-set-5l8hz is not available May 1 15:11:13.113: INFO: Wrong image for pod: daemon-set-rxhz7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:13.117: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:14.113: INFO: Wrong image for pod: daemon-set-5l8hz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:14.113: INFO: Pod daemon-set-5l8hz is not available May 1 15:11:14.113: INFO: Wrong image for pod: daemon-set-rxhz7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:14.118: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:15.114: INFO: Wrong image for pod: daemon-set-5l8hz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:15.114: INFO: Pod daemon-set-5l8hz is not available May 1 15:11:15.114: INFO: Wrong image for pod: daemon-set-rxhz7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:15.117: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:16.112: INFO: Wrong image for pod: daemon-set-5l8hz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:16.112: INFO: Pod daemon-set-5l8hz is not available May 1 15:11:16.112: INFO: Wrong image for pod: daemon-set-rxhz7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:16.115: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:17.114: INFO: Wrong image for pod: daemon-set-5l8hz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:17.114: INFO: Pod daemon-set-5l8hz is not available May 1 15:11:17.114: INFO: Wrong image for pod: daemon-set-rxhz7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:17.118: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:18.112: INFO: Wrong image for pod: daemon-set-5l8hz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:18.112: INFO: Pod daemon-set-5l8hz is not available May 1 15:11:18.112: INFO: Wrong image for pod: daemon-set-rxhz7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
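The repeated "Wrong image for pod" entries above are the rolling update in progress: the DaemonSet's pod template image was changed and the controller replaces pods node by node until every pod reports the new image. As a rough sketch (not the e2e framework's own helper), a DaemonSet with the RollingUpdate strategy that this test exercises could be built with the apps/v1 Go types like this; the "daemon-set" name and label mirror the log, while the container name and everything else are illustrative placeholders.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			// RollingUpdate is the strategy under test: changing the pod
			// template image triggers a node-by-node replacement of pods.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine", // later bumped to the redis test image
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}

Bumping spec.template.spec.containers[0].image (for example with something like kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0) produces exactly the pod-by-pod transition the polling output records.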
May 1 15:11:18.115: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:19.114: INFO: Wrong image for pod: daemon-set-5l8hz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:19.114: INFO: Pod daemon-set-5l8hz is not available May 1 15:11:19.114: INFO: Wrong image for pod: daemon-set-rxhz7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:19.118: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:20.114: INFO: Wrong image for pod: daemon-set-5l8hz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:20.114: INFO: Pod daemon-set-5l8hz is not available May 1 15:11:20.114: INFO: Wrong image for pod: daemon-set-rxhz7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:20.118: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:21.113: INFO: Wrong image for pod: daemon-set-5l8hz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:21.113: INFO: Pod daemon-set-5l8hz is not available May 1 15:11:21.113: INFO: Wrong image for pod: daemon-set-rxhz7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:21.117: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:22.266: INFO: Wrong image for pod: daemon-set-5l8hz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:22.266: INFO: Pod daemon-set-5l8hz is not available May 1 15:11:22.266: INFO: Wrong image for pod: daemon-set-rxhz7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:22.277: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:23.113: INFO: Pod daemon-set-hjx5r is not available May 1 15:11:23.113: INFO: Wrong image for pod: daemon-set-rxhz7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:23.117: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:24.114: INFO: Pod daemon-set-hjx5r is not available May 1 15:11:24.114: INFO: Wrong image for pod: daemon-set-rxhz7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:24.117: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:25.113: INFO: Pod daemon-set-hjx5r is not available May 1 15:11:25.113: INFO: Wrong image for pod: daemon-set-rxhz7. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:25.116: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:26.114: INFO: Wrong image for pod: daemon-set-rxhz7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:26.117: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:27.112: INFO: Wrong image for pod: daemon-set-rxhz7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 1 15:11:27.112: INFO: Pod daemon-set-rxhz7 is not available May 1 15:11:27.114: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:28.188: INFO: Pod daemon-set-hwh4v is not available May 1 15:11:28.192: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. May 1 15:11:28.236: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:28.239: INFO: Number of nodes with available pods: 1 May 1 15:11:28.239: INFO: Node hunter-worker is running more than one daemon pod May 1 15:11:29.244: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:29.256: INFO: Number of nodes with available pods: 1 May 1 15:11:29.256: INFO: Node hunter-worker is running more than one daemon pod May 1 15:11:30.244: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:30.248: INFO: Number of nodes with available pods: 1 May 1 15:11:30.248: INFO: Node hunter-worker is running more than one daemon pod May 1 15:11:31.244: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:31.246: INFO: Number of nodes with available pods: 1 May 1 15:11:31.246: INFO: Node hunter-worker is running more than one daemon pod May 1 15:11:32.244: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 15:11:32.247: INFO: Number of nodes with available pods: 2 May 1 15:11:32.247: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-4k2wx, will wait for the garbage collector to delete the pods May 1 15:11:32.320: INFO: Deleting DaemonSet.extensions daemon-set took: 7.400565ms May 1 15:11:32.420: INFO: Terminating 
DaemonSet.extensions daemon-set pods took: 100.252341ms May 1 15:11:41.323: INFO: Number of nodes with available pods: 0 May 1 15:11:41.323: INFO: Number of running nodes: 0, number of available pods: 0 May 1 15:11:41.325: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-4k2wx/daemonsets","resourceVersion":"8189728"},"items":null} May 1 15:11:41.327: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-4k2wx/pods","resourceVersion":"8189728"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:11:41.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-4k2wx" for this suite. May 1 15:11:47.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:11:47.387: INFO: namespace: e2e-tests-daemonsets-4k2wx, resource: bindings, ignored listing per whitelist May 1 15:11:47.438: INFO: namespace e2e-tests-daemonsets-4k2wx deletion completed in 6.09951025s • [SLOW TEST:45.544 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:11:47.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 1 15:11:48.156: INFO: Waiting up to 5m0s for pod "downwardapi-volume-13d6d298-8bbe-11ea-acf7-0242ac110017" in namespace "e2e-tests-downward-api-6h6tn" to be "success or failure" May 1 15:11:48.167: INFO: Pod "downwardapi-volume-13d6d298-8bbe-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 11.825274ms May 1 15:11:50.171: INFO: Pod "downwardapi-volume-13d6d298-8bbe-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015455013s May 1 15:11:52.290: INFO: Pod "downwardapi-volume-13d6d298-8bbe-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.134451072s STEP: Saw pod success May 1 15:11:52.290: INFO: Pod "downwardapi-volume-13d6d298-8bbe-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:11:52.293: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-13d6d298-8bbe-11ea-acf7-0242ac110017 container client-container: STEP: delete the pod May 1 15:11:52.603: INFO: Waiting for pod downwardapi-volume-13d6d298-8bbe-11ea-acf7-0242ac110017 to disappear May 1 15:11:52.665: INFO: Pod downwardapi-volume-13d6d298-8bbe-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:11:52.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-6h6tn" for this suite. May 1 15:11:58.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:11:59.669: INFO: namespace: e2e-tests-downward-api-6h6tn, resource: bindings, ignored listing per whitelist May 1 15:11:59.711: INFO: namespace e2e-tests-downward-api-6h6tn deletion completed in 7.041705363s • [SLOW TEST:12.273 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:11:59.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-f2p4 STEP: Creating a pod to test atomic-volume-subpath May 1 15:12:00.567: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-f2p4" in namespace "e2e-tests-subpath-lrfz4" to be "success or failure" May 1 15:12:00.818: INFO: Pod "pod-subpath-test-downwardapi-f2p4": Phase="Pending", Reason="", readiness=false. Elapsed: 250.737473ms May 1 15:12:02.822: INFO: Pod "pod-subpath-test-downwardapi-f2p4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.254923113s May 1 15:12:04.825: INFO: Pod "pod-subpath-test-downwardapi-f2p4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.258703917s May 1 15:12:06.830: INFO: Pod "pod-subpath-test-downwardapi-f2p4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.263528053s May 1 15:12:08.841: INFO: Pod "pod-subpath-test-downwardapi-f2p4": Phase="Running", Reason="", readiness=false. Elapsed: 8.274322326s May 1 15:12:10.846: INFO: Pod "pod-subpath-test-downwardapi-f2p4": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.278912355s May 1 15:12:12.849: INFO: Pod "pod-subpath-test-downwardapi-f2p4": Phase="Running", Reason="", readiness=false. Elapsed: 12.28241737s May 1 15:12:14.853: INFO: Pod "pod-subpath-test-downwardapi-f2p4": Phase="Running", Reason="", readiness=false. Elapsed: 14.285777999s May 1 15:12:16.856: INFO: Pod "pod-subpath-test-downwardapi-f2p4": Phase="Running", Reason="", readiness=false. Elapsed: 16.289664015s May 1 15:12:18.861: INFO: Pod "pod-subpath-test-downwardapi-f2p4": Phase="Running", Reason="", readiness=false. Elapsed: 18.294148584s May 1 15:12:20.864: INFO: Pod "pod-subpath-test-downwardapi-f2p4": Phase="Running", Reason="", readiness=false. Elapsed: 20.29769268s May 1 15:12:22.869: INFO: Pod "pod-subpath-test-downwardapi-f2p4": Phase="Running", Reason="", readiness=false. Elapsed: 22.302181727s May 1 15:12:24.872: INFO: Pod "pod-subpath-test-downwardapi-f2p4": Phase="Running", Reason="", readiness=false. Elapsed: 24.305465612s May 1 15:12:26.876: INFO: Pod "pod-subpath-test-downwardapi-f2p4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.309186174s STEP: Saw pod success May 1 15:12:26.876: INFO: Pod "pod-subpath-test-downwardapi-f2p4" satisfied condition "success or failure" May 1 15:12:26.878: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-f2p4 container test-container-subpath-downwardapi-f2p4: STEP: delete the pod May 1 15:12:26.931: INFO: Waiting for pod pod-subpath-test-downwardapi-f2p4 to disappear May 1 15:12:27.149: INFO: Pod pod-subpath-test-downwardapi-f2p4 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-f2p4 May 1 15:12:27.149: INFO: Deleting pod "pod-subpath-test-downwardapi-f2p4" in namespace "e2e-tests-subpath-lrfz4" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:12:27.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-lrfz4" for this suite. 
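The pod-subpath-test-downwardapi-f2p4 pod above is the "Atomic writer volumes / subpaths with downward pod" case: a downward API volume is mounted into the container through a subPath. A minimal sketch of that shape, using the core/v1 Go types; the volume name, paths, image and command are illustrative, not the values the suite actually uses.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-downwardapi"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "downward/podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /vol/podname && sleep 30"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/vol",
					SubPath:   "downward", // only this subdirectory of the volume is mounted
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}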
May 1 15:12:35.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:12:35.259: INFO: namespace: e2e-tests-subpath-lrfz4, resource: bindings, ignored listing per whitelist May 1 15:12:35.262: INFO: namespace e2e-tests-subpath-lrfz4 deletion completed in 8.106957996s • [SLOW TEST:35.550 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:12:35.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium May 1 15:12:36.125: INFO: Waiting up to 5m0s for pod "pod-30566d37-8bbe-11ea-acf7-0242ac110017" in namespace "e2e-tests-emptydir-hlrrb" to be "success or failure" May 1 15:12:36.164: INFO: Pod "pod-30566d37-8bbe-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 38.527446ms May 1 15:12:38.168: INFO: Pod "pod-30566d37-8bbe-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042441315s May 1 15:12:40.458: INFO: Pod "pod-30566d37-8bbe-11ea-acf7-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.333013304s May 1 15:12:42.463: INFO: Pod "pod-30566d37-8bbe-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.33747928s STEP: Saw pod success May 1 15:12:42.463: INFO: Pod "pod-30566d37-8bbe-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:12:42.466: INFO: Trying to get logs from node hunter-worker pod pod-30566d37-8bbe-11ea-acf7-0242ac110017 container test-container: STEP: delete the pod May 1 15:12:42.590: INFO: Waiting for pod pod-30566d37-8bbe-11ea-acf7-0242ac110017 to disappear May 1 15:12:42.600: INFO: Pod pod-30566d37-8bbe-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:12:42.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-hlrrb" for this suite. 
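The emptydir case that just finished ("volume on default medium should have the correct mode") comes down to a pod that mounts an emptyDir backed by the node's filesystem and inspects the mount's permission bits. A minimal sketch with the core/v1 types; the busybox image and the ls command are stand-ins for whatever the suite's test image actually runs.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "" = default (node disk); "Memory" gives tmpfs, the
					// variant exercised by the (non-root,0777,tmpfs) test earlier.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -ld /test-volume"}, // prints the mount's mode for the assertion
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}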
May 1 15:12:48.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:12:48.970: INFO: namespace: e2e-tests-emptydir-hlrrb, resource: bindings, ignored listing per whitelist May 1 15:12:48.999: INFO: namespace e2e-tests-emptydir-hlrrb deletion completed in 6.395664545s • [SLOW TEST:13.737 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:12:48.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 1 15:12:49.214: INFO: Waiting up to 5m0s for pod "downwardapi-volume-383b9f7d-8bbe-11ea-acf7-0242ac110017" in namespace "e2e-tests-projected-qlfdq" to be "success or failure" May 1 15:12:49.235: INFO: Pod "downwardapi-volume-383b9f7d-8bbe-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 21.897731ms May 1 15:12:51.309: INFO: Pod "downwardapi-volume-383b9f7d-8bbe-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095207206s May 1 15:12:53.313: INFO: Pod "downwardapi-volume-383b9f7d-8bbe-11ea-acf7-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.098964235s May 1 15:12:55.316: INFO: Pod "downwardapi-volume-383b9f7d-8bbe-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.102879075s STEP: Saw pod success May 1 15:12:55.317: INFO: Pod "downwardapi-volume-383b9f7d-8bbe-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:12:55.320: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-383b9f7d-8bbe-11ea-acf7-0242ac110017 container client-container: STEP: delete the pod May 1 15:12:55.567: INFO: Waiting for pod downwardapi-volume-383b9f7d-8bbe-11ea-acf7-0242ac110017 to disappear May 1 15:12:55.607: INFO: Pod downwardapi-volume-383b9f7d-8bbe-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:12:55.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qlfdq" for this suite. 
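The projected downwardAPI test above ("should set mode on item file") sets an explicit file mode on an individual projected item. A sketch of the volume definition it implies, again with the core/v1 types; the 0400 mode, the "podinfo" name and the int32Ptr helper are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// A projected volume whose downwardAPI item carries an explicit per-file
	// mode, which is the property the test asserts on after mounting it.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
							Mode:     int32Ptr(0400),
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}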
May 1 15:13:01.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:13:02.022: INFO: namespace: e2e-tests-projected-qlfdq, resource: bindings, ignored listing per whitelist May 1 15:13:02.059: INFO: namespace e2e-tests-projected-qlfdq deletion completed in 6.448047884s • [SLOW TEST:13.060 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:13:02.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions May 1 15:13:02.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 1 15:13:02.686: INFO: stderr: "" May 1 15:13:02.686: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:13:02.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-bw7mh" for this suite. 
May 1 15:13:10.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:13:10.918: INFO: namespace: e2e-tests-kubectl-bw7mh, resource: bindings, ignored listing per whitelist May 1 15:13:10.919: INFO: namespace e2e-tests-kubectl-bw7mh deletion completed in 8.228846374s • [SLOW TEST:8.860 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:13:10.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 1 15:13:17.997: INFO: 10 pods remaining May 1 15:13:17.997: INFO: 10 pods has nil DeletionTimestamp May 1 15:13:17.997: INFO: May 1 15:13:19.388: INFO: 9 pods remaining May 1 15:13:19.388: INFO: 0 pods has nil DeletionTimestamp May 1 15:13:19.388: INFO: May 1 15:13:21.112: INFO: 0 pods remaining May 1 15:13:21.112: INFO: 0 pods has nil DeletionTimestamp May 1 15:13:21.112: INFO: May 1 15:13:21.920: INFO: 0 pods remaining May 1 15:13:21.920: INFO: 0 pods has nil DeletionTimestamp May 1 15:13:21.920: INFO: May 1 15:13:22.979: INFO: 0 pods remaining May 1 15:13:22.979: INFO: 0 pods has nil DeletionTimestamp May 1 15:13:22.979: INFO: STEP: Gathering metrics W0501 15:13:23.811110 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
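The garbage-collector case running here deletes a ReplicationController with a deleteOptions that keeps the RC around until all of its pods are gone, i.e. foreground cascading deletion; the "N pods remaining" lines below are the GC draining those pods. A sketch of issuing such a delete with client-go, assuming a recent client-go where calls take a context and a DeleteOptions value; the kubeconfig path, namespace and RC name are placeholders, not the test's actual values.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Foreground propagation: the RC stays (with a deletionTimestamp and the
	// foregroundDeletion finalizer) until the GC has removed all of its pods.
	fg := metav1.DeletePropagationForeground
	err = cs.CoreV1().ReplicationControllers("default").Delete(
		context.TODO(), "example-rc", metav1.DeleteOptions{PropagationPolicy: &fg})
	if err != nil {
		panic(err)
	}
	fmt.Println("delete issued with foreground propagation")
}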
May 1 15:13:23.811: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:13:23.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-9h2f7" for this suite. May 1 15:13:32.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:13:32.514: INFO: namespace: e2e-tests-gc-9h2f7, resource: bindings, ignored listing per whitelist May 1 15:13:32.543: INFO: namespace e2e-tests-gc-9h2f7 deletion completed in 8.729518789s • [SLOW TEST:21.624 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:13:32.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:13:38.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-npcp6" for this suite. 
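The kubelet case above runs a busybox command in a pod and then asserts the echoed text shows up in the container log. Reading that log with client-go looks roughly like the following, assuming a recent client-go; the namespace and pod name are placeholders.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Fetch the container log; the e2e test compares this output against
	// the string its busybox command echoed.
	raw, err := cs.CoreV1().Pods("default").
		GetLogs("busybox-scheduling-example", &corev1.PodLogOptions{}).
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Print(string(raw))
}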
May 1 15:14:24.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:14:24.912: INFO: namespace: e2e-tests-kubelet-test-npcp6, resource: bindings, ignored listing per whitelist May 1 15:14:25.021: INFO: namespace e2e-tests-kubelet-test-npcp6 deletion completed in 46.218379376s • [SLOW TEST:52.477 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:14:25.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-720160c0-8bbe-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume configMaps May 1 15:14:26.126: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7201fc8b-8bbe-11ea-acf7-0242ac110017" in namespace "e2e-tests-projected-8cj6x" to be "success or failure" May 1 15:14:26.195: INFO: Pod "pod-projected-configmaps-7201fc8b-8bbe-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 69.59914ms May 1 15:14:28.268: INFO: Pod "pod-projected-configmaps-7201fc8b-8bbe-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142524215s May 1 15:14:30.435: INFO: Pod "pod-projected-configmaps-7201fc8b-8bbe-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.309697787s May 1 15:14:33.197: INFO: Pod "pod-projected-configmaps-7201fc8b-8bbe-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.071185848s STEP: Saw pod success May 1 15:14:33.197: INFO: Pod "pod-projected-configmaps-7201fc8b-8bbe-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:14:33.200: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-7201fc8b-8bbe-11ea-acf7-0242ac110017 container projected-configmap-volume-test: STEP: delete the pod May 1 15:14:33.459: INFO: Waiting for pod pod-projected-configmaps-7201fc8b-8bbe-11ea-acf7-0242ac110017 to disappear May 1 15:14:33.463: INFO: Pod pod-projected-configmaps-7201fc8b-8bbe-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:14:33.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8cj6x" for this suite. 
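The projected-configMap case above ("consumable in multiple volumes in the same pod") mounts the same ConfigMap into a pod through two independent projected volumes. A sketch of those two volume definitions with the core/v1 types; the volume and ConfigMap names are illustrative, and the projectedConfigMapVolume helper is my own, not part of the suite.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func projectedConfigMapVolume(volName, cmName string) corev1.Volume {
	return corev1.Volume{
		Name: volName,
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
					},
				}},
			},
		},
	}
}

func main() {
	// The same ConfigMap backs two separate volumes in one pod spec; the test
	// mounts both and checks the projected content is identical in each.
	vols := []corev1.Volume{
		projectedConfigMapVolume("projected-configmap-volume", "projected-configmap-test-volume"),
		projectedConfigMapVolume("projected-configmap-volume-2", "projected-configmap-test-volume"),
	}
	out, _ := json.MarshalIndent(vols, "", "  ")
	fmt.Println(string(out))
}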
May 1 15:14:39.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:14:39.621: INFO: namespace: e2e-tests-projected-8cj6x, resource: bindings, ignored listing per whitelist May 1 15:14:39.663: INFO: namespace e2e-tests-projected-8cj6x deletion completed in 6.196848829s • [SLOW TEST:14.642 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:14:39.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 1 15:14:42.596: INFO: Pod name wrapped-volume-race-7bc12768-8bbe-11ea-acf7-0242ac110017: Found 0 pods out of 5 May 1 15:14:48.184: INFO: Pod name wrapped-volume-race-7bc12768-8bbe-11ea-acf7-0242ac110017: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-7bc12768-8bbe-11ea-acf7-0242ac110017 in namespace e2e-tests-emptydir-wrapper-czjd9, will wait for the garbage collector to delete the pods May 1 15:17:23.106: INFO: Deleting ReplicationController wrapped-volume-race-7bc12768-8bbe-11ea-acf7-0242ac110017 took: 6.695243ms May 1 15:17:23.207: INFO: Terminating ReplicationController wrapped-volume-race-7bc12768-8bbe-11ea-acf7-0242ac110017 pods took: 100.162973ms STEP: Creating RC which spawns configmap-volume pods May 1 15:18:13.384: INFO: Pod name wrapped-volume-race-f9137eba-8bbe-11ea-acf7-0242ac110017: Found 0 pods out of 5 May 1 15:18:18.391: INFO: Pod name wrapped-volume-race-f9137eba-8bbe-11ea-acf7-0242ac110017: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f9137eba-8bbe-11ea-acf7-0242ac110017 in namespace e2e-tests-emptydir-wrapper-czjd9, will wait for the garbage collector to delete the pods May 1 15:20:10.468: INFO: Deleting ReplicationController wrapped-volume-race-f9137eba-8bbe-11ea-acf7-0242ac110017 took: 5.843585ms May 1 15:20:14.468: INFO: Terminating ReplicationController wrapped-volume-race-f9137eba-8bbe-11ea-acf7-0242ac110017 pods took: 4.000268704s STEP: Creating RC which spawns configmap-volume pods May 1 15:21:02.123: INFO: Pod name wrapped-volume-race-5df6dade-8bbf-11ea-acf7-0242ac110017: Found 0 pods out of 5 May 1 15:21:07.151: INFO: Pod name wrapped-volume-race-5df6dade-8bbf-11ea-acf7-0242ac110017: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-5df6dade-8bbf-11ea-acf7-0242ac110017 in namespace 
e2e-tests-emptydir-wrapper-czjd9, will wait for the garbage collector to delete the pods May 1 15:23:47.433: INFO: Deleting ReplicationController wrapped-volume-race-5df6dade-8bbf-11ea-acf7-0242ac110017 took: 6.406787ms May 1 15:23:48.733: INFO: Terminating ReplicationController wrapped-volume-race-5df6dade-8bbf-11ea-acf7-0242ac110017 pods took: 1.300236798s STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:24:37.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-czjd9" for this suite. May 1 15:24:49.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:24:49.468: INFO: namespace: e2e-tests-emptydir-wrapper-czjd9, resource: bindings, ignored listing per whitelist May 1 15:24:49.511: INFO: namespace e2e-tests-emptydir-wrapper-czjd9 deletion completed in 12.101899165s • [SLOW TEST:609.849 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:24:49.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 1 15:24:49.771: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e5b71891-8bbf-11ea-acf7-0242ac110017" in namespace "e2e-tests-downward-api-rq8px" to be "success or failure" May 1 15:24:49.911: INFO: Pod "downwardapi-volume-e5b71891-8bbf-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 140.63505ms May 1 15:24:51.915: INFO: Pod "downwardapi-volume-e5b71891-8bbf-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144074788s May 1 15:24:54.039: INFO: Pod "downwardapi-volume-e5b71891-8bbf-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.268306076s May 1 15:24:56.124: INFO: Pod "downwardapi-volume-e5b71891-8bbf-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 6.353596805s May 1 15:24:58.144: INFO: Pod "downwardapi-volume-e5b71891-8bbf-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.373598838s STEP: Saw pod success May 1 15:24:58.145: INFO: Pod "downwardapi-volume-e5b71891-8bbf-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:24:58.148: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-e5b71891-8bbf-11ea-acf7-0242ac110017 container client-container: STEP: delete the pod May 1 15:24:58.509: INFO: Waiting for pod downwardapi-volume-e5b71891-8bbf-11ea-acf7-0242ac110017 to disappear May 1 15:24:58.743: INFO: Pod downwardapi-volume-e5b71891-8bbf-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:24:58.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-rq8px" for this suite. May 1 15:25:09.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:25:09.315: INFO: namespace: e2e-tests-downward-api-rq8px, resource: bindings, ignored listing per whitelist May 1 15:25:09.372: INFO: namespace e2e-tests-downward-api-rq8px deletion completed in 10.625518169s • [SLOW TEST:19.860 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:25:09.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 1 15:25:11.194: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f24640ee-8bbf-11ea-acf7-0242ac110017" in namespace "e2e-tests-projected-7cksv" to be "success or failure" May 1 15:25:11.768: INFO: Pod "downwardapi-volume-f24640ee-8bbf-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 574.09519ms May 1 15:25:13.773: INFO: Pod "downwardapi-volume-f24640ee-8bbf-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.578255357s May 1 15:25:16.504: INFO: Pod "downwardapi-volume-f24640ee-8bbf-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 5.310026808s May 1 15:25:18.509: INFO: Pod "downwardapi-volume-f24640ee-8bbf-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 7.314745497s May 1 15:25:20.512: INFO: Pod "downwardapi-volume-f24640ee-8bbf-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.31747911s May 1 15:25:22.516: INFO: Pod "downwardapi-volume-f24640ee-8bbf-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.321659645s STEP: Saw pod success May 1 15:25:22.516: INFO: Pod "downwardapi-volume-f24640ee-8bbf-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:25:22.519: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-f24640ee-8bbf-11ea-acf7-0242ac110017 container client-container: STEP: delete the pod May 1 15:25:23.266: INFO: Waiting for pod downwardapi-volume-f24640ee-8bbf-11ea-acf7-0242ac110017 to disappear May 1 15:25:23.584: INFO: Pod downwardapi-volume-f24640ee-8bbf-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:25:23.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-7cksv" for this suite. May 1 15:25:29.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:25:29.783: INFO: namespace: e2e-tests-projected-7cksv, resource: bindings, ignored listing per whitelist May 1 15:25:29.817: INFO: namespace e2e-tests-projected-7cksv deletion completed in 6.172382012s • [SLOW TEST:20.445 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:25:29.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-fdb5f4cc-8bbf-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume secrets May 1 15:25:30.017: INFO: Waiting up to 5m0s for pod "pod-secrets-fdb69f54-8bbf-11ea-acf7-0242ac110017" in namespace "e2e-tests-secrets-8hcht" to be "success or failure" May 1 15:25:30.021: INFO: Pod "pod-secrets-fdb69f54-8bbf-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.738386ms May 1 15:25:32.346: INFO: Pod "pod-secrets-fdb69f54-8bbf-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.328853513s May 1 15:25:34.351: INFO: Pod "pod-secrets-fdb69f54-8bbf-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.33307571s STEP: Saw pod success May 1 15:25:34.351: INFO: Pod "pod-secrets-fdb69f54-8bbf-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:25:34.354: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-fdb69f54-8bbf-11ea-acf7-0242ac110017 container secret-volume-test: STEP: delete the pod May 1 15:25:34.710: INFO: Waiting for pod pod-secrets-fdb69f54-8bbf-11ea-acf7-0242ac110017 to disappear May 1 15:25:34.722: INFO: Pod pod-secrets-fdb69f54-8bbf-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:25:34.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-8hcht" for this suite. May 1 15:25:42.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:25:42.838: INFO: namespace: e2e-tests-secrets-8hcht, resource: bindings, ignored listing per whitelist May 1 15:25:42.856: INFO: namespace e2e-tests-secrets-8hcht deletion completed in 8.131183021s • [SLOW TEST:13.038 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:25:42.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-jq597 May 1 15:25:49.737: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-jq597 STEP: checking the pod's current state and verifying that restartCount is present May 1 15:25:49.739: INFO: Initial restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:29:50.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-jq597" for this suite. 
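The probe case that just finished runs a pod whose liveness probe execs "cat /tmp/health" and then checks for four minutes that restartCount stays at 0. A minimal sketch of such a container with the core/v1 types; the image, command and probe timings are illustrative. Setting Exec through the promoted field (rather than in the composite literal) sidesteps the fact that the embedded handler struct is named differently across k8s.io/api versions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Liveness probe that keeps succeeding as long as /tmp/health exists;
	// the conformance test asserts the container is never restarted.
	probe := &corev1.Probe{
		InitialDelaySeconds: 15,
		PeriodSeconds:       5,
		FailureThreshold:    1,
	}
	probe.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}}

	container := corev1.Container{
		Name:          "liveness",
		Image:         "busybox",
		Command:       []string{"sh", "-c", "touch /tmp/health; sleep 600"},
		LivenessProbe: probe,
	}
	out, _ := json.MarshalIndent(container, "", "  ")
	fmt.Println(string(out))
}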
May 1 15:29:57.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:29:57.267: INFO: namespace: e2e-tests-container-probe-jq597, resource: bindings, ignored listing per whitelist May 1 15:29:57.300: INFO: namespace e2e-tests-container-probe-jq597 deletion completed in 6.203639208s • [SLOW TEST:254.443 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:29:57.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 1 15:29:57.557: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:30:05.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-489g7" for this suite. 
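The init-container case above creates a RestartAlways pod whose init containers must each run to completion, in order, before the regular container starts. A rough sketch of that pod shape; the images, names and commands are placeholders rather than the suite's actual values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			// Init containers run sequentially, each to completion, before
			// the regular containers below are started.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"true"}},
				{Name: "init2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "busybox", Command: []string{"sh", "-c", "sleep 3600"}},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}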
May 1 15:30:27.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:30:27.737: INFO: namespace: e2e-tests-init-container-489g7, resource: bindings, ignored listing per whitelist May 1 15:30:27.747: INFO: namespace e2e-tests-init-container-489g7 deletion completed in 22.094683148s • [SLOW TEST:30.447 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:30:27.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command May 1 15:30:27.909: INFO: Waiting up to 5m0s for pod "var-expansion-af47977d-8bc0-11ea-acf7-0242ac110017" in namespace "e2e-tests-var-expansion-zz5pw" to be "success or failure" May 1 15:30:27.912: INFO: Pod "var-expansion-af47977d-8bc0-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.532004ms May 1 15:30:29.915: INFO: Pod "var-expansion-af47977d-8bc0-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006258319s May 1 15:30:31.919: INFO: Pod "var-expansion-af47977d-8bc0-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009947369s STEP: Saw pod success May 1 15:30:31.919: INFO: Pod "var-expansion-af47977d-8bc0-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:30:31.922: INFO: Trying to get logs from node hunter-worker pod var-expansion-af47977d-8bc0-11ea-acf7-0242ac110017 container dapi-container: STEP: delete the pod May 1 15:30:31.971: INFO: Waiting for pod var-expansion-af47977d-8bc0-11ea-acf7-0242ac110017 to disappear May 1 15:30:31.991: INFO: Pod var-expansion-af47977d-8bc0-11ea-acf7-0242ac110017 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:30:31.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-zz5pw" for this suite. 
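The var-expansion case above defines an environment variable and references it with the $(VAR) syntax inside the container command; the kubelet substitutes the value before the command runs, and the test verifies that by checking the container output. A short sketch with illustrative values (the env var name and message are placeholders).

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// "$(MESSAGE)" is expanded by the kubelet from the container's env
	// before the shell ever sees the command line.
	container := corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox",
		Command: []string{"sh", "-c", "echo test-value is $(MESSAGE)"},
		Env: []corev1.EnvVar{
			{Name: "MESSAGE", Value: "hello from var expansion"},
		},
	}
	out, _ := json.MarshalIndent(container, "", "  ")
	fmt.Println(string(out))
}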
May 1 15:30:38.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:30:38.056: INFO: namespace: e2e-tests-var-expansion-zz5pw, resource: bindings, ignored listing per whitelist May 1 15:30:38.084: INFO: namespace e2e-tests-var-expansion-zz5pw deletion completed in 6.089850859s • [SLOW TEST:10.337 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:30:38.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod May 1 15:30:46.974: INFO: Pod pod-hostip-b567ce72-8bc0-11ea-acf7-0242ac110017 has hostIP: 172.17.0.4 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:30:46.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-dtk7g" for this suite. 
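The Pods spec above only asserts that status.hostIP is populated once the pod is scheduled (the log shows 172.17.0.4, the address of the kind worker node). The same check can be reproduced by hand with any pod; the name and image here are assumptions for illustration:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-hostip-demo           # illustrative name
spec:
  containers:
  - name: test
    image: k8s.gcr.io/pause:3.1   # any long-running image works
EOF
# once scheduled, the node address shows up in status.hostIP
kubectl get pod pod-hostip-demo -o jsonpath='{.status.hostIP}{"\n"}'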
May 1 15:31:15.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:31:15.532: INFO: namespace: e2e-tests-pods-dtk7g, resource: bindings, ignored listing per whitelist May 1 15:31:15.568: INFO: namespace e2e-tests-pods-dtk7g deletion completed in 28.590956121s • [SLOW TEST:37.483 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:31:15.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-cc5de23e-8bc0-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume secrets May 1 15:31:16.913: INFO: Waiting up to 5m0s for pod "pod-secrets-cc65a564-8bc0-11ea-acf7-0242ac110017" in namespace "e2e-tests-secrets-mmpd6" to be "success or failure" May 1 15:31:16.957: INFO: Pod "pod-secrets-cc65a564-8bc0-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 43.553939ms May 1 15:31:18.961: INFO: Pod "pod-secrets-cc65a564-8bc0-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047840462s May 1 15:31:21.643: INFO: Pod "pod-secrets-cc65a564-8bc0-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.729629245s May 1 15:31:23.648: INFO: Pod "pod-secrets-cc65a564-8bc0-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 6.734804108s May 1 15:31:25.717: INFO: Pod "pod-secrets-cc65a564-8bc0-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 8.803949494s May 1 15:31:27.821: INFO: Pod "pod-secrets-cc65a564-8bc0-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 10.907396616s May 1 15:31:29.909: INFO: Pod "pod-secrets-cc65a564-8bc0-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.995585719s STEP: Saw pod success May 1 15:31:29.909: INFO: Pod "pod-secrets-cc65a564-8bc0-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:31:29.911: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-cc65a564-8bc0-11ea-acf7-0242ac110017 container secret-env-test: STEP: delete the pod May 1 15:31:30.457: INFO: Waiting for pod pod-secrets-cc65a564-8bc0-11ea-acf7-0242ac110017 to disappear May 1 15:31:30.777: INFO: Pod pod-secrets-cc65a564-8bc0-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:31:30.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-mmpd6" for this suite. May 1 15:31:39.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:31:39.220: INFO: namespace: e2e-tests-secrets-mmpd6, resource: bindings, ignored listing per whitelist May 1 15:31:39.272: INFO: namespace e2e-tests-secrets-mmpd6 deletion completed in 8.490054293s • [SLOW TEST:23.704 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:31:39.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-da5ac5bd-8bc0-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume configMaps May 1 15:31:40.590: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-da6423ad-8bc0-11ea-acf7-0242ac110017" in namespace "e2e-tests-projected-z7jph" to be "success or failure" May 1 15:31:40.686: INFO: Pod "pod-projected-configmaps-da6423ad-8bc0-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 96.086696ms May 1 15:31:43.036: INFO: Pod "pod-projected-configmaps-da6423ad-8bc0-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.446713674s May 1 15:31:45.072: INFO: Pod "pod-projected-configmaps-da6423ad-8bc0-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.482036161s May 1 15:31:47.096: INFO: Pod "pod-projected-configmaps-da6423ad-8bc0-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 6.5063348s May 1 15:31:49.100: INFO: Pod "pod-projected-configmaps-da6423ad-8bc0-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.510548075s STEP: Saw pod success May 1 15:31:49.100: INFO: Pod "pod-projected-configmaps-da6423ad-8bc0-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:31:49.104: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-da6423ad-8bc0-11ea-acf7-0242ac110017 container projected-configmap-volume-test: STEP: delete the pod May 1 15:31:49.443: INFO: Waiting for pod pod-projected-configmaps-da6423ad-8bc0-11ea-acf7-0242ac110017 to disappear May 1 15:31:50.086: INFO: Pod pod-projected-configmaps-da6423ad-8bc0-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:31:50.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-z7jph" for this suite. May 1 15:31:56.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:31:56.343: INFO: namespace: e2e-tests-projected-z7jph, resource: bindings, ignored listing per whitelist May 1 15:31:56.396: INFO: namespace e2e-tests-projected-z7jph deletion completed in 6.305342865s • [SLOW TEST:17.124 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:31:56.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-e423fcd5-8bc0-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume secrets May 1 15:31:56.631: INFO: Waiting up to 5m0s for pod "pod-secrets-e428a7cc-8bc0-11ea-acf7-0242ac110017" in namespace "e2e-tests-secrets-5rgvn" to be "success or failure" May 1 15:31:56.677: INFO: Pod "pod-secrets-e428a7cc-8bc0-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 45.975156ms May 1 15:31:58.680: INFO: Pod "pod-secrets-e428a7cc-8bc0-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049003737s May 1 15:32:00.886: INFO: Pod "pod-secrets-e428a7cc-8bc0-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.255825566s May 1 15:32:03.025: INFO: Pod "pod-secrets-e428a7cc-8bc0-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 6.394349818s May 1 15:32:05.028: INFO: Pod "pod-secrets-e428a7cc-8bc0-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.397740066s STEP: Saw pod success May 1 15:32:05.028: INFO: Pod "pod-secrets-e428a7cc-8bc0-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:32:05.031: INFO: Trying to get logs from node hunter-worker pod pod-secrets-e428a7cc-8bc0-11ea-acf7-0242ac110017 container secret-volume-test: STEP: delete the pod May 1 15:32:05.568: INFO: Waiting for pod pod-secrets-e428a7cc-8bc0-11ea-acf7-0242ac110017 to disappear May 1 15:32:05.598: INFO: Pod pod-secrets-e428a7cc-8bc0-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:32:05.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-5rgvn" for this suite. May 1 15:32:13.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:32:13.839: INFO: namespace: e2e-tests-secrets-5rgvn, resource: bindings, ignored listing per whitelist May 1 15:32:13.857: INFO: namespace e2e-tests-secrets-5rgvn deletion completed in 8.256338652s • [SLOW TEST:17.461 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:32:13.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 15:32:14.338: INFO: Creating ReplicaSet my-hostname-basic-eeba4229-8bc0-11ea-acf7-0242ac110017 May 1 15:32:14.368: INFO: Pod name my-hostname-basic-eeba4229-8bc0-11ea-acf7-0242ac110017: Found 0 pods out of 1 May 1 15:32:19.373: INFO: Pod name my-hostname-basic-eeba4229-8bc0-11ea-acf7-0242ac110017: Found 1 pods out of 1 May 1 15:32:19.373: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-eeba4229-8bc0-11ea-acf7-0242ac110017" is running May 1 15:32:21.732: INFO: Pod "my-hostname-basic-eeba4229-8bc0-11ea-acf7-0242ac110017-scgtk" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 15:32:14 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 15:32:14 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-eeba4229-8bc0-11ea-acf7-0242ac110017]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 15:32:14 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-eeba4229-8bc0-11ea-acf7-0242ac110017]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 
00:00:00 +0000 UTC LastTransitionTime:2020-05-01 15:32:14 +0000 UTC Reason: Message:}]) May 1 15:32:21.732: INFO: Trying to dial the pod May 1 15:32:26.743: INFO: Controller my-hostname-basic-eeba4229-8bc0-11ea-acf7-0242ac110017: Got expected result from replica 1 [my-hostname-basic-eeba4229-8bc0-11ea-acf7-0242ac110017-scgtk]: "my-hostname-basic-eeba4229-8bc0-11ea-acf7-0242ac110017-scgtk", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:32:26.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-7vxwx" for this suite. May 1 15:32:32.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:32:32.860: INFO: namespace: e2e-tests-replicaset-7vxwx, resource: bindings, ignored listing per whitelist May 1 15:32:32.863: INFO: namespace e2e-tests-replicaset-7vxwx deletion completed in 6.115807044s • [SLOW TEST:19.005 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:32:32.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 1 15:32:33.081: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f9dff5fd-8bc0-11ea-acf7-0242ac110017" in namespace "e2e-tests-projected-f9spl" to be "success or failure" May 1 15:32:33.090: INFO: Pod "downwardapi-volume-f9dff5fd-8bc0-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 9.382687ms May 1 15:32:35.094: INFO: Pod "downwardapi-volume-f9dff5fd-8bc0-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013694404s May 1 15:32:37.099: INFO: Pod "downwardapi-volume-f9dff5fd-8bc0-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018401202s STEP: Saw pod success May 1 15:32:37.099: INFO: Pod "downwardapi-volume-f9dff5fd-8bc0-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:32:37.103: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-f9dff5fd-8bc0-11ea-acf7-0242ac110017 container client-container: STEP: delete the pod May 1 15:32:37.333: INFO: Waiting for pod downwardapi-volume-f9dff5fd-8bc0-11ea-acf7-0242ac110017 to disappear May 1 15:32:37.408: INFO: Pod downwardapi-volume-f9dff5fd-8bc0-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:32:37.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-f9spl" for this suite. May 1 15:32:43.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:32:43.471: INFO: namespace: e2e-tests-projected-f9spl, resource: bindings, ignored listing per whitelist May 1 15:32:43.497: INFO: namespace e2e-tests-projected-f9spl deletion completed in 6.08588341s • [SLOW TEST:10.634 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:32:43.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-qt45q STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-qt45q to expose endpoints map[] May 1 15:32:43.693: INFO: Get endpoints failed (12.820702ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 1 15:32:44.697: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-qt45q exposes endpoints map[] (1.017192907s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-qt45q STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-qt45q to expose endpoints map[pod1:[100]] May 1 15:32:48.843: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.139154466s elapsed, will retry) May 1 15:32:49.850: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-qt45q exposes endpoints map[pod1:[100]] (5.146233605s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-qt45q STEP: waiting up to 3m0s 
for service multi-endpoint-test in namespace e2e-tests-services-qt45q to expose endpoints map[pod2:[101] pod1:[100]] May 1 15:32:52.930: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-qt45q exposes endpoints map[pod1:[100] pod2:[101]] (3.075363715s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-qt45q STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-qt45q to expose endpoints map[pod2:[101]] May 1 15:32:54.037: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-qt45q exposes endpoints map[pod2:[101]] (1.103015153s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-qt45q STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-qt45q to expose endpoints map[] May 1 15:32:55.360: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-qt45q exposes endpoints map[] (1.319897031s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:32:55.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-qt45q" for this suite. May 1 15:33:03.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:33:03.977: INFO: namespace: e2e-tests-services-qt45q, resource: bindings, ignored listing per whitelist May 1 15:33:03.988: INFO: namespace e2e-tests-services-qt45q deletion completed in 8.099893719s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:20.491 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:33:03.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args May 1 15:33:04.435: INFO: Waiting up to 5m0s for pod "var-expansion-0c7bbe5d-8bc1-11ea-acf7-0242ac110017" in namespace "e2e-tests-var-expansion-dbksc" to be "success or failure" May 1 15:33:04.487: INFO: Pod "var-expansion-0c7bbe5d-8bc1-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 52.008988ms May 1 15:33:06.612: INFO: Pod "var-expansion-0c7bbe5d-8bc1-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.177681906s May 1 15:33:08.720: INFO: Pod "var-expansion-0c7bbe5d-8bc1-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.285379293s May 1 15:33:10.723: INFO: Pod "var-expansion-0c7bbe5d-8bc1-11ea-acf7-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 6.288537981s May 1 15:33:12.727: INFO: Pod "var-expansion-0c7bbe5d-8bc1-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.291795998s STEP: Saw pod success May 1 15:33:12.727: INFO: Pod "var-expansion-0c7bbe5d-8bc1-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:33:12.730: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-0c7bbe5d-8bc1-11ea-acf7-0242ac110017 container dapi-container: STEP: delete the pod May 1 15:33:13.269: INFO: Waiting for pod var-expansion-0c7bbe5d-8bc1-11ea-acf7-0242ac110017 to disappear May 1 15:33:13.310: INFO: Pod var-expansion-0c7bbe5d-8bc1-11ea-acf7-0242ac110017 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:33:13.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-dbksc" for this suite. May 1 15:33:19.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:33:19.567: INFO: namespace: e2e-tests-var-expansion-dbksc, resource: bindings, ignored listing per whitelist May 1 15:33:19.586: INFO: namespace e2e-tests-var-expansion-dbksc deletion completed in 6.140711278s • [SLOW TEST:15.598 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:33:19.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 1 15:33:19.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-rksl4' May 1 15:33:22.308: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 1 15:33:22.308: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 May 1 15:33:26.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-rksl4' May 1 15:33:26.556: INFO: stderr: "" May 1 15:33:26.556: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:33:26.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-rksl4" for this suite. May 1 15:33:50.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:33:50.586: INFO: namespace: e2e-tests-kubectl-rksl4, resource: bindings, ignored listing per whitelist May 1 15:33:50.769: INFO: namespace e2e-tests-kubectl-rksl4 deletion completed in 24.209910626s • [SLOW TEST:31.183 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:33:50.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod May 1 15:33:50.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-z6z4t' May 1 15:33:51.276: INFO: stderr: "" May 1 15:33:51.276: INFO: stdout: "pod/pause created\n" May 1 15:33:51.276: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 1 15:33:51.276: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-z6z4t" to be "running and ready" May 1 15:33:51.319: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 42.440766ms May 1 15:33:53.323: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046205839s May 1 15:33:55.326: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.05010662s May 1 15:33:55.327: INFO: Pod "pause" satisfied condition "running and ready" May 1 15:33:55.327: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod May 1 15:33:55.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-z6z4t' May 1 15:33:55.447: INFO: stderr: "" May 1 15:33:55.447: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 1 15:33:55.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-z6z4t' May 1 15:33:55.618: INFO: stderr: "" May 1 15:33:55.618: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 1 15:33:55.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-z6z4t' May 1 15:33:55.849: INFO: stderr: "" May 1 15:33:55.849: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 1 15:33:55.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-z6z4t' May 1 15:33:55.954: INFO: stderr: "" May 1 15:33:55.954: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources May 1 15:33:55.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-z6z4t' May 1 15:33:56.094: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 1 15:33:56.094: INFO: stdout: "pod \"pause\" force deleted\n" May 1 15:33:56.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-z6z4t' May 1 15:33:56.281: INFO: stderr: "No resources found.\n" May 1 15:33:56.282: INFO: stdout: "" May 1 15:33:56.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-z6z4t -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 1 15:33:56.387: INFO: stderr: "" May 1 15:33:56.387: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:33:56.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-z6z4t" for this suite. 
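The Kubectl label spec above is driven entirely by the CLI calls visible in the log: add a label, confirm it appears in the extra -L column, remove it with the trailing dash, confirm it is gone. The same sequence can be replayed by hand; the pause pod created below is an assumed stand-in for the manifest the suite pipes in on stdin:

# create a throwaway pod to label (any long-running pod will do)
kubectl run pause --image=k8s.gcr.io/pause:3.1 --restart=Never
# add the label, show it, then remove it with the trailing dash
kubectl label pods pause testing-label=testing-label-value
kubectl get pod pause -L testing-label
kubectl label pods pause testing-label-
kubectl get pod pause -L testing-label
# clean up, mirroring the forced delete the suite performs
kubectl delete pod pause --grace-period=0 --force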
May 1 15:34:02.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:34:02.998: INFO: namespace: e2e-tests-kubectl-z6z4t, resource: bindings, ignored listing per whitelist May 1 15:34:03.022: INFO: namespace e2e-tests-kubectl-z6z4t deletion completed in 6.63183471s • [SLOW TEST:12.253 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:34:03.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC May 1 15:34:03.131: INFO: namespace e2e-tests-kubectl-nrfhz May 1 15:34:03.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nrfhz' May 1 15:34:03.549: INFO: stderr: "" May 1 15:34:03.549: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 1 15:34:04.553: INFO: Selector matched 1 pods for map[app:redis] May 1 15:34:04.553: INFO: Found 0 / 1 May 1 15:34:05.552: INFO: Selector matched 1 pods for map[app:redis] May 1 15:34:05.552: INFO: Found 0 / 1 May 1 15:34:06.553: INFO: Selector matched 1 pods for map[app:redis] May 1 15:34:06.553: INFO: Found 0 / 1 May 1 15:34:07.626: INFO: Selector matched 1 pods for map[app:redis] May 1 15:34:07.626: INFO: Found 0 / 1 May 1 15:34:08.553: INFO: Selector matched 1 pods for map[app:redis] May 1 15:34:08.553: INFO: Found 0 / 1 May 1 15:34:09.554: INFO: Selector matched 1 pods for map[app:redis] May 1 15:34:09.554: INFO: Found 1 / 1 May 1 15:34:09.554: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 1 15:34:09.557: INFO: Selector matched 1 pods for map[app:redis] May 1 15:34:09.557: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 1 15:34:09.557: INFO: wait on redis-master startup in e2e-tests-kubectl-nrfhz May 1 15:34:09.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-w5jw9 redis-master --namespace=e2e-tests-kubectl-nrfhz' May 1 15:34:09.676: INFO: stderr: "" May 1 15:34:09.676: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 01 May 15:34:08.524 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 May 15:34:08.524 # Server started, Redis version 3.2.12\n1:M 01 May 15:34:08.524 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 01 May 15:34:08.525 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC May 1 15:34:09.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-nrfhz' May 1 15:34:09.899: INFO: stderr: "" May 1 15:34:09.899: INFO: stdout: "service/rm2 exposed\n" May 1 15:34:09.911: INFO: Service rm2 in namespace e2e-tests-kubectl-nrfhz found. STEP: exposing service May 1 15:34:12.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-nrfhz' May 1 15:34:12.375: INFO: stderr: "" May 1 15:34:12.375: INFO: stdout: "service/rm3 exposed\n" May 1 15:34:12.657: INFO: Service rm3 in namespace e2e-tests-kubectl-nrfhz found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:34:14.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-nrfhz" for this suite. 
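The Kubectl expose spec above creates a redis-master replication controller from a manifest piped on stdin and then derives two services from it. The expose commands below mirror the ones in the log; the RC manifest is an illustrative stand-in for the one the suite supplies:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis-master
        image: redis              # the suite pins a specific redis image; any redis will do here
        ports:
        - containerPort: 6379
EOF
# expose the RC as a service, then expose that service under a second name and port
kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
kubectl get endpoints rm2 rm3     # both should list the redis pod on port 6379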
May 1 15:34:38.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:34:38.987: INFO: namespace: e2e-tests-kubectl-nrfhz, resource: bindings, ignored listing per whitelist May 1 15:34:39.020: INFO: namespace e2e-tests-kubectl-nrfhz deletion completed in 24.351980202s • [SLOW TEST:35.997 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:34:39.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-2rtgt May 1 15:34:43.184: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-2rtgt STEP: checking the pod's current state and verifying that restartCount is present May 1 15:34:43.187: INFO: Initial restart count of pod liveness-http is 0 May 1 15:34:57.570: INFO: Restart count of pod e2e-tests-container-probe-2rtgt/liveness-http is now 1 (14.383160936s elapsed) May 1 15:35:17.680: INFO: Restart count of pod e2e-tests-container-probe-2rtgt/liveness-http is now 2 (34.492542282s elapsed) May 1 15:35:39.778: INFO: Restart count of pod e2e-tests-container-probe-2rtgt/liveness-http is now 3 (56.590504273s elapsed) May 1 15:36:00.116: INFO: Restart count of pod e2e-tests-container-probe-2rtgt/liveness-http is now 4 (1m16.929197383s elapsed) May 1 15:37:05.995: INFO: Restart count of pod e2e-tests-container-probe-2rtgt/liveness-http is now 5 (2m22.807483367s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:37:06.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-2rtgt" for this suite. 
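The Probing container spec above watches restartCount climb as a failing HTTP liveness probe keeps killing the liveness-http container. A minimal pod that behaves the same way, assuming the upstream liveness test image (any server whose /healthz starts failing after a few seconds would serve); name and probe timings are illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo        # illustrative name
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness    # test image whose /healthz starts returning 500 after ~10s
    args: ['/server']
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
EOF
# restartCount should only ever increase as the kubelet restarts the container
kubectl get pod liveness-http-demo -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'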
May 1 15:37:12.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:37:12.103: INFO: namespace: e2e-tests-container-probe-2rtgt, resource: bindings, ignored listing per whitelist May 1 15:37:12.107: INFO: namespace e2e-tests-container-probe-2rtgt deletion completed in 6.091648943s • [SLOW TEST:153.087 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:37:12.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 1 15:37:12.237: INFO: Pod name pod-release: Found 0 pods out of 1 May 1 15:37:17.275: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:37:18.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-7xn2t" for this suite. 
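The ReplicationController spec above relies on label-selector ownership: once a pod's labels stop matching the RC's selector, the controller "releases" it (drops the ownerReference) and scales a replacement back up. Assuming an RC whose selector is name=pod-release, as in the log, the release can be triggered by hand like this (the replacement label value is an arbitrary choice):

# pick one pod currently owned by the RC and overwrite its selector label
POD=$(kubectl get pods -l name=pod-release -o jsonpath='{.items[0].metadata.name}')
kubectl label pod "$POD" name=released --overwrite
# the relabelled pod loses its ownerReference while the RC creates a new matching pod
kubectl get pod "$POD" -o jsonpath='{.metadata.ownerReferences}{"\n"}'
kubectl get pods -l name=pod-release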
May 1 15:37:26.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:37:26.538: INFO: namespace: e2e-tests-replication-controller-7xn2t, resource: bindings, ignored listing per whitelist May 1 15:37:26.565: INFO: namespace e2e-tests-replication-controller-7xn2t deletion completed in 8.25075035s • [SLOW TEST:14.457 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:37:26.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-a8e8b584-8bc1-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume configMaps May 1 15:37:26.730: INFO: Waiting up to 5m0s for pod "pod-configmaps-a8e9267a-8bc1-11ea-acf7-0242ac110017" in namespace "e2e-tests-configmap-bccr5" to be "success or failure" May 1 15:37:26.756: INFO: Pod "pod-configmaps-a8e9267a-8bc1-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 26.153236ms May 1 15:37:28.760: INFO: Pod "pod-configmaps-a8e9267a-8bc1-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029873499s May 1 15:37:30.764: INFO: Pod "pod-configmaps-a8e9267a-8bc1-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034388451s May 1 15:37:32.874: INFO: Pod "pod-configmaps-a8e9267a-8bc1-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 6.143806137s May 1 15:37:34.877: INFO: Pod "pod-configmaps-a8e9267a-8bc1-11ea-acf7-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 8.146954906s May 1 15:37:36.881: INFO: Pod "pod-configmaps-a8e9267a-8bc1-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.150751315s STEP: Saw pod success May 1 15:37:36.881: INFO: Pod "pod-configmaps-a8e9267a-8bc1-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:37:36.883: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-a8e9267a-8bc1-11ea-acf7-0242ac110017 container configmap-volume-test: STEP: delete the pod May 1 15:37:36.909: INFO: Waiting for pod pod-configmaps-a8e9267a-8bc1-11ea-acf7-0242ac110017 to disappear May 1 15:37:36.932: INFO: Pod pod-configmaps-a8e9267a-8bc1-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:37:36.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-bccr5" for this suite. 
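The ConfigMap spec above mounts a ConfigMap as a volume and has a short-lived container print the file back out, which is what the "success or failure" wait is verifying. A hand-run sketch with assumed names, key and image:

kubectl create configmap cm-volume-demo --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-demo        # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: configmap-volume
    configMap:
      name: cm-volume-demo
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ['cat', '/etc/configmap-volume/data-1']
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
EOF
kubectl logs pod-configmap-demo   # should print "value-1" once the pod has succeeded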
May 1 15:37:44.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:37:45.001: INFO: namespace: e2e-tests-configmap-bccr5, resource: bindings, ignored listing per whitelist May 1 15:37:45.017: INFO: namespace e2e-tests-configmap-bccr5 deletion completed in 8.081136911s • [SLOW TEST:18.452 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:37:45.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 1 15:37:52.267: INFO: Successfully updated pod "annotationupdateb407c2d4-8bc1-11ea-acf7-0242ac110017" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:37:54.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2p9sl" for this suite. 
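The Projected downwardAPI spec above ("Successfully updated pod ...") verifies that a change to the pod's annotations is eventually reflected in the projected downward-API file inside the running container. A sketch of the same round trip; the pod name, annotation key, mount path and image are illustrative assumptions, and the file only catches up on the kubelet's next sync rather than instantly:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo     # illustrative name
  annotations:
    builder: alice
spec:
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
  containers:
  - name: client-container
    image: busybox
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
EOF
# change the annotation, then re-read the projected file after the kubelet resyncs
kubectl annotate pod annotationupdate-demo builder=bob --overwrite
kubectl exec annotationupdate-demo -- cat /etc/podinfo/annotations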
May 1 15:38:16.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:38:16.419: INFO: namespace: e2e-tests-projected-2p9sl, resource: bindings, ignored listing per whitelist May 1 15:38:16.438: INFO: namespace e2e-tests-projected-2p9sl deletion completed in 22.127946024s • [SLOW TEST:31.421 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:38:16.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0501 15:38:18.911206 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 1 15:38:18.911: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:38:18.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-hlclj" for this suite. 
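The Garbage collector spec above deletes a Deployment without orphaning and then polls until the dependent ReplicaSet and pods are gone (the "expected 0 rs, got 1 rs" lines are just the poll loop catching up). The same cascade can be observed by hand; the deployment name and image below are assumptions:

kubectl create deployment gc-demo --image=nginx
kubectl get rs,pods -l app=gc-demo     # one ReplicaSet plus its pods, owned by the deployment
# a plain delete cascades: the garbage collector removes the ReplicaSet and pods shortly after
kubectl delete deployment gc-demo
kubectl get rs,pods -l app=gc-demo     # empties out once the garbage collector catches up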
May 1 15:38:27.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:38:27.586: INFO: namespace: e2e-tests-gc-hlclj, resource: bindings, ignored listing per whitelist May 1 15:38:27.594: INFO: namespace e2e-tests-gc-hlclj deletion completed in 8.438673705s • [SLOW TEST:11.156 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:38:27.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 15:38:29.559: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 1 15:38:29.580: INFO: Number of nodes with available pods: 0 May 1 15:38:29.580: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
May 1 15:38:29.790: INFO: Number of nodes with available pods: 0 May 1 15:38:29.790: INFO: Node hunter-worker is running more than one daemon pod May 1 15:38:30.827: INFO: Number of nodes with available pods: 0 May 1 15:38:30.827: INFO: Node hunter-worker is running more than one daemon pod May 1 15:38:31.845: INFO: Number of nodes with available pods: 0 May 1 15:38:31.845: INFO: Node hunter-worker is running more than one daemon pod May 1 15:38:32.929: INFO: Number of nodes with available pods: 0 May 1 15:38:32.929: INFO: Node hunter-worker is running more than one daemon pod May 1 15:38:33.842: INFO: Number of nodes with available pods: 0 May 1 15:38:33.842: INFO: Node hunter-worker is running more than one daemon pod May 1 15:38:34.971: INFO: Number of nodes with available pods: 0 May 1 15:38:34.971: INFO: Node hunter-worker is running more than one daemon pod May 1 15:38:35.919: INFO: Number of nodes with available pods: 1 May 1 15:38:35.919: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 1 15:38:36.396: INFO: Number of nodes with available pods: 1 May 1 15:38:36.396: INFO: Number of running nodes: 0, number of available pods: 1 May 1 15:38:37.588: INFO: Number of nodes with available pods: 0 May 1 15:38:37.588: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 1 15:38:37.988: INFO: Number of nodes with available pods: 0 May 1 15:38:37.988: INFO: Node hunter-worker is running more than one daemon pod May 1 15:38:39.174: INFO: Number of nodes with available pods: 0 May 1 15:38:39.174: INFO: Node hunter-worker is running more than one daemon pod May 1 15:38:39.993: INFO: Number of nodes with available pods: 0 May 1 15:38:39.993: INFO: Node hunter-worker is running more than one daemon pod May 1 15:38:41.017: INFO: Number of nodes with available pods: 0 May 1 15:38:41.017: INFO: Node hunter-worker is running more than one daemon pod May 1 15:38:42.994: INFO: Number of nodes with available pods: 0 May 1 15:38:42.994: INFO: Node hunter-worker is running more than one daemon pod May 1 15:38:44.127: INFO: Number of nodes with available pods: 0 May 1 15:38:44.127: INFO: Node hunter-worker is running more than one daemon pod May 1 15:38:44.992: INFO: Number of nodes with available pods: 0 May 1 15:38:44.992: INFO: Node hunter-worker is running more than one daemon pod May 1 15:38:46.079: INFO: Number of nodes with available pods: 0 May 1 15:38:46.079: INFO: Node hunter-worker is running more than one daemon pod May 1 15:38:47.270: INFO: Number of nodes with available pods: 0 May 1 15:38:47.270: INFO: Node hunter-worker is running more than one daemon pod May 1 15:38:48.037: INFO: Number of nodes with available pods: 0 May 1 15:38:48.037: INFO: Node hunter-worker is running more than one daemon pod May 1 15:38:48.992: INFO: Number of nodes with available pods: 0 May 1 15:38:48.992: INFO: Node hunter-worker is running more than one daemon pod May 1 15:38:49.993: INFO: Number of nodes with available pods: 0 May 1 15:38:49.993: INFO: Node hunter-worker is running more than one daemon pod May 1 15:38:50.992: INFO: Number of nodes with available pods: 0 May 1 15:38:50.992: INFO: Node hunter-worker is running more than one daemon pod May 1 15:38:51.993: INFO: Number of nodes with available pods: 1 May 1 15:38:51.993: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] 
[sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-924f8, will wait for the garbage collector to delete the pods May 1 15:38:52.224: INFO: Deleting DaemonSet.extensions daemon-set took: 171.268213ms May 1 15:38:52.324: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.213304ms May 1 15:39:01.427: INFO: Number of nodes with available pods: 0 May 1 15:39:01.427: INFO: Number of running nodes: 0, number of available pods: 0 May 1 15:39:01.428: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-924f8/daemonsets","resourceVersion":"8194256"},"items":null} May 1 15:39:01.430: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-924f8/pods","resourceVersion":"8194256"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:39:01.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-924f8" for this suite. May 1 15:39:07.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:39:08.002: INFO: namespace: e2e-tests-daemonsets-924f8, resource: bindings, ignored listing per whitelist May 1 15:39:08.039: INFO: namespace e2e-tests-daemonsets-924f8 deletion completed in 6.459475332s • [SLOW TEST:40.444 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:39:08.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
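(The spec that follows creates a pod whose container registers a preStop httpGet lifecycle hook and then deletes the pod, verifying that the handler pod created above received the hook request. A minimal sketch of such a pod, with an illustrative image, path and port:)

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.1      # illustrative image
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop    # illustrative; the kubelet calls this endpoint before stopping the container
          port: 8080                 # illustrative; host defaults to the pod's own IP unless set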
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 1 15:39:20.780: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 1 15:39:20.857: INFO: Pod pod-with-prestop-http-hook still exists May 1 15:39:22.858: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 1 15:39:22.862: INFO: Pod pod-with-prestop-http-hook still exists May 1 15:39:24.858: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 1 15:39:25.167: INFO: Pod pod-with-prestop-http-hook still exists May 1 15:39:26.858: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 1 15:39:26.918: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:39:27.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-t8vn8" for this suite. May 1 15:39:51.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:39:51.367: INFO: namespace: e2e-tests-container-lifecycle-hook-t8vn8, resource: bindings, ignored listing per whitelist May 1 15:39:51.432: INFO: namespace e2e-tests-container-lifecycle-hook-t8vn8 deletion completed in 24.416158564s • [SLOW TEST:43.393 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:39:51.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 15:39:51.612: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 1 15:39:51.731: INFO: Pod name sample-pod: Found 0 pods out of 1 May 1 15:39:56.961: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 1 15:39:58.970: INFO: Creating deployment "test-rolling-update-deployment" May 1 15:39:58.974: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 1 15:39:59.140: INFO: new replicaset for deployment 
"test-rolling-update-deployment" is yet to be created May 1 15:40:01.147: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 1 15:40:01.199: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944399, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944399, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944399, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944399, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 15:40:03.203: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944399, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944399, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944399, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944399, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 15:40:05.289: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 1 15:40:05.544: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-c5f5k,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-c5f5k/deployments/test-rolling-update-deployment,UID:03ab9fb1-8bc2-11ea-99e8-0242ac110002,ResourceVersion:8194483,Generation:1,CreationTimestamp:2020-05-01 15:39:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-01 15:39:59 +0000 UTC 2020-05-01 15:39:59 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-01 15:40:03 +0000 UTC 2020-05-01 15:39:59 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 1 15:40:05.624: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-c5f5k,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-c5f5k/replicasets/test-rolling-update-deployment-75db98fb4c,UID:03c69b31-8bc2-11ea-99e8-0242ac110002,ResourceVersion:8194474,Generation:1,CreationTimestamp:2020-05-01 15:39:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 03ab9fb1-8bc2-11ea-99e8-0242ac110002 0xc002595567 0xc002595568}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 
75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 1 15:40:05.624: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 1 15:40:05.624: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-c5f5k,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-c5f5k/replicasets/test-rolling-update-controller,UID:ff48f8cc-8bc1-11ea-99e8-0242ac110002,ResourceVersion:8194482,Generation:2,CreationTimestamp:2020-05-01 15:39:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 03ab9fb1-8bc2-11ea-99e8-0242ac110002 0xc002595497 0xc002595498}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 1 15:40:05.628: INFO: Pod "test-rolling-update-deployment-75db98fb4c-l54cq" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-l54cq,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-c5f5k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-c5f5k/pods/test-rolling-update-deployment-75db98fb4c-l54cq,UID:03cace80-8bc2-11ea-99e8-0242ac110002,ResourceVersion:8194472,Generation:0,CreationTimestamp:2020-05-01 15:39:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 03c69b31-8bc2-11ea-99e8-0242ac110002 0xc002330cb7 0xc002330cb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x2xf5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x2xf5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-x2xf5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002330d30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002330d50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:39:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:40:03 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:40:03 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:39:59 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.27,StartTime:2020-05-01 15:39:59 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-01 15:40:02 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://82deacc966c25723d80c88de38387b53d289a5f36b3f78bc4171fcd10bd05af4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:40:05.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-c5f5k" for this suite. May 1 15:40:13.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:40:13.908: INFO: namespace: e2e-tests-deployment-c5f5k, resource: bindings, ignored listing per whitelist May 1 15:40:13.936: INFO: namespace e2e-tests-deployment-c5f5k deletion completed in 8.305510349s • [SLOW TEST:22.504 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:40:13.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-0cb573b0-8bc2-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume configMaps May 1 15:40:14.152: INFO: Waiting up to 5m0s for pod "pod-configmaps-0cb5f762-8bc2-11ea-acf7-0242ac110017" in namespace "e2e-tests-configmap-5g545" to be "success or failure" May 1 15:40:14.156: INFO: Pod "pod-configmaps-0cb5f762-8bc2-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.444079ms May 1 15:40:16.511: INFO: Pod "pod-configmaps-0cb5f762-8bc2-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.358869193s May 1 15:40:18.649: INFO: Pod "pod-configmaps-0cb5f762-8bc2-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.497405961s May 1 15:40:20.653: INFO: Pod "pod-configmaps-0cb5f762-8bc2-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.501431806s STEP: Saw pod success May 1 15:40:20.653: INFO: Pod "pod-configmaps-0cb5f762-8bc2-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:40:20.656: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-0cb5f762-8bc2-11ea-acf7-0242ac110017 container configmap-volume-test: STEP: delete the pod May 1 15:40:20.778: INFO: Waiting for pod pod-configmaps-0cb5f762-8bc2-11ea-acf7-0242ac110017 to disappear May 1 15:40:20.822: INFO: Pod pod-configmaps-0cb5f762-8bc2-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:40:20.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-5g545" for this suite. May 1 15:40:26.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:40:26.917: INFO: namespace: e2e-tests-configmap-5g545, resource: bindings, ignored listing per whitelist May 1 15:40:27.096: INFO: namespace e2e-tests-configmap-5g545 deletion completed in 6.270703102s • [SLOW TEST:13.160 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:40:27.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 1 15:40:34.168: INFO: Successfully updated pod "labelsupdate148bd50b-8bc2-11ea-acf7-0242ac110017" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:40:36.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-jt8vd" for this suite. 
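(The "should update labels on modification" spec above mounts the pod's own metadata.labels through a downward API volume and checks that the file is rewritten after the labels are patched. A minimal sketch of such a pod; names, label and image are illustrative assumptions:)

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate
  labels:
    key1: value1                     # illustrative; the test patches the labels and re-reads the mounted file
spec:
  containers:
  - name: client-container
    image: k8s.gcr.io/pause:3.1      # illustrative image
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels # kubelet keeps this file in sync with the pod's labels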
May 1 15:40:58.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:40:58.893: INFO: namespace: e2e-tests-downward-api-jt8vd, resource: bindings, ignored listing per whitelist May 1 15:40:58.916: INFO: namespace e2e-tests-downward-api-jt8vd deletion completed in 22.363982931s • [SLOW TEST:31.819 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:40:58.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-277efa53-8bc2-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume configMaps May 1 15:40:59.100: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-277f758f-8bc2-11ea-acf7-0242ac110017" in namespace "e2e-tests-projected-5jc27" to be "success or failure" May 1 15:40:59.109: INFO: Pod "pod-projected-configmaps-277f758f-8bc2-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 9.236869ms May 1 15:41:01.114: INFO: Pod "pod-projected-configmaps-277f758f-8bc2-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013558468s May 1 15:41:03.118: INFO: Pod "pod-projected-configmaps-277f758f-8bc2-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017810635s STEP: Saw pod success May 1 15:41:03.118: INFO: Pod "pod-projected-configmaps-277f758f-8bc2-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:41:03.121: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-277f758f-8bc2-11ea-acf7-0242ac110017 container projected-configmap-volume-test: STEP: delete the pod May 1 15:41:03.225: INFO: Waiting for pod pod-projected-configmaps-277f758f-8bc2-11ea-acf7-0242ac110017 to disappear May 1 15:41:03.379: INFO: Pod pod-projected-configmaps-277f758f-8bc2-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:41:03.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-5jc27" for this suite. 
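(The "consumable from pods in volume with mappings" spec above projects a ConfigMap key into the pod under a remapped file path. A minimal sketch, assuming an illustrative key, path and image; the ConfigMap name follows the pattern logged above without its unique suffix:)

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  containers:
  - name: projected-configmap-volume-test
    image: k8s.gcr.io/pause:3.1            # illustrative image
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map
          items:
          - key: data-1                    # illustrative key
            path: path/to/data-2           # the mapping: the key's value appears at this relative path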
May 1 15:41:09.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:41:09.439: INFO: namespace: e2e-tests-projected-5jc27, resource: bindings, ignored listing per whitelist May 1 15:41:09.535: INFO: namespace e2e-tests-projected-5jc27 deletion completed in 6.152288962s • [SLOW TEST:10.618 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:41:09.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 1 15:41:11.056: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2ea219fb-8bc2-11ea-acf7-0242ac110017" in namespace "e2e-tests-projected-57256" to be "success or failure" May 1 15:41:11.183: INFO: Pod "downwardapi-volume-2ea219fb-8bc2-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 126.482696ms May 1 15:41:13.188: INFO: Pod "downwardapi-volume-2ea219fb-8bc2-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131175729s May 1 15:41:15.387: INFO: Pod "downwardapi-volume-2ea219fb-8bc2-11ea-acf7-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.330784281s May 1 15:41:17.390: INFO: Pod "downwardapi-volume-2ea219fb-8bc2-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.333857698s STEP: Saw pod success May 1 15:41:17.390: INFO: Pod "downwardapi-volume-2ea219fb-8bc2-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:41:17.393: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-2ea219fb-8bc2-11ea-acf7-0242ac110017 container client-container: STEP: delete the pod May 1 15:41:19.573: INFO: Waiting for pod downwardapi-volume-2ea219fb-8bc2-11ea-acf7-0242ac110017 to disappear May 1 15:41:20.056: INFO: Pod downwardapi-volume-2ea219fb-8bc2-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:41:20.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-57256" for this suite. 
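(The "should provide podname only" spec above exposes just metadata.name through a projected downward API volume. A minimal sketch with illustrative names and image:)

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-podname
spec:
  containers:
  - name: client-container
    image: k8s.gcr.io/pause:3.1            # illustrative image
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name     # only the pod's own name is written into the volume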
May 1 15:41:32.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:41:32.310: INFO: namespace: e2e-tests-projected-57256, resource: bindings, ignored listing per whitelist May 1 15:41:32.343: INFO: namespace e2e-tests-projected-57256 deletion completed in 12.282700452s • [SLOW TEST:22.808 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:41:32.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:41:41.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-p9947" for this suite. 
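(The EmptyDir wrapper "should not conflict" spec above mounts a Secret volume and a ConfigMap volume, both implemented with emptyDir wrappers on the node, in one pod and then cleans them up. A rough sketch of that shape, with illustrative names and image:)

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-and-configmaps
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/pause:3.1            # illustrative image
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapper-test-secret      # illustrative name
  - name: configmap-volume
    configMap:
      name: wrapper-test-configmap         # illustrative name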
May 1 15:41:51.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:41:51.720: INFO: namespace: e2e-tests-emptydir-wrapper-p9947, resource: bindings, ignored listing per whitelist May 1 15:41:51.755: INFO: namespace e2e-tests-emptydir-wrapper-p9947 deletion completed in 10.110363657s • [SLOW TEST:19.412 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:41:51.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 1 15:41:52.389: INFO: Waiting up to 5m0s for pod "downward-api-4738f9dd-8bc2-11ea-acf7-0242ac110017" in namespace "e2e-tests-downward-api-2fg7v" to be "success or failure" May 1 15:41:52.596: INFO: Pod "downward-api-4738f9dd-8bc2-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 207.378396ms May 1 15:41:54.638: INFO: Pod "downward-api-4738f9dd-8bc2-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.248507002s May 1 15:41:56.833: INFO: Pod "downward-api-4738f9dd-8bc2-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.443910623s May 1 15:41:59.088: INFO: Pod "downward-api-4738f9dd-8bc2-11ea-acf7-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 6.698414808s May 1 15:42:01.091: INFO: Pod "downward-api-4738f9dd-8bc2-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.702210412s STEP: Saw pod success May 1 15:42:01.091: INFO: Pod "downward-api-4738f9dd-8bc2-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:42:01.094: INFO: Trying to get logs from node hunter-worker2 pod downward-api-4738f9dd-8bc2-11ea-acf7-0242ac110017 container dapi-container: STEP: delete the pod May 1 15:42:01.352: INFO: Waiting for pod downward-api-4738f9dd-8bc2-11ea-acf7-0242ac110017 to disappear May 1 15:42:01.950: INFO: Pod downward-api-4738f9dd-8bc2-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:42:01.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-2fg7v" for this suite. 
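(The "pod name, namespace and IP address as env vars" spec above injects those fields through downward API environment variables. A minimal sketch; the image and command are illustrative assumptions:)

apiVersion: v1
kind: Pod
metadata:
  name: downward-api
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                         # illustrative image
    command: ["sh", "-c", "env"]           # print the injected variables and exit
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP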
May 1 15:42:08.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:42:08.581: INFO: namespace: e2e-tests-downward-api-2fg7v, resource: bindings, ignored listing per whitelist May 1 15:42:08.586: INFO: namespace e2e-tests-downward-api-2fg7v deletion completed in 6.53958353s • [SLOW TEST:16.830 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:42:08.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 15:42:08.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 1 15:42:09.560: INFO: stderr: "" May 1 15:42:09.560: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T14:47:52Z\", GoVersion:\"go1.11.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:42:09.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-f2cgn" for this suite. 
May 1 15:42:15.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:42:15.752: INFO: namespace: e2e-tests-kubectl-f2cgn, resource: bindings, ignored listing per whitelist May 1 15:42:15.768: INFO: namespace e2e-tests-kubectl-f2cgn deletion completed in 6.070755234s • [SLOW TEST:7.182 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:42:15.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components May 1 15:42:16.112: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend May 1 15:42:16.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-45bwn' May 1 15:42:16.598: INFO: stderr: "" May 1 15:42:16.598: INFO: stdout: "service/redis-slave created\n" May 1 15:42:16.598: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend May 1 15:42:16.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-45bwn' May 1 15:42:16.936: INFO: stderr: "" May 1 15:42:16.936: INFO: stdout: "service/redis-master created\n" May 1 15:42:16.936: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 1 15:42:16.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-45bwn' May 1 15:42:17.820: INFO: stderr: "" May 1 15:42:17.820: INFO: stdout: "service/frontend created\n" May 1 15:42:17.820: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 May 1 15:42:17.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-45bwn' May 1 15:42:18.527: INFO: stderr: "" May 1 15:42:18.527: INFO: stdout: "deployment.extensions/frontend created\n" May 1 15:42:18.527: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 1 15:42:18.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-45bwn' May 1 15:42:18.859: INFO: stderr: "" May 1 15:42:18.859: INFO: stdout: "deployment.extensions/redis-master created\n" May 1 15:42:18.859: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 May 1 15:42:18.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-45bwn' May 1 15:42:19.238: INFO: stderr: "" May 1 15:42:19.238: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app May 1 15:42:19.238: INFO: Waiting for all frontend pods to be Running. May 1 15:42:34.289: INFO: Waiting for frontend to serve content. May 1 15:42:34.649: INFO: Trying to add a new entry to the guestbook. May 1 15:42:34.752: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 1 15:42:35.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-45bwn' May 1 15:42:35.635: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 1 15:42:35.635: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources May 1 15:42:35.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-45bwn' May 1 15:42:35.980: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 1 15:42:35.980: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 1 15:42:35.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-45bwn' May 1 15:42:36.121: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 1 15:42:36.121: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 1 15:42:36.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-45bwn' May 1 15:42:36.229: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 1 15:42:36.229: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources May 1 15:42:36.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-45bwn' May 1 15:42:36.374: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 1 15:42:36.374: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 1 15:42:36.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-45bwn' May 1 15:42:36.751: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 1 15:42:36.751: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:42:36.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-45bwn" for this suite. 
May 1 15:43:17.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:43:17.407: INFO: namespace: e2e-tests-kubectl-45bwn, resource: bindings, ignored listing per whitelist May 1 15:43:17.446: INFO: namespace e2e-tests-kubectl-45bwn deletion completed in 40.344291483s • [SLOW TEST:61.677 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:43:17.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 15:43:17.563: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 1 15:43:22.567: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 1 15:43:22.567: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 1 15:43:24.572: INFO: Creating deployment "test-rollover-deployment" May 1 15:43:24.581: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 1 15:43:26.588: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 1 15:43:26.593: INFO: Ensure that both replica sets have 1 created replica May 1 15:43:26.599: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 1 15:43:26.605: INFO: Updating deployment test-rollover-deployment May 1 15:43:26.605: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 1 15:43:28.742: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 1 15:43:28.747: INFO: Make sure deployment "test-rollover-deployment" is complete May 1 15:43:28.800: INFO: all replica sets need to contain the pod-template-hash label May 1 15:43:28.801: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944604, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944604, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944606, 
loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944604, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 15:43:30.872: INFO: all replica sets need to contain the pod-template-hash label May 1 15:43:30.872: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944604, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944604, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944606, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944604, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 15:43:32.834: INFO: all replica sets need to contain the pod-template-hash label May 1 15:43:32.834: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944604, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944604, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944611, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944604, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 15:43:34.806: INFO: all replica sets need to contain the pod-template-hash label May 1 15:43:34.806: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944604, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944604, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944611, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944604, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 15:43:36.808: INFO: all replica sets need to contain the pod-template-hash label May 1 15:43:36.808: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944604, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944604, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944611, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944604, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 15:43:38.837: INFO: all replica sets need to contain the pod-template-hash label May 1 15:43:38.837: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944604, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944604, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944611, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944604, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 15:43:43.549: INFO: all replica sets need to contain the pod-template-hash label May 1 15:43:43.549: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944604, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944604, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944611, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944604, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 15:43:46.287: INFO: May 1 15:43:46.288: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944604, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944604, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", 
Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944624, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944604, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 15:43:47.689: INFO: May 1 15:43:47.689: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 1 15:43:47.731: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-gc95w,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gc95w/deployments/test-rollover-deployment,UID:7e380a39-8bc2-11ea-99e8-0242ac110002,ResourceVersion:8195339,Generation:2,CreationTimestamp:2020-05-01 15:43:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-01 15:43:24 +0000 UTC 2020-05-01 15:43:24 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-01 15:43:46 +0000 UTC 2020-05-01 15:43:24 +0000 
UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 1 15:43:47.734: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-gc95w,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gc95w/replicasets/test-rollover-deployment-5b8479fdb6,UID:7f6e351c-8bc2-11ea-99e8-0242ac110002,ResourceVersion:8195325,Generation:2,CreationTimestamp:2020-05-01 15:43:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 7e380a39-8bc2-11ea-99e8-0242ac110002 0xc001fd6e17 0xc001fd6e18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 1 15:43:47.734: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 1 15:43:47.734: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-gc95w,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gc95w/replicasets/test-rollover-controller,UID:7a076202-8bc2-11ea-99e8-0242ac110002,ResourceVersion:8195336,Generation:2,CreationTimestamp:2020-05-01 15:43:17 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 7e380a39-8bc2-11ea-99e8-0242ac110002 0xc001fd6c87 0xc001fd6c88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 1 15:43:47.735: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-gc95w,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gc95w/replicasets/test-rollover-deployment-58494b7559,UID:7e3a775c-8bc2-11ea-99e8-0242ac110002,ResourceVersion:8195284,Generation:2,CreationTimestamp:2020-05-01 15:43:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 7e380a39-8bc2-11ea-99e8-0242ac110002 0xc001fd6d47 0xc001fd6d48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 
58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 1 15:43:47.738: INFO: Pod "test-rollover-deployment-5b8479fdb6-dzkrv" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-dzkrv,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-gc95w,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gc95w/pods/test-rollover-deployment-5b8479fdb6-dzkrv,UID:7f7c6f32-8bc2-11ea-99e8-0242ac110002,ResourceVersion:8195301,Generation:0,CreationTimestamp:2020-05-01 15:43:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 7f6e351c-8bc2-11ea-99e8-0242ac110002 0xc001fd7ac7 0xc001fd7ac8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q65gr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q65gr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-q65gr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001fd7b40} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001fd7b60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:43:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:43:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:43:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:43:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.35,StartTime:2020-05-01 15:43:26 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-01 15:43:30 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://bfc20dfcf3de333cddf5c736700b5e44d95bcb118307b3ef5d43f8f206ac22f7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:43:47.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-gc95w" for this suite. May 1 15:44:00.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:44:00.343: INFO: namespace: e2e-tests-deployment-gc95w, resource: bindings, ignored listing per whitelist May 1 15:44:00.804: INFO: namespace e2e-tests-deployment-gc95w deletion completed in 12.793000246s • [SLOW TEST:43.358 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:44:00.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium May 1 15:44:02.122: INFO: Waiting up to 5m0s for pod "pod-943ab638-8bc2-11ea-acf7-0242ac110017" in namespace "e2e-tests-emptydir-2t4mt" to be "success or failure" May 1 15:44:02.425: INFO: Pod "pod-943ab638-8bc2-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 303.088369ms May 1 15:44:04.429: INFO: Pod "pod-943ab638-8bc2-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.306217947s May 1 15:44:06.436: INFO: Pod "pod-943ab638-8bc2-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.313691036s STEP: Saw pod success May 1 15:44:06.436: INFO: Pod "pod-943ab638-8bc2-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:44:06.438: INFO: Trying to get logs from node hunter-worker pod pod-943ab638-8bc2-11ea-acf7-0242ac110017 container test-container: STEP: delete the pod May 1 15:44:06.488: INFO: Waiting for pod pod-943ab638-8bc2-11ea-acf7-0242ac110017 to disappear May 1 15:44:06.498: INFO: Pod pod-943ab638-8bc2-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:44:06.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-2t4mt" for this suite. May 1 15:44:12.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:44:12.544: INFO: namespace: e2e-tests-emptydir-2t4mt, resource: bindings, ignored listing per whitelist May 1 15:44:12.621: INFO: namespace e2e-tests-emptydir-2t4mt deletion completed in 6.120439579s • [SLOW TEST:11.817 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:44:12.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-9aee8c88-8bc2-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume configMaps May 1 15:44:12.794: INFO: Waiting up to 5m0s for pod "pod-configmaps-9af21503-8bc2-11ea-acf7-0242ac110017" in namespace "e2e-tests-configmap-ckm4g" to be "success or failure" May 1 15:44:12.838: INFO: Pod "pod-configmaps-9af21503-8bc2-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 43.918992ms May 1 15:44:14.841: INFO: Pod "pod-configmaps-9af21503-8bc2-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047569435s May 1 15:44:16.845: INFO: Pod "pod-configmaps-9af21503-8bc2-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.051385378s STEP: Saw pod success May 1 15:44:16.845: INFO: Pod "pod-configmaps-9af21503-8bc2-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:44:16.848: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-9af21503-8bc2-11ea-acf7-0242ac110017 container configmap-volume-test: STEP: delete the pod May 1 15:44:16.884: INFO: Waiting for pod pod-configmaps-9af21503-8bc2-11ea-acf7-0242ac110017 to disappear May 1 15:44:16.899: INFO: Pod pod-configmaps-9af21503-8bc2-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:44:16.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-ckm4g" for this suite. May 1 15:44:23.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:44:23.350: INFO: namespace: e2e-tests-configmap-ckm4g, resource: bindings, ignored listing per whitelist May 1 15:44:23.352: INFO: namespace e2e-tests-configmap-ckm4g deletion completed in 6.449248628s • [SLOW TEST:10.731 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:44:23.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 1 15:44:23.871: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a18f2476-8bc2-11ea-acf7-0242ac110017" in namespace "e2e-tests-downward-api-7w6xv" to be "success or failure" May 1 15:44:23.906: INFO: Pod "downwardapi-volume-a18f2476-8bc2-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 34.208519ms May 1 15:44:26.251: INFO: Pod "downwardapi-volume-a18f2476-8bc2-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.37971979s May 1 15:44:28.255: INFO: Pod "downwardapi-volume-a18f2476-8bc2-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.383265813s May 1 15:44:30.314: INFO: Pod "downwardapi-volume-a18f2476-8bc2-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.442658562s STEP: Saw pod success May 1 15:44:30.314: INFO: Pod "downwardapi-volume-a18f2476-8bc2-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:44:30.318: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-a18f2476-8bc2-11ea-acf7-0242ac110017 container client-container: STEP: delete the pod May 1 15:44:30.636: INFO: Waiting for pod downwardapi-volume-a18f2476-8bc2-11ea-acf7-0242ac110017 to disappear May 1 15:44:30.659: INFO: Pod downwardapi-volume-a18f2476-8bc2-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:44:30.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-7w6xv" for this suite. May 1 15:44:40.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:44:40.968: INFO: namespace: e2e-tests-downward-api-7w6xv, resource: bindings, ignored listing per whitelist May 1 15:44:41.049: INFO: namespace e2e-tests-downward-api-7w6xv deletion completed in 10.382458542s • [SLOW TEST:17.697 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:44:41.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-abe614c2-8bc2-11ea-acf7-0242ac110017 STEP: Creating secret with name secret-projected-all-test-volume-abe61496-8bc2-11ea-acf7-0242ac110017 STEP: Creating a pod to test Check all projections for projected volume plugin May 1 15:44:41.281: INFO: Waiting up to 5m0s for pod "projected-volume-abe61437-8bc2-11ea-acf7-0242ac110017" in namespace "e2e-tests-projected-xqcg2" to be "success or failure" May 1 15:44:41.377: INFO: Pod "projected-volume-abe61437-8bc2-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 96.383374ms May 1 15:44:43.382: INFO: Pod "projected-volume-abe61437-8bc2-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100944999s May 1 15:44:45.386: INFO: Pod "projected-volume-abe61437-8bc2-11ea-acf7-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.105101556s May 1 15:44:47.390: INFO: Pod "projected-volume-abe61437-8bc2-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.108892035s STEP: Saw pod success May 1 15:44:47.390: INFO: Pod "projected-volume-abe61437-8bc2-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:44:47.392: INFO: Trying to get logs from node hunter-worker2 pod projected-volume-abe61437-8bc2-11ea-acf7-0242ac110017 container projected-all-volume-test: STEP: delete the pod May 1 15:44:47.644: INFO: Waiting for pod projected-volume-abe61437-8bc2-11ea-acf7-0242ac110017 to disappear May 1 15:44:47.784: INFO: Pod projected-volume-abe61437-8bc2-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:44:47.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xqcg2" for this suite. May 1 15:44:56.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:44:56.517: INFO: namespace: e2e-tests-projected-xqcg2, resource: bindings, ignored listing per whitelist May 1 15:44:56.572: INFO: namespace e2e-tests-projected-xqcg2 deletion completed in 8.694044308s • [SLOW TEST:15.522 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:44:56.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token May 1 15:44:57.699: INFO: Waiting up to 5m0s for pod "pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-kqtnh" in namespace "e2e-tests-svcaccounts-4w7zl" to be "success or failure" May 1 15:44:57.702: INFO: Pod "pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-kqtnh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.908706ms May 1 15:44:59.707: INFO: Pod "pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-kqtnh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007456738s May 1 15:45:01.711: INFO: Pod "pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-kqtnh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011677587s May 1 15:45:03.821: INFO: Pod "pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-kqtnh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121436381s May 1 15:45:05.825: INFO: Pod "pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-kqtnh": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.125928974s May 1 15:45:07.829: INFO: Pod "pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-kqtnh": Phase="Pending", Reason="", readiness=false. Elapsed: 10.129715678s May 1 15:45:09.833: INFO: Pod "pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-kqtnh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.133854968s STEP: Saw pod success May 1 15:45:09.833: INFO: Pod "pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-kqtnh" satisfied condition "success or failure" May 1 15:45:09.836: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-kqtnh container token-test: STEP: delete the pod May 1 15:45:09.876: INFO: Waiting for pod pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-kqtnh to disappear May 1 15:45:09.889: INFO: Pod pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-kqtnh no longer exists STEP: Creating a pod to test consume service account root CA May 1 15:45:09.893: INFO: Waiting up to 5m0s for pod "pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-fvxvm" in namespace "e2e-tests-svcaccounts-4w7zl" to be "success or failure" May 1 15:45:09.911: INFO: Pod "pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-fvxvm": Phase="Pending", Reason="", readiness=false. Elapsed: 17.728473ms May 1 15:45:11.914: INFO: Pod "pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-fvxvm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021140736s May 1 15:45:14.288: INFO: Pod "pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-fvxvm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.39530436s May 1 15:45:16.450: INFO: Pod "pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-fvxvm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.556964809s May 1 15:45:18.454: INFO: Pod "pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-fvxvm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.561076881s STEP: Saw pod success May 1 15:45:18.454: INFO: Pod "pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-fvxvm" satisfied condition "success or failure" May 1 15:45:18.457: INFO: Trying to get logs from node hunter-worker pod pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-fvxvm container root-ca-test: STEP: delete the pod May 1 15:45:18.532: INFO: Waiting for pod pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-fvxvm to disappear May 1 15:45:18.536: INFO: Pod pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-fvxvm no longer exists STEP: Creating a pod to test consume service account namespace May 1 15:45:18.555: INFO: Waiting up to 5m0s for pod "pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-nnx5t" in namespace "e2e-tests-svcaccounts-4w7zl" to be "success or failure" May 1 15:45:18.574: INFO: Pod "pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-nnx5t": Phase="Pending", Reason="", readiness=false. Elapsed: 19.346356ms May 1 15:45:20.578: INFO: Pod "pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-nnx5t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02358392s May 1 15:45:22.583: INFO: Pod "pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-nnx5t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028276069s May 1 15:45:24.587: INFO: Pod "pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-nnx5t": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.031773714s May 1 15:45:26.590: INFO: Pod "pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-nnx5t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.035377303s STEP: Saw pod success May 1 15:45:26.590: INFO: Pod "pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-nnx5t" satisfied condition "success or failure" May 1 15:45:26.592: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-nnx5t container namespace-test: STEP: delete the pod May 1 15:45:26.641: INFO: Waiting for pod pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-nnx5t to disappear May 1 15:45:26.661: INFO: Pod pod-service-account-b5b8f447-8bc2-11ea-acf7-0242ac110017-nnx5t no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:45:26.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-4w7zl" for this suite. May 1 15:45:34.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:45:34.796: INFO: namespace: e2e-tests-svcaccounts-4w7zl, resource: bindings, ignored listing per whitelist May 1 15:45:34.798: INFO: namespace e2e-tests-svcaccounts-4w7zl deletion completed in 8.132968383s • [SLOW TEST:38.225 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:45:34.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 1 15:45:39.438: INFO: Successfully updated pod "pod-update-cbe6b2ed-8bc2-11ea-acf7-0242ac110017" STEP: verifying the updated pod is in kubernetes May 1 15:45:39.450: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:45:39.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-wfnx9" for this suite. 
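The "should be updated" test above submits a pod, mutates it in place, and re-reads it to confirm the change stuck ("Successfully updated pod ... Pod update OK"). A minimal sketch of that read-modify-write flow, assuming a client-go release contemporary with the v1.13 API server in this log (pre-context method signatures); the pod name is hypothetical, and only mutable fields such as labels are changed:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Namespace taken from the log; the pod name below is illustrative.
	pods := cs.CoreV1().Pods("e2e-tests-pods-wfnx9")
	pod, err := pods.Get("pod-update-example", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if pod.Labels == nil {
		pod.Labels = map[string]string{}
	}
	pod.Labels["time"] = "modified" // labels remain mutable on a running pod
	if _, err := pods.Update(pod); err != nil {
		panic(err)
	}
	fmt.Println("pod updated:", pod.Name)
}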
May 1 15:46:01.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:46:01.500: INFO: namespace: e2e-tests-pods-wfnx9, resource: bindings, ignored listing per whitelist May 1 15:46:01.575: INFO: namespace e2e-tests-pods-wfnx9 deletion completed in 22.121384777s • [SLOW TEST:26.777 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:46:01.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs May 1 15:46:01.706: INFO: Waiting up to 5m0s for pod "pod-dbdde7ee-8bc2-11ea-acf7-0242ac110017" in namespace "e2e-tests-emptydir-dkbph" to be "success or failure" May 1 15:46:01.716: INFO: Pod "pod-dbdde7ee-8bc2-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 9.300938ms May 1 15:46:03.864: INFO: Pod "pod-dbdde7ee-8bc2-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157155517s May 1 15:46:06.011: INFO: Pod "pod-dbdde7ee-8bc2-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.304776857s STEP: Saw pod success May 1 15:46:06.011: INFO: Pod "pod-dbdde7ee-8bc2-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:46:06.014: INFO: Trying to get logs from node hunter-worker pod pod-dbdde7ee-8bc2-11ea-acf7-0242ac110017 container test-container: STEP: delete the pod May 1 15:46:06.266: INFO: Waiting for pod pod-dbdde7ee-8bc2-11ea-acf7-0242ac110017 to disappear May 1 15:46:06.285: INFO: Pod pod-dbdde7ee-8bc2-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:46:06.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-dkbph" for this suite. 
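Each of these short-lived volume tests logs "Waiting up to 5m0s for pod ... to be \"success or failure\"" and then re-reads the pod phase every couple of seconds until it is terminal. A minimal sketch of that wait loop, again assuming pre-0.18 client-go; the namespace and pod name are the ones from the emptyDir test just above:

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	name := "pod-dbdde7ee-8bc2-11ea-acf7-0242ac110017" // pod created by the test above
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("e2e-tests-emptydir-dkbph").Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("phase=%s\n", pod.Status.Phase)
		// Stop once the pod reaches a terminal phase, i.e. "success or failure".
		return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
	})
	if err != nil {
		panic(err)
	}
}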
May 1 15:46:12.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:46:12.548: INFO: namespace: e2e-tests-emptydir-dkbph, resource: bindings, ignored listing per whitelist May 1 15:46:12.617: INFO: namespace e2e-tests-emptydir-dkbph deletion completed in 6.114373125s • [SLOW TEST:11.041 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:46:12.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-e27248e1-8bc2-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume configMaps May 1 15:46:12.738: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e272e757-8bc2-11ea-acf7-0242ac110017" in namespace "e2e-tests-projected-gctb7" to be "success or failure" May 1 15:46:12.742: INFO: Pod "pod-projected-configmaps-e272e757-8bc2-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.876674ms May 1 15:46:14.746: INFO: Pod "pod-projected-configmaps-e272e757-8bc2-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007994144s May 1 15:46:16.750: INFO: Pod "pod-projected-configmaps-e272e757-8bc2-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012473069s STEP: Saw pod success May 1 15:46:16.750: INFO: Pod "pod-projected-configmaps-e272e757-8bc2-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:46:16.753: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-e272e757-8bc2-11ea-acf7-0242ac110017 container projected-configmap-volume-test: STEP: delete the pod May 1 15:46:16.772: INFO: Waiting for pod pod-projected-configmaps-e272e757-8bc2-11ea-acf7-0242ac110017 to disappear May 1 15:46:16.777: INFO: Pod pod-projected-configmaps-e272e757-8bc2-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:46:16.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gctb7" for this suite. 
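The projected-configMap case above mounts a ConfigMap through a projected volume with an explicit defaultMode and reads the files back from a test container. A sketch of roughly how such a pod can be built and created with pre-0.18 client-go; the image, command, mount path, 0440 mode, and object names are illustrative rather than the framework's exact values:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	mode := int32(0440) // applied to every file projected into the volume
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/projected && cat /etc/projected/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("e2e-tests-projected-gctb7").Create(pod); err != nil {
		panic(err)
	}
}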
May 1 15:46:22.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:46:22.809: INFO: namespace: e2e-tests-projected-gctb7, resource: bindings, ignored listing per whitelist May 1 15:46:22.865: INFO: namespace e2e-tests-projected-gctb7 deletion completed in 6.084865533s • [SLOW TEST:10.248 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:46:22.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:46:29.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-mkrsj" for this suite. 
May 1 15:47:09.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:47:09.774: INFO: namespace: e2e-tests-kubelet-test-mkrsj, resource: bindings, ignored listing per whitelist May 1 15:47:09.847: INFO: namespace e2e-tests-kubelet-test-mkrsj deletion completed in 40.220942986s • [SLOW TEST:46.981 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:47:09.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-t9l5l/configmap-test-04d46d9e-8bc3-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume configMaps May 1 15:47:10.449: INFO: Waiting up to 5m0s for pod "pod-configmaps-04d7dc14-8bc3-11ea-acf7-0242ac110017" in namespace "e2e-tests-configmap-t9l5l" to be "success or failure" May 1 15:47:10.466: INFO: Pod "pod-configmaps-04d7dc14-8bc3-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 16.772968ms May 1 15:47:12.471: INFO: Pod "pod-configmaps-04d7dc14-8bc3-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021339896s May 1 15:47:14.474: INFO: Pod "pod-configmaps-04d7dc14-8bc3-11ea-acf7-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.025165062s May 1 15:47:16.479: INFO: Pod "pod-configmaps-04d7dc14-8bc3-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029243104s STEP: Saw pod success May 1 15:47:16.479: INFO: Pod "pod-configmaps-04d7dc14-8bc3-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:47:16.482: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-04d7dc14-8bc3-11ea-acf7-0242ac110017 container env-test: STEP: delete the pod May 1 15:47:16.557: INFO: Waiting for pod pod-configmaps-04d7dc14-8bc3-11ea-acf7-0242ac110017 to disappear May 1 15:47:16.562: INFO: Pod pod-configmaps-04d7dc14-8bc3-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:47:16.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-t9l5l" for this suite. 
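The [sig-node] ConfigMap test just above injects ConfigMap data into the container's environment rather than mounting a volume, using the ConfigMap e2e-tests-configmap-t9l5l/configmap-test-04d46d9e-8bc3-11ea-acf7-0242ac110017. A small sketch of the corresponding EnvVar wiring; the variable name and key are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// One environment variable sourced from a single ConfigMap key.
	env := []corev1.EnvVar{{
		Name: "CONFIG_DATA_1", // illustrative variable name
		ValueFrom: &corev1.EnvVarSource{
			ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{
					Name: "configmap-test-04d46d9e-8bc3-11ea-acf7-0242ac110017",
				},
				Key: "data-1", // illustrative key
			},
		},
	}}
	fmt.Printf("%+v\n", env)
	// This slice would be assigned to corev1.Container.Env in the test pod's spec.
}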
May 1 15:47:22.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:47:22.591: INFO: namespace: e2e-tests-configmap-t9l5l, resource: bindings, ignored listing per whitelist May 1 15:47:22.650: INFO: namespace e2e-tests-configmap-t9l5l deletion completed in 6.084384817s • [SLOW TEST:12.803 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:47:22.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-2x6s2 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet May 1 15:47:23.126: INFO: Found 0 stateful pods, waiting for 3 May 1 15:47:33.159: INFO: Found 2 stateful pods, waiting for 3 May 1 15:47:43.578: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 1 15:47:43.578: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 1 15:47:43.578: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 1 15:47:53.132: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 1 15:47:53.132: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 1 15:47:53.132: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 1 15:47:53.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2x6s2 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 1 15:47:53.558: INFO: stderr: "I0501 15:47:53.263008 1060 log.go:172] (0xc000888210) (0xc0008845a0) Create stream\nI0501 15:47:53.263067 1060 log.go:172] (0xc000888210) (0xc0008845a0) Stream added, broadcasting: 1\nI0501 15:47:53.265091 1060 log.go:172] (0xc000888210) Reply frame received for 1\nI0501 15:47:53.265255 1060 log.go:172] (0xc000888210) (0xc0006c0d20) Create stream\nI0501 15:47:53.265269 1060 log.go:172] (0xc000888210) (0xc0006c0d20) Stream added, broadcasting: 3\nI0501 15:47:53.266112 1060 log.go:172] (0xc000888210) Reply frame received for 3\nI0501 15:47:53.266158 1060 log.go:172] (0xc000888210) (0xc0001f0000) Create stream\nI0501 
15:47:53.266174 1060 log.go:172] (0xc000888210) (0xc0001f0000) Stream added, broadcasting: 5\nI0501 15:47:53.266880 1060 log.go:172] (0xc000888210) Reply frame received for 5\nI0501 15:47:53.552387 1060 log.go:172] (0xc000888210) Data frame received for 3\nI0501 15:47:53.552422 1060 log.go:172] (0xc0006c0d20) (3) Data frame handling\nI0501 15:47:53.552437 1060 log.go:172] (0xc0006c0d20) (3) Data frame sent\nI0501 15:47:53.552623 1060 log.go:172] (0xc000888210) Data frame received for 5\nI0501 15:47:53.552684 1060 log.go:172] (0xc0001f0000) (5) Data frame handling\nI0501 15:47:53.552743 1060 log.go:172] (0xc000888210) Data frame received for 3\nI0501 15:47:53.552758 1060 log.go:172] (0xc0006c0d20) (3) Data frame handling\nI0501 15:47:53.554327 1060 log.go:172] (0xc000888210) Data frame received for 1\nI0501 15:47:53.554348 1060 log.go:172] (0xc0008845a0) (1) Data frame handling\nI0501 15:47:53.554358 1060 log.go:172] (0xc0008845a0) (1) Data frame sent\nI0501 15:47:53.554368 1060 log.go:172] (0xc000888210) (0xc0008845a0) Stream removed, broadcasting: 1\nI0501 15:47:53.554386 1060 log.go:172] (0xc000888210) Go away received\nI0501 15:47:53.554546 1060 log.go:172] (0xc000888210) (0xc0008845a0) Stream removed, broadcasting: 1\nI0501 15:47:53.554561 1060 log.go:172] (0xc000888210) (0xc0006c0d20) Stream removed, broadcasting: 3\nI0501 15:47:53.554571 1060 log.go:172] (0xc000888210) (0xc0001f0000) Stream removed, broadcasting: 5\n" May 1 15:47:53.558: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 1 15:47:53.558: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 1 15:48:03.594: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 1 15:48:14.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2x6s2 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 1 15:48:14.527: INFO: stderr: "I0501 15:48:14.451577 1081 log.go:172] (0xc000844210) (0xc0000d8be0) Create stream\nI0501 15:48:14.451645 1081 log.go:172] (0xc000844210) (0xc0000d8be0) Stream added, broadcasting: 1\nI0501 15:48:14.456767 1081 log.go:172] (0xc000844210) Reply frame received for 1\nI0501 15:48:14.456813 1081 log.go:172] (0xc000844210) (0xc0000d8d20) Create stream\nI0501 15:48:14.456825 1081 log.go:172] (0xc000844210) (0xc0000d8d20) Stream added, broadcasting: 3\nI0501 15:48:14.458133 1081 log.go:172] (0xc000844210) Reply frame received for 3\nI0501 15:48:14.458167 1081 log.go:172] (0xc000844210) (0xc0000d8dc0) Create stream\nI0501 15:48:14.458184 1081 log.go:172] (0xc000844210) (0xc0000d8dc0) Stream added, broadcasting: 5\nI0501 15:48:14.459410 1081 log.go:172] (0xc000844210) Reply frame received for 5\nI0501 15:48:14.521660 1081 log.go:172] (0xc000844210) Data frame received for 5\nI0501 15:48:14.521703 1081 log.go:172] (0xc0000d8dc0) (5) Data frame handling\nI0501 15:48:14.521801 1081 log.go:172] (0xc000844210) Data frame received for 3\nI0501 15:48:14.521833 1081 log.go:172] (0xc0000d8d20) (3) Data frame handling\nI0501 15:48:14.521853 1081 log.go:172] (0xc0000d8d20) (3) Data frame sent\nI0501 15:48:14.521865 1081 log.go:172] (0xc000844210) Data frame received for 3\nI0501 15:48:14.521877 1081 log.go:172] (0xc0000d8d20) (3) Data frame 
handling\nI0501 15:48:14.523413 1081 log.go:172] (0xc000844210) Data frame received for 1\nI0501 15:48:14.523446 1081 log.go:172] (0xc0000d8be0) (1) Data frame handling\nI0501 15:48:14.523465 1081 log.go:172] (0xc0000d8be0) (1) Data frame sent\nI0501 15:48:14.523500 1081 log.go:172] (0xc000844210) (0xc0000d8be0) Stream removed, broadcasting: 1\nI0501 15:48:14.523579 1081 log.go:172] (0xc000844210) Go away received\nI0501 15:48:14.523837 1081 log.go:172] (0xc000844210) (0xc0000d8be0) Stream removed, broadcasting: 1\nI0501 15:48:14.523879 1081 log.go:172] (0xc000844210) (0xc0000d8d20) Stream removed, broadcasting: 3\nI0501 15:48:14.523900 1081 log.go:172] (0xc000844210) (0xc0000d8dc0) Stream removed, broadcasting: 5\n" May 1 15:48:14.527: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 1 15:48:14.527: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 1 15:48:24.617: INFO: Waiting for StatefulSet e2e-tests-statefulset-2x6s2/ss2 to complete update May 1 15:48:24.617: INFO: Waiting for Pod e2e-tests-statefulset-2x6s2/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 1 15:48:24.617: INFO: Waiting for Pod e2e-tests-statefulset-2x6s2/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 1 15:48:34.850: INFO: Waiting for StatefulSet e2e-tests-statefulset-2x6s2/ss2 to complete update May 1 15:48:34.850: INFO: Waiting for Pod e2e-tests-statefulset-2x6s2/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 1 15:48:44.625: INFO: Waiting for StatefulSet e2e-tests-statefulset-2x6s2/ss2 to complete update May 1 15:48:44.625: INFO: Waiting for Pod e2e-tests-statefulset-2x6s2/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 1 15:48:55.050: INFO: Waiting for StatefulSet e2e-tests-statefulset-2x6s2/ss2 to complete update STEP: Rolling back to a previous revision May 1 15:49:04.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2x6s2 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 1 15:49:05.027: INFO: stderr: "I0501 15:49:04.790720 1104 log.go:172] (0xc00015e840) (0xc000657360) Create stream\nI0501 15:49:04.790790 1104 log.go:172] (0xc00015e840) (0xc000657360) Stream added, broadcasting: 1\nI0501 15:49:04.793358 1104 log.go:172] (0xc00015e840) Reply frame received for 1\nI0501 15:49:04.793407 1104 log.go:172] (0xc00015e840) (0xc000700000) Create stream\nI0501 15:49:04.793426 1104 log.go:172] (0xc00015e840) (0xc000700000) Stream added, broadcasting: 3\nI0501 15:49:04.794227 1104 log.go:172] (0xc00015e840) Reply frame received for 3\nI0501 15:49:04.794257 1104 log.go:172] (0xc00015e840) (0xc000657400) Create stream\nI0501 15:49:04.794265 1104 log.go:172] (0xc00015e840) (0xc000657400) Stream added, broadcasting: 5\nI0501 15:49:04.795057 1104 log.go:172] (0xc00015e840) Reply frame received for 5\nI0501 15:49:05.022066 1104 log.go:172] (0xc00015e840) Data frame received for 3\nI0501 15:49:05.022097 1104 log.go:172] (0xc000700000) (3) Data frame handling\nI0501 15:49:05.022113 1104 log.go:172] (0xc000700000) (3) Data frame sent\nI0501 15:49:05.022163 1104 log.go:172] (0xc00015e840) Data frame received for 3\nI0501 15:49:05.022176 1104 log.go:172] (0xc000700000) (3) Data frame handling\nI0501 15:49:05.022491 1104 log.go:172] (0xc00015e840) Data frame received for 5\nI0501 15:49:05.022520 1104 log.go:172] (0xc000657400) (5) 
Data frame handling\nI0501 15:49:05.024137 1104 log.go:172] (0xc00015e840) Data frame received for 1\nI0501 15:49:05.024196 1104 log.go:172] (0xc000657360) (1) Data frame handling\nI0501 15:49:05.024237 1104 log.go:172] (0xc000657360) (1) Data frame sent\nI0501 15:49:05.024371 1104 log.go:172] (0xc00015e840) (0xc000657360) Stream removed, broadcasting: 1\nI0501 15:49:05.024570 1104 log.go:172] (0xc00015e840) (0xc000657360) Stream removed, broadcasting: 1\nI0501 15:49:05.024588 1104 log.go:172] (0xc00015e840) (0xc000700000) Stream removed, broadcasting: 3\nI0501 15:49:05.024596 1104 log.go:172] (0xc00015e840) (0xc000657400) Stream removed, broadcasting: 5\n" May 1 15:49:05.027: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 1 15:49:05.027: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 1 15:49:15.058: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 1 15:49:25.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2x6s2 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 1 15:49:25.415: INFO: stderr: "I0501 15:49:25.345014 1126 log.go:172] (0xc0008462c0) (0xc0006554a0) Create stream\nI0501 15:49:25.345089 1126 log.go:172] (0xc0008462c0) (0xc0006554a0) Stream added, broadcasting: 1\nI0501 15:49:25.347489 1126 log.go:172] (0xc0008462c0) Reply frame received for 1\nI0501 15:49:25.347533 1126 log.go:172] (0xc0008462c0) (0xc000655540) Create stream\nI0501 15:49:25.347544 1126 log.go:172] (0xc0008462c0) (0xc000655540) Stream added, broadcasting: 3\nI0501 15:49:25.348479 1126 log.go:172] (0xc0008462c0) Reply frame received for 3\nI0501 15:49:25.348534 1126 log.go:172] (0xc0008462c0) (0xc0001d6000) Create stream\nI0501 15:49:25.348555 1126 log.go:172] (0xc0008462c0) (0xc0001d6000) Stream added, broadcasting: 5\nI0501 15:49:25.349569 1126 log.go:172] (0xc0008462c0) Reply frame received for 5\nI0501 15:49:25.410557 1126 log.go:172] (0xc0008462c0) Data frame received for 5\nI0501 15:49:25.410599 1126 log.go:172] (0xc0001d6000) (5) Data frame handling\nI0501 15:49:25.410623 1126 log.go:172] (0xc0008462c0) Data frame received for 3\nI0501 15:49:25.410631 1126 log.go:172] (0xc000655540) (3) Data frame handling\nI0501 15:49:25.410641 1126 log.go:172] (0xc000655540) (3) Data frame sent\nI0501 15:49:25.410649 1126 log.go:172] (0xc0008462c0) Data frame received for 3\nI0501 15:49:25.410656 1126 log.go:172] (0xc000655540) (3) Data frame handling\nI0501 15:49:25.411922 1126 log.go:172] (0xc0008462c0) Data frame received for 1\nI0501 15:49:25.411943 1126 log.go:172] (0xc0006554a0) (1) Data frame handling\nI0501 15:49:25.411951 1126 log.go:172] (0xc0006554a0) (1) Data frame sent\nI0501 15:49:25.411966 1126 log.go:172] (0xc0008462c0) (0xc0006554a0) Stream removed, broadcasting: 1\nI0501 15:49:25.412091 1126 log.go:172] (0xc0008462c0) (0xc0006554a0) Stream removed, broadcasting: 1\nI0501 15:49:25.412101 1126 log.go:172] (0xc0008462c0) (0xc000655540) Stream removed, broadcasting: 3\nI0501 15:49:25.412107 1126 log.go:172] (0xc0008462c0) (0xc0001d6000) Stream removed, broadcasting: 5\n" May 1 15:49:25.416: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 1 15:49:25.416: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 1 15:49:35.520: INFO: Waiting for 
StatefulSet e2e-tests-statefulset-2x6s2/ss2 to complete update May 1 15:49:35.520: INFO: Waiting for Pod e2e-tests-statefulset-2x6s2/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 1 15:49:35.520: INFO: Waiting for Pod e2e-tests-statefulset-2x6s2/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 1 15:49:45.564: INFO: Waiting for StatefulSet e2e-tests-statefulset-2x6s2/ss2 to complete update May 1 15:49:45.564: INFO: Waiting for Pod e2e-tests-statefulset-2x6s2/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 1 15:49:45.564: INFO: Waiting for Pod e2e-tests-statefulset-2x6s2/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 1 15:49:55.527: INFO: Waiting for StatefulSet e2e-tests-statefulset-2x6s2/ss2 to complete update May 1 15:49:55.527: INFO: Waiting for Pod e2e-tests-statefulset-2x6s2/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 1 15:50:05.529: INFO: Waiting for StatefulSet e2e-tests-statefulset-2x6s2/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 1 15:50:15.528: INFO: Deleting all statefulset in ns e2e-tests-statefulset-2x6s2 May 1 15:50:15.531: INFO: Scaling statefulset ss2 to 0 May 1 15:50:45.687: INFO: Waiting for statefulset status.replicas updated to 0 May 1 15:50:45.690: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:50:45.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-2x6s2" for this suite. 
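Note: the rolling-update-and-rollback test above drives a StatefulSet through a template change and then back to the previous revision. As a rough illustration only (built against the v1.13-era k8s.io/api types this suite uses; the name, labels and image below are assumptions, not values from the log), a RollingUpdate StatefulSet spec looks roughly like this; the rollback is simply another template update back to the old image.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// rollingStatefulSet is an illustrative sketch, not the spec used by the e2e test.
func rollingStatefulSet(image string) *appsv1.StatefulSet {
	replicas := int32(3)
	labels := map[string]string{"app": "ss2-example"} // hypothetical label
	return &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss2-example"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "ss2-example", // headless Service assumed to exist
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate replaces pods in reverse ordinal order, which is
			// why the log shows ss2-1 picking up a new revision before ss2-0.
			UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
				Type: appsv1.RollingUpdateStatefulSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "nginx", Image: image}},
				},
			},
		},
	}
}

func main() {
	// Updating .Spec.Template (e.g. the image) creates a new controller
	// revision; re-applying the previous template is effectively the rollback.
	fmt.Println(rollingStatefulSet("docker.io/library/nginx:1.14-alpine").Name)
}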
May 1 15:50:53.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:50:53.778: INFO: namespace: e2e-tests-statefulset-2x6s2, resource: bindings, ignored listing per whitelist May 1 15:50:53.855: INFO: namespace e2e-tests-statefulset-2x6s2 deletion completed in 8.126961258s • [SLOW TEST:211.205 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:50:53.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments May 1 15:50:54.020: INFO: Waiting up to 5m0s for pod "client-containers-8a1b1130-8bc3-11ea-acf7-0242ac110017" in namespace "e2e-tests-containers-w9cr7" to be "success or failure" May 1 15:50:54.042: INFO: Pod "client-containers-8a1b1130-8bc3-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 21.817738ms May 1 15:50:56.047: INFO: Pod "client-containers-8a1b1130-8bc3-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026877599s May 1 15:50:58.051: INFO: Pod "client-containers-8a1b1130-8bc3-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030948002s May 1 15:51:00.055: INFO: Pod "client-containers-8a1b1130-8bc3-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034585244s STEP: Saw pod success May 1 15:51:00.055: INFO: Pod "client-containers-8a1b1130-8bc3-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:51:00.057: INFO: Trying to get logs from node hunter-worker pod client-containers-8a1b1130-8bc3-11ea-acf7-0242ac110017 container test-container: STEP: delete the pod May 1 15:51:00.094: INFO: Waiting for pod client-containers-8a1b1130-8bc3-11ea-acf7-0242ac110017 to disappear May 1 15:51:00.147: INFO: Pod client-containers-8a1b1130-8bc3-11ea-acf7-0242ac110017 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:51:00.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-w9cr7" for this suite. 
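Note: the "override the image's default arguments" test above relies on the fact that a container's Args field replaces the image CMD while leaving the ENTRYPOINT alone (Command, by contrast, would replace the ENTRYPOINT). A minimal sketch of such a pod follows; the pod name and image are assumptions, not the test's actual values.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// argsOverridePod is an illustrative sketch: Args overrides the image CMD,
// while Command (left unset here) would override the image ENTRYPOINT.
func argsOverridePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-example"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // assumption; the e2e test uses its own test image
				Args:  []string{"echo", "overridden", "arguments"},
			}},
		},
	}
}

func main() { fmt.Println(argsOverridePod().Name) }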
May 1 15:51:06.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:51:06.446: INFO: namespace: e2e-tests-containers-w9cr7, resource: bindings, ignored listing per whitelist May 1 15:51:06.471: INFO: namespace e2e-tests-containers-w9cr7 deletion completed in 6.319985149s • [SLOW TEST:12.616 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:51:06.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium May 1 15:51:06.597: INFO: Waiting up to 5m0s for pod "pod-919a7395-8bc3-11ea-acf7-0242ac110017" in namespace "e2e-tests-emptydir-d7x97" to be "success or failure" May 1 15:51:06.617: INFO: Pod "pod-919a7395-8bc3-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 19.935764ms May 1 15:51:08.621: INFO: Pod "pod-919a7395-8bc3-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023948736s May 1 15:51:10.625: INFO: Pod "pod-919a7395-8bc3-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028065397s STEP: Saw pod success May 1 15:51:10.626: INFO: Pod "pod-919a7395-8bc3-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:51:10.628: INFO: Trying to get logs from node hunter-worker2 pod pod-919a7395-8bc3-11ea-acf7-0242ac110017 container test-container: STEP: delete the pod May 1 15:51:10.745: INFO: Waiting for pod pod-919a7395-8bc3-11ea-acf7-0242ac110017 to disappear May 1 15:51:10.764: INFO: Pod pod-919a7395-8bc3-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:51:10.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-d7x97" for this suite. 
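Note: the emptyDir test above checks the ownership and permissions of files written into an emptyDir volume on the default (node-disk) medium. A rough sketch of that kind of pod, with illustrative names and image:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirPod mounts an emptyDir volume on the default medium and lists the
// mount point so mode and ownership can be inspected from the container logs.
func emptyDirPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // assumption
				Command: []string{"sh", "-c", "ls -ld /mnt/volume && touch /mnt/volume/f && ls -l /mnt/volume/f"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/mnt/volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// The zero-value StorageMedium ("") means the node's backing
					// storage, as opposed to corev1.StorageMediumMemory (tmpfs).
					EmptyDir: &corev1.EmptyDirVolumeSource{},
				},
			}},
		},
	}
}

func main() { fmt.Println(emptyDirPod().Name) }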
May 1 15:51:17.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:51:17.268: INFO: namespace: e2e-tests-emptydir-d7x97, resource: bindings, ignored listing per whitelist May 1 15:51:17.286: INFO: namespace e2e-tests-emptydir-d7x97 deletion completed in 6.519589177s • [SLOW TEST:10.815 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:51:17.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC May 1 15:51:17.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-sgtmn' May 1 15:51:22.197: INFO: stderr: "" May 1 15:51:22.197: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 1 15:51:23.204: INFO: Selector matched 1 pods for map[app:redis] May 1 15:51:23.204: INFO: Found 0 / 1 May 1 15:51:24.202: INFO: Selector matched 1 pods for map[app:redis] May 1 15:51:24.202: INFO: Found 0 / 1 May 1 15:51:25.202: INFO: Selector matched 1 pods for map[app:redis] May 1 15:51:25.202: INFO: Found 0 / 1 May 1 15:51:26.201: INFO: Selector matched 1 pods for map[app:redis] May 1 15:51:26.201: INFO: Found 1 / 1 May 1 15:51:26.201: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 1 15:51:26.204: INFO: Selector matched 1 pods for map[app:redis] May 1 15:51:26.204: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 1 15:51:26.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-cn879 --namespace=e2e-tests-kubectl-sgtmn -p {"metadata":{"annotations":{"x":"y"}}}' May 1 15:51:26.303: INFO: stderr: "" May 1 15:51:26.304: INFO: stdout: "pod/redis-master-cn879 patched\n" STEP: checking annotations May 1 15:51:26.324: INFO: Selector matched 1 pods for map[app:redis] May 1 15:51:26.324: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:51:26.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-sgtmn" for this suite. 
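Note: the kubectl patch invocation in the log adds the annotation x=y to the Redis pod via a strategic merge patch. The same operation can be done from Go; the sketch below uses the v1.13-era client-go Patch signature that matches this suite (newer releases also take a context.Context and metav1.PatchOptions), and the pod and namespace names are hypothetical.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig, as the suite does.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Strategic-merge patch equivalent to the kubectl command in the log:
	// it adds the annotation x=y without touching the rest of the object.
	patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)

	pod, err := client.CoreV1().Pods("example-ns").Patch("redis-master-example", types.StrategicMergePatchType, patch)
	if err != nil {
		panic(err)
	}
	fmt.Println(pod.Annotations["x"]) // expected: "y"
}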
May 1 15:51:50.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:51:50.434: INFO: namespace: e2e-tests-kubectl-sgtmn, resource: bindings, ignored listing per whitelist May 1 15:51:50.454: INFO: namespace e2e-tests-kubectl-sgtmn deletion completed in 24.127074177s • [SLOW TEST:33.168 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:51:50.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:52:36.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-s8wpq" for this suite. 
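Note: the container runtime test above starts containers that exit on their own and then checks RestartCount, Phase, the Ready condition and the container State; the terminate-cmd-rpa/rpof/rpn suffixes correspond to RestartPolicy Always, OnFailure and Never. A rough sketch of such a pod, with illustrative names and image:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// terminatingPod is an illustrative sketch of the kind of pod the runtime
// test observes: the container exits by itself, and the expected phase,
// restart count and Ready condition depend on the restart policy.
func terminatingPod(policy corev1.RestartPolicy, exitCode int) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "terminate-cmd-example"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: policy,
			Containers: []corev1.Container{{
				Name:    "terminate-cmd",
				Image:   "busybox", // assumption
				Command: []string{"sh", "-c", fmt.Sprintf("exit %d", exitCode)},
			}},
		},
	}
}

func main() {
	// With RestartPolicy=Never and exit 0 the pod ends up Succeeded with
	// RestartCount 0; with OnFailure and a non-zero exit the kubelet keeps
	// restarting the container and RestartCount grows.
	fmt.Println(terminatingPod(corev1.RestartPolicyNever, 0).Name)
}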
May 1 15:52:44.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:52:44.798: INFO: namespace: e2e-tests-container-runtime-s8wpq, resource: bindings, ignored listing per whitelist May 1 15:52:44.814: INFO: namespace e2e-tests-container-runtime-s8wpq deletion completed in 8.444508793s • [SLOW TEST:54.360 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:52:44.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:52:45.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-4m2n9" for this suite. 
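Note: the "Pods Set QOS Class" test above submits a pod and verifies that status.qosClass is populated by the API server. As a reminder of how that class is derived, a pod whose containers all have requests equal to limits is classed Guaranteed; the sketch below is illustrative (names, image and resource amounts are assumptions).

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// guaranteedPod sets requests equal to limits for its only container, so the
// API server records status.qosClass = Guaranteed once the pod is created.
func guaranteedPod() *corev1.Pod {
	res := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("100Mi"),
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "qos-example"}, // hypothetical name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:      "sleeper",
				Image:     "busybox", // assumption
				Command:   []string{"sleep", "3600"},
				Resources: corev1.ResourceRequirements{Requests: res, Limits: res},
			}},
		},
	}
}

func main() {
	fmt.Println(guaranteedPod().Name, "expects status.qosClass =", corev1.PodQOSGuaranteed)
}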
May 1 15:53:07.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:53:07.681: INFO: namespace: e2e-tests-pods-4m2n9, resource: bindings, ignored listing per whitelist May 1 15:53:07.756: INFO: namespace e2e-tests-pods-4m2n9 deletion completed in 22.173457156s • [SLOW TEST:22.941 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:53:07.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-d9df7791-8bc3-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume configMaps May 1 15:53:07.906: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d9e182b5-8bc3-11ea-acf7-0242ac110017" in namespace "e2e-tests-projected-67ctl" to be "success or failure" May 1 15:53:07.942: INFO: Pod "pod-projected-configmaps-d9e182b5-8bc3-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 36.663053ms May 1 15:53:10.073: INFO: Pod "pod-projected-configmaps-d9e182b5-8bc3-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167725943s May 1 15:53:12.078: INFO: Pod "pod-projected-configmaps-d9e182b5-8bc3-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.171999584s STEP: Saw pod success May 1 15:53:12.078: INFO: Pod "pod-projected-configmaps-d9e182b5-8bc3-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:53:12.080: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-d9e182b5-8bc3-11ea-acf7-0242ac110017 container projected-configmap-volume-test: STEP: delete the pod May 1 15:53:12.134: INFO: Waiting for pod pod-projected-configmaps-d9e182b5-8bc3-11ea-acf7-0242ac110017 to disappear May 1 15:53:12.147: INFO: Pod pod-projected-configmaps-d9e182b5-8bc3-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:53:12.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-67ctl" for this suite. 
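Note: the projected configMap test above consumes a ConfigMap through a projected volume rather than a plain configMap volume. A minimal sketch of that shape of pod spec follows; all names, paths and the image are illustrative assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedConfigMapPod mounts a ConfigMap via a projected volume source and
// reads one of its keys back out, which is roughly what the test verifies.
func projectedConfigMapPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox", // assumption
				Command: []string{"sh", "-c", "cat /etc/projected-configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-example"},
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { fmt.Println(projectedConfigMapPod().Name) }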
May 1 15:53:18.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:53:18.172: INFO: namespace: e2e-tests-projected-67ctl, resource: bindings, ignored listing per whitelist May 1 15:53:18.257: INFO: namespace e2e-tests-projected-67ctl deletion completed in 6.106423145s • [SLOW TEST:10.501 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:53:18.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-hnpg2 STEP: creating a selector STEP: Creating the service pods in kubernetes May 1 15:53:18.371: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 1 15:53:48.544: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.50 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-hnpg2 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 15:53:48.544: INFO: >>> kubeConfig: /root/.kube/config I0501 15:53:48.580061 6 log.go:172] (0xc0016984d0) (0xc001b84780) Create stream I0501 15:53:48.580089 6 log.go:172] (0xc0016984d0) (0xc001b84780) Stream added, broadcasting: 1 I0501 15:53:48.582587 6 log.go:172] (0xc0016984d0) Reply frame received for 1 I0501 15:53:48.582628 6 log.go:172] (0xc0016984d0) (0xc001f2a1e0) Create stream I0501 15:53:48.582643 6 log.go:172] (0xc0016984d0) (0xc001f2a1e0) Stream added, broadcasting: 3 I0501 15:53:48.583784 6 log.go:172] (0xc0016984d0) Reply frame received for 3 I0501 15:53:48.583836 6 log.go:172] (0xc0016984d0) (0xc001f2a280) Create stream I0501 15:53:48.583850 6 log.go:172] (0xc0016984d0) (0xc001f2a280) Stream added, broadcasting: 5 I0501 15:53:48.584826 6 log.go:172] (0xc0016984d0) Reply frame received for 5 I0501 15:53:50.000953 6 log.go:172] (0xc0016984d0) Data frame received for 3 I0501 15:53:50.000999 6 log.go:172] (0xc001f2a1e0) (3) Data frame handling I0501 15:53:50.001027 6 log.go:172] (0xc001f2a1e0) (3) Data frame sent I0501 15:53:50.001047 6 log.go:172] (0xc0016984d0) Data frame received for 3 I0501 15:53:50.001063 6 log.go:172] (0xc001f2a1e0) (3) Data frame handling I0501 15:53:50.001596 6 log.go:172] (0xc0016984d0) Data frame received for 5 I0501 15:53:50.001634 6 log.go:172] (0xc001f2a280) (5) Data frame handling I0501 15:53:50.007520 6 log.go:172] (0xc0016984d0) Data frame received for 1 I0501 15:53:50.007550 6 log.go:172] 
(0xc001b84780) (1) Data frame handling I0501 15:53:50.007573 6 log.go:172] (0xc001b84780) (1) Data frame sent I0501 15:53:50.007591 6 log.go:172] (0xc0016984d0) (0xc001b84780) Stream removed, broadcasting: 1 I0501 15:53:50.007618 6 log.go:172] (0xc0016984d0) Go away received I0501 15:53:50.007793 6 log.go:172] (0xc0016984d0) (0xc001b84780) Stream removed, broadcasting: 1 I0501 15:53:50.007822 6 log.go:172] (0xc0016984d0) (0xc001f2a1e0) Stream removed, broadcasting: 3 I0501 15:53:50.007833 6 log.go:172] (0xc0016984d0) (0xc001f2a280) Stream removed, broadcasting: 5 May 1 15:53:50.007: INFO: Found all expected endpoints: [netserver-0] May 1 15:53:50.030: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.44 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-hnpg2 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 15:53:50.030: INFO: >>> kubeConfig: /root/.kube/config I0501 15:53:50.276039 6 log.go:172] (0xc000925970) (0xc0025ce280) Create stream I0501 15:53:50.276065 6 log.go:172] (0xc000925970) (0xc0025ce280) Stream added, broadcasting: 1 I0501 15:53:50.281704 6 log.go:172] (0xc000925970) Reply frame received for 1 I0501 15:53:50.281762 6 log.go:172] (0xc000925970) (0xc001f2a320) Create stream I0501 15:53:50.281776 6 log.go:172] (0xc000925970) (0xc001f2a320) Stream added, broadcasting: 3 I0501 15:53:50.283137 6 log.go:172] (0xc000925970) Reply frame received for 3 I0501 15:53:50.283167 6 log.go:172] (0xc000925970) (0xc0025ce320) Create stream I0501 15:53:50.283176 6 log.go:172] (0xc000925970) (0xc0025ce320) Stream added, broadcasting: 5 I0501 15:53:50.284212 6 log.go:172] (0xc000925970) Reply frame received for 5 I0501 15:53:51.352638 6 log.go:172] (0xc000925970) Data frame received for 3 I0501 15:53:51.352695 6 log.go:172] (0xc001f2a320) (3) Data frame handling I0501 15:53:51.352737 6 log.go:172] (0xc001f2a320) (3) Data frame sent I0501 15:53:51.352764 6 log.go:172] (0xc000925970) Data frame received for 3 I0501 15:53:51.352790 6 log.go:172] (0xc001f2a320) (3) Data frame handling I0501 15:53:51.352821 6 log.go:172] (0xc000925970) Data frame received for 5 I0501 15:53:51.352846 6 log.go:172] (0xc0025ce320) (5) Data frame handling I0501 15:53:51.354650 6 log.go:172] (0xc000925970) Data frame received for 1 I0501 15:53:51.354728 6 log.go:172] (0xc0025ce280) (1) Data frame handling I0501 15:53:51.354773 6 log.go:172] (0xc0025ce280) (1) Data frame sent I0501 15:53:51.354808 6 log.go:172] (0xc000925970) (0xc0025ce280) Stream removed, broadcasting: 1 I0501 15:53:51.354846 6 log.go:172] (0xc000925970) Go away received I0501 15:53:51.354976 6 log.go:172] (0xc000925970) (0xc0025ce280) Stream removed, broadcasting: 1 I0501 15:53:51.355023 6 log.go:172] (0xc000925970) (0xc001f2a320) Stream removed, broadcasting: 3 I0501 15:53:51.355048 6 log.go:172] (0xc000925970) (0xc0025ce320) Stream removed, broadcasting: 5 May 1 15:53:51.355: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:53:51.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-hnpg2" for this suite. 
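Note: the node-pod UDP check above execs "echo 'hostName' | nc -w 1 -u <pod-ip> 8081" from a host-network test pod and expects each netserver pod to answer with its hostname. The same probe can be expressed in plain Go with the standard library; the address and timeouts below are illustrative, and outside that cluster the call will simply time out.

package main

import (
	"fmt"
	"net"
	"time"
)

// udpHostnameProbe mimics the exec'd netcat command: send a "hostName"
// request over UDP and read back whatever the netserver reports.
func udpHostnameProbe(addr string) (string, error) {
	conn, err := net.DialTimeout("udp", addr, 2*time.Second)
	if err != nil {
		return "", err
	}
	defer conn.Close()

	if _, err := conn.Write([]byte("hostName")); err != nil {
		return "", err
	}
	if err := conn.SetReadDeadline(time.Now().Add(2 * time.Second)); err != nil {
		return "", err
	}

	buf := make([]byte, 1024)
	n, err := conn.Read(buf)
	if err != nil {
		return "", err
	}
	return string(buf[:n]), nil
}

func main() {
	// 10.244.2.50:8081 is one of the pod IP:port pairs seen in the log.
	host, err := udpHostnameProbe("10.244.2.50:8081")
	fmt.Println(host, err)
}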
May 1 15:54:18.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:54:18.263: INFO: namespace: e2e-tests-pod-network-test-hnpg2, resource: bindings, ignored listing per whitelist May 1 15:54:18.268: INFO: namespace e2e-tests-pod-network-test-hnpg2 deletion completed in 26.909471569s • [SLOW TEST:60.011 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:54:18.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-0460bd20-8bc4-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume secrets May 1 15:54:19.862: INFO: Waiting up to 5m0s for pod "pod-secrets-04aad9e2-8bc4-11ea-acf7-0242ac110017" in namespace "e2e-tests-secrets-zzp8g" to be "success or failure" May 1 15:54:20.215: INFO: Pod "pod-secrets-04aad9e2-8bc4-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 352.971382ms May 1 15:54:22.219: INFO: Pod "pod-secrets-04aad9e2-8bc4-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.356972242s May 1 15:54:24.392: INFO: Pod "pod-secrets-04aad9e2-8bc4-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.529872643s May 1 15:54:26.398: INFO: Pod "pod-secrets-04aad9e2-8bc4-11ea-acf7-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 6.535471277s May 1 15:54:28.402: INFO: Pod "pod-secrets-04aad9e2-8bc4-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.539888961s STEP: Saw pod success May 1 15:54:28.402: INFO: Pod "pod-secrets-04aad9e2-8bc4-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:54:28.406: INFO: Trying to get logs from node hunter-worker pod pod-secrets-04aad9e2-8bc4-11ea-acf7-0242ac110017 container secret-volume-test: STEP: delete the pod May 1 15:54:28.501: INFO: Waiting for pod pod-secrets-04aad9e2-8bc4-11ea-acf7-0242ac110017 to disappear May 1 15:54:28.550: INFO: Pod pod-secrets-04aad9e2-8bc4-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:54:28.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-zzp8g" for this suite. 
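Note: the Secrets test above mounts a secret volume with an explicit defaultMode and checks the resulting file permissions inside the container. A rough sketch of that pod spec follows; the names, the image and the mode value are assumptions, not values from the log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretVolumePod mounts a Secret with an explicit defaultMode, which is the
// knob the defaultMode test exercises.
func secretVolumePod() *corev1.Pod {
	mode := int32(0400) // owner read-only; verified via ls -l inside the pod
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox", // assumption
				Command: []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "secret-test-example",
						DefaultMode: &mode,
					},
				},
			}},
		},
	}
}

func main() { fmt.Println(secretVolumePod().Name) }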
May 1 15:54:38.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:54:38.936: INFO: namespace: e2e-tests-secrets-zzp8g, resource: bindings, ignored listing per whitelist May 1 15:54:38.979: INFO: namespace e2e-tests-secrets-zzp8g deletion completed in 10.426221388s • [SLOW TEST:20.710 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:54:38.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 15:54:39.175: INFO: Creating deployment "test-recreate-deployment" May 1 15:54:39.186: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 1 15:54:40.152: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created May 1 15:54:43.126: INFO: Waiting deployment "test-recreate-deployment" to complete May 1 15:54:43.339: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945281, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945281, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945282, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945280, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 15:54:45.342: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945281, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945281, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945282, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945280, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 15:54:47.343: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945281, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945281, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945282, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945280, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 15:54:49.446: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945281, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945281, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945282, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945280, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 15:54:51.343: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 1 15:54:51.348: INFO: Updating deployment test-recreate-deployment May 1 15:54:51.348: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 1 15:54:54.771: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-xmf92,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xmf92/deployments/test-recreate-deployment,UID:105037a7-8bc4-11ea-99e8-0242ac110002,ResourceVersion:8197615,Generation:2,CreationTimestamp:2020-05-01 15:54:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-01 15:54:54 +0000 UTC 2020-05-01 15:54:54 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-01 15:54:54 +0000 UTC 2020-05-01 15:54:40 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 1 15:54:55.097: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-xmf92,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xmf92/replicasets/test-recreate-deployment-589c4bfd,UID:18a8b934-8bc4-11ea-99e8-0242ac110002,ResourceVersion:8197614,Generation:1,CreationTimestamp:2020-05-01 15:54:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 105037a7-8bc4-11ea-99e8-0242ac110002 0xc0023ed06f 0xc0023ed080}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 1 15:54:55.097: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 1 15:54:55.097: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-xmf92,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xmf92/replicasets/test-recreate-deployment-5bf7f65dc,UID:10e57791-8bc4-11ea-99e8-0242ac110002,ResourceVersion:8197604,Generation:2,CreationTimestamp:2020-05-01 15:54:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 105037a7-8bc4-11ea-99e8-0242ac110002 0xc0023ed140 0xc0023ed141}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 1 15:54:55.408: INFO: Pod "test-recreate-deployment-589c4bfd-ss798" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-ss798,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-xmf92,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xmf92/pods/test-recreate-deployment-589c4bfd-ss798,UID:18f4af0a-8bc4-11ea-99e8-0242ac110002,ResourceVersion:8197620,Generation:0,CreationTimestamp:2020-05-01 15:54:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 18a8b934-8bc4-11ea-99e8-0242ac110002 0xc0022eb9ef 0xc0022eba00}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jjzg5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjzg5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jjzg5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022eba70} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022eba90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:54:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:54:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:54:54 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:54:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-01 15:54:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:54:55.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-xmf92" for this suite. May 1 15:55:01.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:55:02.630: INFO: namespace: e2e-tests-deployment-xmf92, resource: bindings, ignored listing per whitelist May 1 15:55:02.654: INFO: namespace e2e-tests-deployment-xmf92 deletion completed in 7.242744637s • [SLOW TEST:23.675 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:55:02.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-bxpnt May 1 15:55:09.565: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-bxpnt STEP: checking the pod's current state and verifying that restartCount is present May 1 15:55:09.568: INFO: Initial restart count of pod liveness-exec is 0 May 1 15:56:02.704: INFO: Restart count of pod e2e-tests-container-probe-bxpnt/liveness-exec is now 1 (53.136116953s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:56:02.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-bxpnt" for this suite. 
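Note: the liveness test above creates a pod whose exec probe runs "cat /tmp/health"; once the container removes that file the probe fails and the kubelet restarts the container, which is the RestartCount bump the log records. A minimal sketch of the pattern follows, built against the v1.13-era k8s.io/api types this suite uses (newer releases name the embedded Handler field ProbeHandler); the pod name, image and timings are assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// livenessExecPod creates /tmp/health, lets the probe succeed for a while,
// then removes the file so the probe fails and the container is restarted.
func livenessExecPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec-example"}, // hypothetical name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "liveness",
				Image:   "busybox", // assumption
				Command: []string{"sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
}

func main() { fmt.Println(livenessExecPod().Name) }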
May 1 15:56:13.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:56:13.277: INFO: namespace: e2e-tests-container-probe-bxpnt, resource: bindings, ignored listing per whitelist May 1 15:56:13.308: INFO: namespace e2e-tests-container-probe-bxpnt deletion completed in 10.237056086s • [SLOW TEST:70.654 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:56:13.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode May 1 15:56:13.652: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-rrqmm" to be "success or failure" May 1 15:56:13.718: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 65.993108ms May 1 15:56:15.981: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.329412688s May 1 15:56:18.202: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.550738477s May 1 15:56:20.207: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.554962565s May 1 15:56:22.211: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.559132181s May 1 15:56:24.214: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 10.562886756s May 1 15:56:26.219: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.567144373s STEP: Saw pod success May 1 15:56:26.219: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 1 15:56:26.221: INFO: Trying to get logs from node hunter-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 1 15:56:26.267: INFO: Waiting for pod pod-host-path-test to disappear May 1 15:56:26.424: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:56:26.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-rrqmm" for this suite. 
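Note: the HostPath test above mounts a directory from the node and checks the mode of the mount point from inside the container. A rough sketch of that kind of pod follows; the path, names and image are illustrative assumptions, not values from the log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPathPod mounts a node-local directory so the container can report the
// mode of the mount point in its logs.
func hostPathPod() *corev1.Pod {
	hostPathType := corev1.HostPathDirectoryOrCreate
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container-1",
				Image:   "busybox", // assumption
				Command: []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{
						Path: "/tmp/hostpath-example", // node-local directory, assumption
						Type: &hostPathType,
					},
				},
			}},
		},
	}
}

func main() { fmt.Println(hostPathPod().Name) }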
May 1 15:56:32.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:56:32.897: INFO: namespace: e2e-tests-hostpath-rrqmm, resource: bindings, ignored listing per whitelist May 1 15:56:32.918: INFO: namespace e2e-tests-hostpath-rrqmm deletion completed in 6.490638938s • [SLOW TEST:19.610 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:56:32.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-542d7cdb-8bc4-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume secrets May 1 15:56:33.053: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-542f6061-8bc4-11ea-acf7-0242ac110017" in namespace "e2e-tests-projected-x455j" to be "success or failure" May 1 15:56:33.076: INFO: Pod "pod-projected-secrets-542f6061-8bc4-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 22.646865ms May 1 15:56:35.262: INFO: Pod "pod-projected-secrets-542f6061-8bc4-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209029715s May 1 15:56:37.266: INFO: Pod "pod-projected-secrets-542f6061-8bc4-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.213237773s STEP: Saw pod success May 1 15:56:37.266: INFO: Pod "pod-projected-secrets-542f6061-8bc4-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:56:37.269: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-542f6061-8bc4-11ea-acf7-0242ac110017 container projected-secret-volume-test: STEP: delete the pod May 1 15:56:37.359: INFO: Waiting for pod pod-projected-secrets-542f6061-8bc4-11ea-acf7-0242ac110017 to disappear May 1 15:56:37.525: INFO: Pod pod-projected-secrets-542f6061-8bc4-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:56:37.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-x455j" for this suite. 
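The pod-projected-secrets pod above consumes a Secret through a projected volume. A minimal sketch, with the Secret name, key, and mount path chosen here as placeholders:

---
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-test
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    # read the projected key back out so success can be checked from the container log
    command: ["/bin/sh", "-c", "cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test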
May 1 15:56:43.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:56:43.619: INFO: namespace: e2e-tests-projected-x455j, resource: bindings, ignored listing per whitelist May 1 15:56:43.628: INFO: namespace e2e-tests-projected-x455j deletion completed in 6.09885595s • [SLOW TEST:10.710 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:56:43.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 1 15:56:44.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-kv8g7' May 1 15:56:44.373: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 1 15:56:44.373: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 May 1 15:56:46.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-kv8g7' May 1 15:56:47.142: INFO: stderr: "" May 1 15:56:47.142: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:56:47.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-kv8g7" for this suite. 
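As the stderr above notes, kubectl run with the deployment generator is deprecated. A Deployment manifest roughly equivalent to what the command generated might look like this; the label key and replica count are assumptions, and only the name and image come from the log:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine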
May 1 15:56:53.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:56:53.603: INFO: namespace: e2e-tests-kubectl-kv8g7, resource: bindings, ignored listing per whitelist May 1 15:56:53.630: INFO: namespace e2e-tests-kubectl-kv8g7 deletion completed in 6.259633021s • [SLOW TEST:10.002 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:56:53.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 1 15:56:54.108: INFO: Waiting up to 5m0s for pod "downwardapi-volume-60bc4856-8bc4-11ea-acf7-0242ac110017" in namespace "e2e-tests-projected-tbcch" to be "success or failure" May 1 15:56:54.328: INFO: Pod "downwardapi-volume-60bc4856-8bc4-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 220.26417ms May 1 15:56:56.332: INFO: Pod "downwardapi-volume-60bc4856-8bc4-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224111654s May 1 15:56:58.557: INFO: Pod "downwardapi-volume-60bc4856-8bc4-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.449236s May 1 15:57:00.564: INFO: Pod "downwardapi-volume-60bc4856-8bc4-11ea-acf7-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 6.456642172s May 1 15:57:02.568: INFO: Pod "downwardapi-volume-60bc4856-8bc4-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.459942669s STEP: Saw pod success May 1 15:57:02.568: INFO: Pod "downwardapi-volume-60bc4856-8bc4-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 15:57:02.570: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-60bc4856-8bc4-11ea-acf7-0242ac110017 container client-container: STEP: delete the pod May 1 15:57:02.636: INFO: Waiting for pod downwardapi-volume-60bc4856-8bc4-11ea-acf7-0242ac110017 to disappear May 1 15:57:02.675: INFO: Pod downwardapi-volume-60bc4856-8bc4-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:57:02.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-tbcch" for this suite. May 1 15:57:14.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:57:15.140: INFO: namespace: e2e-tests-projected-tbcch, resource: bindings, ignored listing per whitelist May 1 15:57:15.858: INFO: namespace e2e-tests-projected-tbcch deletion completed in 13.180475373s • [SLOW TEST:22.228 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:57:15.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-rjbl STEP: Creating a pod to test atomic-volume-subpath May 1 15:57:16.604: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-rjbl" in namespace "e2e-tests-subpath-6nr7m" to be "success or failure" May 1 15:57:16.759: INFO: Pod "pod-subpath-test-configmap-rjbl": Phase="Pending", Reason="", readiness=false. Elapsed: 155.100966ms May 1 15:57:18.837: INFO: Pod "pod-subpath-test-configmap-rjbl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233034134s May 1 15:57:20.891: INFO: Pod "pod-subpath-test-configmap-rjbl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.286916529s May 1 15:57:22.963: INFO: Pod "pod-subpath-test-configmap-rjbl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.359090488s May 1 15:57:24.968: INFO: Pod "pod-subpath-test-configmap-rjbl": Phase="Running", Reason="", readiness=false. 
Elapsed: 8.363629711s May 1 15:57:26.973: INFO: Pod "pod-subpath-test-configmap-rjbl": Phase="Running", Reason="", readiness=false. Elapsed: 10.368269859s May 1 15:57:28.977: INFO: Pod "pod-subpath-test-configmap-rjbl": Phase="Running", Reason="", readiness=false. Elapsed: 12.372773182s May 1 15:57:30.982: INFO: Pod "pod-subpath-test-configmap-rjbl": Phase="Running", Reason="", readiness=false. Elapsed: 14.377521322s May 1 15:57:32.986: INFO: Pod "pod-subpath-test-configmap-rjbl": Phase="Running", Reason="", readiness=false. Elapsed: 16.381938045s May 1 15:57:34.993: INFO: Pod "pod-subpath-test-configmap-rjbl": Phase="Running", Reason="", readiness=false. Elapsed: 18.388473491s May 1 15:57:36.997: INFO: Pod "pod-subpath-test-configmap-rjbl": Phase="Running", Reason="", readiness=false. Elapsed: 20.393027696s May 1 15:57:39.001: INFO: Pod "pod-subpath-test-configmap-rjbl": Phase="Running", Reason="", readiness=false. Elapsed: 22.396555819s May 1 15:57:41.005: INFO: Pod "pod-subpath-test-configmap-rjbl": Phase="Running", Reason="", readiness=false. Elapsed: 24.4002565s May 1 15:57:43.008: INFO: Pod "pod-subpath-test-configmap-rjbl": Phase="Running", Reason="", readiness=false. Elapsed: 26.403968327s May 1 15:57:45.012: INFO: Pod "pod-subpath-test-configmap-rjbl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.407652075s STEP: Saw pod success May 1 15:57:45.012: INFO: Pod "pod-subpath-test-configmap-rjbl" satisfied condition "success or failure" May 1 15:57:45.015: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-rjbl container test-container-subpath-configmap-rjbl: STEP: delete the pod May 1 15:57:45.068: INFO: Waiting for pod pod-subpath-test-configmap-rjbl to disappear May 1 15:57:45.082: INFO: Pod pod-subpath-test-configmap-rjbl no longer exists STEP: Deleting pod pod-subpath-test-configmap-rjbl May 1 15:57:45.082: INFO: Deleting pod "pod-subpath-test-configmap-rjbl" in namespace "e2e-tests-subpath-6nr7m" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:57:45.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-6nr7m" for this suite. 
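The pod-subpath-test-configmap pod above mounts a single ConfigMap key over a file that already exists in the image, using subPath. A sketch of the wiring; the key, paths, and image are illustrative rather than taken from the log:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-configmap
data:
  index.html: "content provided by the configmap"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap
spec:
  containers:
  - name: test-container-subpath
    image: docker.io/library/nginx:1.14-alpine
    volumeMounts:
    # only the selected key is mounted, directly over the pre-existing file
    - name: config
      mountPath: /usr/share/nginx/html/index.html
      subPath: index.html
  volumes:
  - name: config
    configMap:
      name: subpath-configmap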
May 1 15:57:51.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:57:51.185: INFO: namespace: e2e-tests-subpath-6nr7m, resource: bindings, ignored listing per whitelist May 1 15:57:51.185: INFO: namespace e2e-tests-subpath-6nr7m deletion completed in 6.096935928s • [SLOW TEST:35.326 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:57:51.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-97ddj STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-97ddj STEP: Deleting pre-stop pod May 1 15:58:10.039: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 15:58:10.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-97ddj" for this suite. 
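The prestop counter in the server's JSON above is incremented because the tester pod runs a preStop hook that reports back to the server before it is killed. A hedged sketch of such a hook; the server address, port, and path are placeholders, not values from the log:

---
apiVersion: v1
kind: Pod
metadata:
  name: tester
spec:
  containers:
  - name: tester
    image: busybox
    command: ["/bin/sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        exec:
          # notify the server pod just before this container is stopped
          command: ["/bin/sh", "-c", "wget -q -O- http://server:8080/write?prestop=1"]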
May 1 15:58:48.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:58:48.151: INFO: namespace: e2e-tests-prestop-97ddj, resource: bindings, ignored listing per whitelist May 1 15:58:48.168: INFO: namespace e2e-tests-prestop-97ddj deletion completed in 38.101306056s • [SLOW TEST:56.983 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 15:58:48.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-vwv4x [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StaefulSet May 1 15:58:48.358: INFO: Found 0 stateful pods, waiting for 3 May 1 15:58:58.363: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 1 15:58:58.363: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 1 15:58:58.363: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 1 15:59:08.363: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 1 15:59:08.364: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 1 15:59:08.364: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 1 15:59:08.394: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 1 15:59:18.727: INFO: Updating stateful set ss2 May 1 15:59:18.783: INFO: Waiting for Pod e2e-tests-statefulset-vwv4x/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted May 1 15:59:29.043: INFO: Found 2 stateful pods, waiting for 3 May 1 15:59:39.280: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 1 15:59:39.280: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 1 15:59:39.280: 
INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 1 15:59:39.303: INFO: Updating stateful set ss2 May 1 15:59:39.511: INFO: Waiting for Pod e2e-tests-statefulset-vwv4x/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 1 15:59:49.517: INFO: Waiting for Pod e2e-tests-statefulset-vwv4x/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 1 15:59:59.534: INFO: Updating stateful set ss2 May 1 15:59:59.555: INFO: Waiting for StatefulSet e2e-tests-statefulset-vwv4x/ss2 to complete update May 1 15:59:59.555: INFO: Waiting for Pod e2e-tests-statefulset-vwv4x/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 1 16:00:09.679: INFO: Deleting all statefulset in ns e2e-tests-statefulset-vwv4x May 1 16:00:09.681: INFO: Scaling statefulset ss2 to 0 May 1 16:00:29.833: INFO: Waiting for statefulset status.replicas updated to 0 May 1 16:00:29.836: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:00:29.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-vwv4x" for this suite. May 1 16:00:37.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:00:38.054: INFO: namespace: e2e-tests-statefulset-vwv4x, resource: bindings, ignored listing per whitelist May 1 16:00:38.060: INFO: namespace e2e-tests-statefulset-vwv4x deletion completed in 8.092199613s • [SLOW TEST:109.891 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:00:38.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
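The canary and phased rolling updates in the StatefulSet test above are driven by the RollingUpdate partition: only pods whose ordinal is greater than or equal to the partition move to the new revision. A sketch of the relevant spec, with replica count and labels assumed:

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  replicas: 3
  serviceName: test
  selector:
    matchLabels:
      app: ss2
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      # with partition 2, only ss2-2 is updated to the new template: the canary
      partition: 2

Lowering the partition afterwards rolls the remaining pods to the new revision in phases, which matches the update sequence logged above.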
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:00:44.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-6hj4s" for this suite. May 1 16:00:50.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:00:50.748: INFO: namespace: e2e-tests-namespaces-6hj4s, resource: bindings, ignored listing per whitelist May 1 16:00:50.752: INFO: namespace e2e-tests-namespaces-6hj4s deletion completed in 6.095104077s STEP: Destroying namespace "e2e-tests-nsdeletetest-xsm4l" for this suite. May 1 16:00:50.754: INFO: Namespace e2e-tests-nsdeletetest-xsm4l was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-ffft4" for this suite. May 1 16:00:56.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:00:56.834: INFO: namespace: e2e-tests-nsdeletetest-ffft4, resource: bindings, ignored listing per whitelist May 1 16:00:56.854: INFO: namespace e2e-tests-nsdeletetest-ffft4 deletion completed in 6.09980459s • [SLOW TEST:18.794 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:00:56.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-f1e001ce-8bc4-11ea-acf7-0242ac110017 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-f1e001ce-8bc4-11ea-acf7-0242ac110017 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:01:04.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-sdbk6" for this suite. 
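For the projected ConfigMap case above, the kubelet periodically resyncs projected volume contents, so an edit to the ConfigMap eventually becomes visible inside the running pod. A minimal consumer sketch, with names and paths assumed:

---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    # keep re-reading the projected file so the update can be observed
    command: ["/bin/sh", "-c", "while true; do cat /etc/projected-configmap-volume/data-1; sleep 5; done"]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: config-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-upd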
May 1 16:01:28.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:01:28.225: INFO: namespace: e2e-tests-projected-sdbk6, resource: bindings, ignored listing per whitelist May 1 16:01:28.281: INFO: namespace e2e-tests-projected-sdbk6 deletion completed in 24.231263408s • [SLOW TEST:31.427 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:01:28.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-046a55bd-8bc5-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume configMaps May 1 16:01:28.814: INFO: Waiting up to 5m0s for pod "pod-configmaps-046eb529-8bc5-11ea-acf7-0242ac110017" in namespace "e2e-tests-configmap-grn5l" to be "success or failure" May 1 16:01:29.087: INFO: Pod "pod-configmaps-046eb529-8bc5-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 273.152861ms May 1 16:01:31.128: INFO: Pod "pod-configmaps-046eb529-8bc5-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.314398377s May 1 16:01:33.290: INFO: Pod "pod-configmaps-046eb529-8bc5-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.476360463s May 1 16:01:35.350: INFO: Pod "pod-configmaps-046eb529-8bc5-11ea-acf7-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 6.536507001s May 1 16:01:37.354: INFO: Pod "pod-configmaps-046eb529-8bc5-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.539919125s STEP: Saw pod success May 1 16:01:37.354: INFO: Pod "pod-configmaps-046eb529-8bc5-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:01:37.356: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-046eb529-8bc5-11ea-acf7-0242ac110017 container configmap-volume-test: STEP: delete the pod May 1 16:01:37.479: INFO: Waiting for pod pod-configmaps-046eb529-8bc5-11ea-acf7-0242ac110017 to disappear May 1 16:01:37.601: INFO: Pod pod-configmaps-046eb529-8bc5-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:01:37.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-grn5l" for this suite. 
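The defaultMode test above mounts a ConfigMap volume whose files are created with a non-default permission mode. A compact sketch; the mode value, key, and image are assumptions:

---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    # with defaultMode 0400 the mounted file should list as -r--------
    command: ["/bin/sh", "-c", "ls -l /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
      defaultMode: 0400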
May 1 16:01:45.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:01:46.162: INFO: namespace: e2e-tests-configmap-grn5l, resource: bindings, ignored listing per whitelist May 1 16:01:46.200: INFO: namespace e2e-tests-configmap-grn5l deletion completed in 8.595692997s • [SLOW TEST:17.919 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:01:46.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-0f19e6d1-8bc5-11ea-acf7-0242ac110017 STEP: Creating configMap with name cm-test-opt-upd-0f19e734-8bc5-11ea-acf7-0242ac110017 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-0f19e6d1-8bc5-11ea-acf7-0242ac110017 STEP: Updating configmap cm-test-opt-upd-0f19e734-8bc5-11ea-acf7-0242ac110017 STEP: Creating configMap with name cm-test-opt-create-0f19e760-8bc5-11ea-acf7-0242ac110017 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:03:19.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-p27p6" for this suite. 
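The optional-updates test above can delete one referenced ConfigMap and create another while the pod keeps running because the references are marked optional. A sketch of that volume stanza, with names assumed:

---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-optional
spec:
  containers:
  - name: volume-test
    image: busybox
    command: ["/bin/sh", "-c", "sleep 600"]
    volumeMounts:
    - name: cm-del
      mountPath: /etc/cm-volume-del
    - name: cm-create
      mountPath: /etc/cm-volume-create
  volumes:
  # optional: true lets the pod start and keep running even while a referenced ConfigMap is absent
  - name: cm-del
    configMap:
      name: cm-test-opt-del
      optional: true
  - name: cm-create
    configMap:
      name: cm-test-opt-create
      optional: true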
May 1 16:03:45.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:03:45.098: INFO: namespace: e2e-tests-configmap-p27p6, resource: bindings, ignored listing per whitelist May 1 16:03:45.105: INFO: namespace e2e-tests-configmap-p27p6 deletion completed in 26.100595026s • [SLOW TEST:118.904 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:03:45.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:03:52.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-m49c4" for this suite. 
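Adoption in the ReplicationController test above works because the controller's selector matches the label already carried by the orphan pod, so the controller takes ownership rather than creating a replacement. A sketch of the pair; the label key and value are assumptions:

---
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: docker.io/library/nginx:1.14-alpine
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  # the selector matches the orphan pod's label, so that pod is adopted instead of replaced
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/nginx:1.14-alpine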
May 1 16:04:16.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:04:16.405: INFO: namespace: e2e-tests-replication-controller-m49c4, resource: bindings, ignored listing per whitelist May 1 16:04:16.423: INFO: namespace e2e-tests-replication-controller-m49c4 deletion completed in 24.125558346s • [SLOW TEST:31.318 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:04:16.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 1 16:04:16.595: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-sxn5r,SelfLink:/api/v1/namespaces/e2e-tests-watch-sxn5r/configmaps/e2e-watch-test-resource-version,UID:68754b73-8bc5-11ea-99e8-0242ac110002,ResourceVersion:8199357,Generation:0,CreationTimestamp:2020-05-01 16:04:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 1 16:04:16.596: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-sxn5r,SelfLink:/api/v1/namespaces/e2e-tests-watch-sxn5r/configmaps/e2e-watch-test-resource-version,UID:68754b73-8bc5-11ea-99e8-0242ac110002,ResourceVersion:8199358,Generation:0,CreationTimestamp:2020-05-01 16:04:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:04:16.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-sxn5r" for this suite. 
May 1 16:04:22.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:04:22.680: INFO: namespace: e2e-tests-watch-sxn5r, resource: bindings, ignored listing per whitelist May 1 16:04:22.705: INFO: namespace e2e-tests-watch-sxn5r deletion completed in 6.106002723s • [SLOW TEST:6.282 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:04:22.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 1 16:04:22.911: INFO: Waiting up to 5m0s for pod "downward-api-6c3e113b-8bc5-11ea-acf7-0242ac110017" in namespace "e2e-tests-downward-api-gcbjw" to be "success or failure" May 1 16:04:23.120: INFO: Pod "downward-api-6c3e113b-8bc5-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 208.952021ms May 1 16:04:25.124: INFO: Pod "downward-api-6c3e113b-8bc5-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21338718s May 1 16:04:27.164: INFO: Pod "downward-api-6c3e113b-8bc5-11ea-acf7-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.253445394s May 1 16:04:29.424: INFO: Pod "downward-api-6c3e113b-8bc5-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.513571966s STEP: Saw pod success May 1 16:04:29.424: INFO: Pod "downward-api-6c3e113b-8bc5-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:04:29.427: INFO: Trying to get logs from node hunter-worker pod downward-api-6c3e113b-8bc5-11ea-acf7-0242ac110017 container dapi-container: STEP: delete the pod May 1 16:04:29.570: INFO: Waiting for pod downward-api-6c3e113b-8bc5-11ea-acf7-0242ac110017 to disappear May 1 16:04:29.625: INFO: Pod downward-api-6c3e113b-8bc5-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:04:29.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-gcbjw" for this suite. 
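The downward-api pod above surfaces its own CPU and memory limits and requests as environment variables through resourceFieldRef. A minimal sketch; the resource values and variable names are illustrative:

---
apiVersion: v1
kind: Pod
metadata:
  name: downward-api
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    # print the injected variables so they show up in the container log
    command: ["/bin/sh", "-c", "env | grep -E 'CPU|MEMORY'"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory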
May 1 16:04:35.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:04:35.885: INFO: namespace: e2e-tests-downward-api-gcbjw, resource: bindings, ignored listing per whitelist May 1 16:04:35.885: INFO: namespace e2e-tests-downward-api-gcbjw deletion completed in 6.256457181s • [SLOW TEST:13.179 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:04:35.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed May 1 16:04:42.206: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-741e25a5-8bc5-11ea-acf7-0242ac110017", GenerateName:"", Namespace:"e2e-tests-pods-9pt9k", SelfLink:"/api/v1/namespaces/e2e-tests-pods-9pt9k/pods/pod-submit-remove-741e25a5-8bc5-11ea-acf7-0242ac110017", UID:"74203c12-8bc5-11ea-99e8-0242ac110002", ResourceVersion:"8199447", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63723945876, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"117249107"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-5cl6s", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0024ec400), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5cl6s", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00275efd8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00215cae0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00275f020)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00275f040)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00275f048), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00275f04c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945876, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945880, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945880, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945876, loc:(*time.Location)(0x7950ac0)}}, 
Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.58", StartTime:(*v1.Time)(0xc00272a8e0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00272a900), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://51cfe1793f1c07c3a2ab205202f6bb3ac169be3f5f8809962467cd87ab5a4cbd"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 1 16:04:47.285: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:04:47.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-9pt9k" for this suite. May 1 16:04:53.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:04:53.463: INFO: namespace: e2e-tests-pods-9pt9k, resource: bindings, ignored listing per whitelist May 1 16:04:53.484: INFO: namespace e2e-tests-pods-9pt9k deletion completed in 6.12734301s • [SLOW TEST:17.598 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:04:53.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-rqzpf I0501 16:04:53.610653 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-rqzpf, replica count: 1 I0501 16:04:54.661910 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 16:04:55.662176 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 16:04:56.662386 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady I0501 16:04:57.662610 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 1 16:04:57.815: INFO: Created: latency-svc-pd6gn May 1 16:04:57.909: INFO: Got endpoints: latency-svc-pd6gn [146.879858ms] May 1 16:04:57.999: INFO: Created: latency-svc-k2c6x May 1 16:04:58.047: INFO: Got endpoints: latency-svc-k2c6x [137.758759ms] May 1 16:04:58.056: INFO: Created: latency-svc-vgxwh May 1 16:04:58.073: INFO: Got endpoints: latency-svc-vgxwh [164.086489ms] May 1 16:04:58.110: INFO: Created: latency-svc-xmn2m May 1 16:04:58.123: INFO: Got endpoints: latency-svc-xmn2m [213.703796ms] May 1 16:04:58.146: INFO: Created: latency-svc-hb2hz May 1 16:04:58.203: INFO: Got endpoints: latency-svc-hb2hz [293.839591ms] May 1 16:04:58.256: INFO: Created: latency-svc-27j8t May 1 16:04:58.272: INFO: Got endpoints: latency-svc-27j8t [362.5812ms] May 1 16:04:58.605: INFO: Created: latency-svc-8mznz May 1 16:04:58.610: INFO: Got endpoints: latency-svc-8mznz [700.154662ms] May 1 16:04:58.767: INFO: Created: latency-svc-4hsrm May 1 16:04:58.784: INFO: Got endpoints: latency-svc-4hsrm [874.653927ms] May 1 16:04:58.946: INFO: Created: latency-svc-sxtz5 May 1 16:04:58.950: INFO: Got endpoints: latency-svc-sxtz5 [1.040929878s] May 1 16:04:59.157: INFO: Created: latency-svc-hgp9w May 1 16:04:59.227: INFO: Got endpoints: latency-svc-hgp9w [1.317609036s] May 1 16:04:59.317: INFO: Created: latency-svc-k9sj7 May 1 16:04:59.327: INFO: Got endpoints: latency-svc-k9sj7 [1.417131294s] May 1 16:04:59.385: INFO: Created: latency-svc-rc4vt May 1 16:04:59.413: INFO: Got endpoints: latency-svc-rc4vt [1.503709798s] May 1 16:04:59.461: INFO: Created: latency-svc-cnhxh May 1 16:04:59.479: INFO: Got endpoints: latency-svc-cnhxh [1.569752019s] May 1 16:04:59.505: INFO: Created: latency-svc-s8bwt May 1 16:04:59.522: INFO: Got endpoints: latency-svc-s8bwt [1.612631771s] May 1 16:04:59.610: INFO: Created: latency-svc-jcngm May 1 16:04:59.631: INFO: Got endpoints: latency-svc-jcngm [1.72156109s] May 1 16:04:59.662: INFO: Created: latency-svc-nqxbc May 1 16:04:59.692: INFO: Got endpoints: latency-svc-nqxbc [1.781973052s] May 1 16:04:59.749: INFO: Created: latency-svc-fwqks May 1 16:04:59.751: INFO: Got endpoints: latency-svc-fwqks [1.704145322s] May 1 16:04:59.780: INFO: Created: latency-svc-z8h8l May 1 16:04:59.791: INFO: Got endpoints: latency-svc-z8h8l [1.71745025s] May 1 16:04:59.818: INFO: Created: latency-svc-rxkhb May 1 16:04:59.842: INFO: Got endpoints: latency-svc-rxkhb [1.718993133s] May 1 16:04:59.916: INFO: Created: latency-svc-5zbj5 May 1 16:04:59.919: INFO: Got endpoints: latency-svc-5zbj5 [1.716108872s] May 1 16:04:59.960: INFO: Created: latency-svc-x5zjd May 1 16:05:00.002: INFO: Got endpoints: latency-svc-x5zjd [1.730044457s] May 1 16:05:00.078: INFO: Created: latency-svc-k4r4v May 1 16:05:00.104: INFO: Got endpoints: latency-svc-k4r4v [1.493956938s] May 1 16:05:00.164: INFO: Created: latency-svc-m44kd May 1 16:05:00.257: INFO: Got endpoints: latency-svc-m44kd [1.472834938s] May 1 16:05:00.259: INFO: Created: latency-svc-k4fpq May 1 16:05:00.284: INFO: Got endpoints: latency-svc-k4fpq [1.333908298s] May 1 16:05:00.357: INFO: Created: latency-svc-dc68p May 1 16:05:00.467: INFO: Got endpoints: latency-svc-dc68p [1.239479507s] May 1 16:05:00.478: INFO: Created: latency-svc-zfbgp May 1 16:05:00.506: INFO: Got endpoints: latency-svc-zfbgp [1.179490907s] May 1 16:05:00.542: INFO: Created: latency-svc-x8n6w May 1 
16:05:00.568: INFO: Got endpoints: latency-svc-x8n6w [1.15425655s] May 1 16:05:00.627: INFO: Created: latency-svc-kghgp May 1 16:05:00.645: INFO: Got endpoints: latency-svc-kghgp [1.16597521s] May 1 16:05:00.671: INFO: Created: latency-svc-vvpdk May 1 16:05:00.688: INFO: Got endpoints: latency-svc-vvpdk [119.965775ms] May 1 16:05:00.797: INFO: Created: latency-svc-ln5q6 May 1 16:05:00.892: INFO: Got endpoints: latency-svc-ln5q6 [1.36962506s] May 1 16:05:01.290: INFO: Created: latency-svc-g2xmj May 1 16:05:01.335: INFO: Got endpoints: latency-svc-g2xmj [1.703443804s] May 1 16:05:01.510: INFO: Created: latency-svc-5tsfk May 1 16:05:01.539: INFO: Got endpoints: latency-svc-5tsfk [1.847046566s] May 1 16:05:01.783: INFO: Created: latency-svc-fsbpg May 1 16:05:01.838: INFO: Got endpoints: latency-svc-fsbpg [2.08699047s] May 1 16:05:01.922: INFO: Created: latency-svc-vfd74 May 1 16:05:01.941: INFO: Got endpoints: latency-svc-vfd74 [2.149839029s] May 1 16:05:02.085: INFO: Created: latency-svc-6bcwb May 1 16:05:02.088: INFO: Got endpoints: latency-svc-6bcwb [2.245318076s] May 1 16:05:02.176: INFO: Created: latency-svc-n69lm May 1 16:05:02.256: INFO: Got endpoints: latency-svc-n69lm [2.337002905s] May 1 16:05:02.273: INFO: Created: latency-svc-74fbb May 1 16:05:02.288: INFO: Got endpoints: latency-svc-74fbb [2.286166981s] May 1 16:05:02.329: INFO: Created: latency-svc-dvlc6 May 1 16:05:02.343: INFO: Got endpoints: latency-svc-dvlc6 [2.23902689s] May 1 16:05:02.413: INFO: Created: latency-svc-4cdf8 May 1 16:05:02.416: INFO: Got endpoints: latency-svc-4cdf8 [2.159213514s] May 1 16:05:02.476: INFO: Created: latency-svc-8g542 May 1 16:05:02.694: INFO: Got endpoints: latency-svc-8g542 [2.409745466s] May 1 16:05:02.698: INFO: Created: latency-svc-qnjjd May 1 16:05:02.728: INFO: Got endpoints: latency-svc-qnjjd [2.26134207s] May 1 16:05:02.940: INFO: Created: latency-svc-jjfkx May 1 16:05:02.944: INFO: Got endpoints: latency-svc-jjfkx [2.437548351s] May 1 16:05:03.030: INFO: Created: latency-svc-2zm2h May 1 16:05:03.101: INFO: Got endpoints: latency-svc-2zm2h [2.455912233s] May 1 16:05:03.106: INFO: Created: latency-svc-zbmbx May 1 16:05:03.160: INFO: Got endpoints: latency-svc-zbmbx [2.471879052s] May 1 16:05:03.448: INFO: Created: latency-svc-n7crr May 1 16:05:03.532: INFO: Got endpoints: latency-svc-n7crr [2.640025836s] May 1 16:05:03.695: INFO: Created: latency-svc-vrnkk May 1 16:05:03.717: INFO: Got endpoints: latency-svc-vrnkk [2.382636002s] May 1 16:05:03.892: INFO: Created: latency-svc-ng9d7 May 1 16:05:03.896: INFO: Got endpoints: latency-svc-ng9d7 [2.357403482s] May 1 16:05:04.216: INFO: Created: latency-svc-z4dcv May 1 16:05:04.221: INFO: Got endpoints: latency-svc-z4dcv [2.382677675s] May 1 16:05:04.454: INFO: Created: latency-svc-2bnzb May 1 16:05:04.492: INFO: Got endpoints: latency-svc-2bnzb [2.550738663s] May 1 16:05:04.682: INFO: Created: latency-svc-5nqdr May 1 16:05:04.862: INFO: Got endpoints: latency-svc-5nqdr [2.773912714s] May 1 16:05:04.891: INFO: Created: latency-svc-mmt6b May 1 16:05:04.911: INFO: Got endpoints: latency-svc-mmt6b [2.654375451s] May 1 16:05:05.130: INFO: Created: latency-svc-r24t4 May 1 16:05:05.281: INFO: Got endpoints: latency-svc-r24t4 [2.992878076s] May 1 16:05:05.353: INFO: Created: latency-svc-4m5cn May 1 16:05:05.466: INFO: Got endpoints: latency-svc-4m5cn [3.123479292s] May 1 16:05:05.523: INFO: Created: latency-svc-5pkpd May 1 16:05:05.552: INFO: Got endpoints: latency-svc-5pkpd [3.135825222s] May 1 16:05:05.632: INFO: Created: latency-svc-hrnhf May 1 
16:05:05.636: INFO: Got endpoints: latency-svc-hrnhf [2.941931011s] May 1 16:05:05.658: INFO: Created: latency-svc-rg2f8 May 1 16:05:05.667: INFO: Got endpoints: latency-svc-rg2f8 [2.938356725s] May 1 16:05:05.692: INFO: Created: latency-svc-87nwr May 1 16:05:05.697: INFO: Got endpoints: latency-svc-87nwr [2.753068537s] May 1 16:05:05.784: INFO: Created: latency-svc-58rrh May 1 16:05:05.787: INFO: Got endpoints: latency-svc-58rrh [2.685998772s] May 1 16:05:05.814: INFO: Created: latency-svc-9ljwf May 1 16:05:05.823: INFO: Got endpoints: latency-svc-9ljwf [2.663721508s] May 1 16:05:05.863: INFO: Created: latency-svc-r8r8k May 1 16:05:05.942: INFO: Got endpoints: latency-svc-r8r8k [2.41045118s] May 1 16:05:05.958: INFO: Created: latency-svc-qdscb May 1 16:05:05.993: INFO: Got endpoints: latency-svc-qdscb [2.274979162s] May 1 16:05:06.037: INFO: Created: latency-svc-zck24 May 1 16:05:06.083: INFO: Got endpoints: latency-svc-zck24 [2.186618165s] May 1 16:05:06.181: INFO: Created: latency-svc-h666b May 1 16:05:06.275: INFO: Got endpoints: latency-svc-h666b [2.053522416s] May 1 16:05:06.339: INFO: Created: latency-svc-8p6hv May 1 16:05:06.358: INFO: Got endpoints: latency-svc-8p6hv [1.86645708s] May 1 16:05:06.826: INFO: Created: latency-svc-kvqwb May 1 16:05:06.970: INFO: Got endpoints: latency-svc-kvqwb [2.108537217s] May 1 16:05:06.986: INFO: Created: latency-svc-b8ds2 May 1 16:05:07.018: INFO: Got endpoints: latency-svc-b8ds2 [2.10714403s] May 1 16:05:07.256: INFO: Created: latency-svc-vmwn6 May 1 16:05:07.417: INFO: Got endpoints: latency-svc-vmwn6 [2.135758623s] May 1 16:05:07.653: INFO: Created: latency-svc-bjc6c May 1 16:05:07.682: INFO: Got endpoints: latency-svc-bjc6c [2.215657733s] May 1 16:05:07.732: INFO: Created: latency-svc-plsqw May 1 16:05:07.820: INFO: Got endpoints: latency-svc-plsqw [2.267784606s] May 1 16:05:07.859: INFO: Created: latency-svc-bdplb May 1 16:05:08.017: INFO: Got endpoints: latency-svc-bdplb [2.381103984s] May 1 16:05:08.033: INFO: Created: latency-svc-h2bv8 May 1 16:05:08.540: INFO: Created: latency-svc-7dqt6 May 1 16:05:08.543: INFO: Got endpoints: latency-svc-h2bv8 [2.876036298s] May 1 16:05:08.607: INFO: Got endpoints: latency-svc-7dqt6 [2.910289461s] May 1 16:05:08.818: INFO: Created: latency-svc-42nx2 May 1 16:05:08.859: INFO: Got endpoints: latency-svc-42nx2 [3.071389373s] May 1 16:05:09.069: INFO: Created: latency-svc-9z277 May 1 16:05:09.450: INFO: Got endpoints: latency-svc-9z277 [3.626512775s] May 1 16:05:09.452: INFO: Created: latency-svc-w6r68 May 1 16:05:09.458: INFO: Got endpoints: latency-svc-w6r68 [3.515628514s] May 1 16:05:09.593: INFO: Created: latency-svc-xbbfg May 1 16:05:09.603: INFO: Got endpoints: latency-svc-xbbfg [3.609953616s] May 1 16:05:09.630: INFO: Created: latency-svc-lbzjd May 1 16:05:09.639: INFO: Got endpoints: latency-svc-lbzjd [3.555607751s] May 1 16:05:09.798: INFO: Created: latency-svc-llk2t May 1 16:05:09.800: INFO: Got endpoints: latency-svc-llk2t [3.525695428s] May 1 16:05:09.867: INFO: Created: latency-svc-6x8vt May 1 16:05:09.879: INFO: Got endpoints: latency-svc-6x8vt [3.52045944s] May 1 16:05:09.958: INFO: Created: latency-svc-467dd May 1 16:05:09.960: INFO: Got endpoints: latency-svc-467dd [2.989937826s] May 1 16:05:10.050: INFO: Created: latency-svc-jhltb May 1 16:05:10.125: INFO: Got endpoints: latency-svc-jhltb [3.107077619s] May 1 16:05:10.132: INFO: Created: latency-svc-fvts6 May 1 16:05:10.150: INFO: Got endpoints: latency-svc-fvts6 [2.732673648s] May 1 16:05:10.183: INFO: Created: latency-svc-w2cs7 May 1 
16:05:10.198: INFO: Got endpoints: latency-svc-w2cs7 [2.515680289s] May 1 16:05:10.276: INFO: Created: latency-svc-kcj45 May 1 16:05:10.278: INFO: Got endpoints: latency-svc-kcj45 [2.458088646s] May 1 16:05:10.361: INFO: Created: latency-svc-xzhmm May 1 16:05:10.372: INFO: Got endpoints: latency-svc-xzhmm [2.354458616s] May 1 16:05:10.425: INFO: Created: latency-svc-bdhk7 May 1 16:05:10.428: INFO: Got endpoints: latency-svc-bdhk7 [1.884910879s] May 1 16:05:10.456: INFO: Created: latency-svc-f62p8 May 1 16:05:10.474: INFO: Got endpoints: latency-svc-f62p8 [1.866761577s] May 1 16:05:10.500: INFO: Created: latency-svc-lp9dg May 1 16:05:10.517: INFO: Got endpoints: latency-svc-lp9dg [1.658088498s] May 1 16:05:10.592: INFO: Created: latency-svc-ppsk4 May 1 16:05:10.595: INFO: Got endpoints: latency-svc-ppsk4 [1.145358828s] May 1 16:05:10.624: INFO: Created: latency-svc-p9s8x May 1 16:05:10.637: INFO: Got endpoints: latency-svc-p9s8x [1.17916222s] May 1 16:05:10.660: INFO: Created: latency-svc-5jrn4 May 1 16:05:10.674: INFO: Got endpoints: latency-svc-5jrn4 [1.071119097s] May 1 16:05:10.749: INFO: Created: latency-svc-q4hrc May 1 16:05:10.752: INFO: Got endpoints: latency-svc-q4hrc [1.112977836s] May 1 16:05:10.776: INFO: Created: latency-svc-f6qx7 May 1 16:05:10.794: INFO: Got endpoints: latency-svc-f6qx7 [993.660218ms] May 1 16:05:10.812: INFO: Created: latency-svc-j486d May 1 16:05:10.830: INFO: Got endpoints: latency-svc-j486d [951.522317ms] May 1 16:05:10.916: INFO: Created: latency-svc-wfm4z May 1 16:05:10.918: INFO: Got endpoints: latency-svc-wfm4z [958.02165ms] May 1 16:05:11.091: INFO: Created: latency-svc-gbx79 May 1 16:05:11.094: INFO: Got endpoints: latency-svc-gbx79 [969.038849ms] May 1 16:05:11.146: INFO: Created: latency-svc-qn6s5 May 1 16:05:11.169: INFO: Got endpoints: latency-svc-qn6s5 [1.018688294s] May 1 16:05:11.270: INFO: Created: latency-svc-mtcds May 1 16:05:11.271: INFO: Got endpoints: latency-svc-mtcds [1.073546683s] May 1 16:05:11.485: INFO: Created: latency-svc-zzhqv May 1 16:05:11.487: INFO: Got endpoints: latency-svc-zzhqv [1.208859932s] May 1 16:05:11.548: INFO: Created: latency-svc-sw6zt May 1 16:05:11.563: INFO: Got endpoints: latency-svc-sw6zt [1.191150745s] May 1 16:05:11.647: INFO: Created: latency-svc-q7x8k May 1 16:05:11.649: INFO: Got endpoints: latency-svc-q7x8k [1.221524377s] May 1 16:05:11.716: INFO: Created: latency-svc-9xs2v May 1 16:05:11.737: INFO: Got endpoints: latency-svc-9xs2v [1.263142343s] May 1 16:05:11.803: INFO: Created: latency-svc-l94m7 May 1 16:05:11.846: INFO: Got endpoints: latency-svc-l94m7 [1.328400216s] May 1 16:05:12.177: INFO: Created: latency-svc-8pglc May 1 16:05:12.184: INFO: Got endpoints: latency-svc-8pglc [1.588876091s] May 1 16:05:12.421: INFO: Created: latency-svc-n4pkv May 1 16:05:12.471: INFO: Got endpoints: latency-svc-n4pkv [1.833400416s] May 1 16:05:12.606: INFO: Created: latency-svc-mfvq6 May 1 16:05:12.673: INFO: Got endpoints: latency-svc-mfvq6 [1.999655192s] May 1 16:05:12.878: INFO: Created: latency-svc-f8fg4 May 1 16:05:12.925: INFO: Got endpoints: latency-svc-f8fg4 [2.173795299s] May 1 16:05:13.054: INFO: Created: latency-svc-wjwlr May 1 16:05:13.056: INFO: Got endpoints: latency-svc-wjwlr [2.262087725s] May 1 16:05:13.118: INFO: Created: latency-svc-q5nf9 May 1 16:05:13.135: INFO: Got endpoints: latency-svc-q5nf9 [2.304333785s] May 1 16:05:13.257: INFO: Created: latency-svc-krmzg May 1 16:05:13.273: INFO: Got endpoints: latency-svc-krmzg [2.355104528s] May 1 16:05:13.521: INFO: Created: latency-svc-4vc5z May 1 
16:05:13.538: INFO: Got endpoints: latency-svc-4vc5z [2.443655418s] May 1 16:05:13.767: INFO: Created: latency-svc-q99r4 May 1 16:05:13.772: INFO: Got endpoints: latency-svc-q99r4 [2.603568584s] May 1 16:05:14.334: INFO: Created: latency-svc-czvqr May 1 16:05:14.642: INFO: Got endpoints: latency-svc-czvqr [3.37081774s] May 1 16:05:14.803: INFO: Created: latency-svc-lkshm May 1 16:05:14.832: INFO: Got endpoints: latency-svc-lkshm [3.344278436s] May 1 16:05:14.966: INFO: Created: latency-svc-t58mx May 1 16:05:15.353: INFO: Got endpoints: latency-svc-t58mx [3.78982128s] May 1 16:05:15.570: INFO: Created: latency-svc-hfnrk May 1 16:05:15.820: INFO: Got endpoints: latency-svc-hfnrk [4.170608075s] May 1 16:05:15.882: INFO: Created: latency-svc-b7btr May 1 16:05:16.598: INFO: Got endpoints: latency-svc-b7btr [4.86076434s] May 1 16:05:16.602: INFO: Created: latency-svc-jb694 May 1 16:05:16.648: INFO: Got endpoints: latency-svc-jb694 [4.801940982s] May 1 16:05:17.011: INFO: Created: latency-svc-xgkww May 1 16:05:17.407: INFO: Got endpoints: latency-svc-xgkww [5.22220552s] May 1 16:05:17.701: INFO: Created: latency-svc-q7jpr May 1 16:05:17.720: INFO: Got endpoints: latency-svc-q7jpr [5.249238977s] May 1 16:05:18.121: INFO: Created: latency-svc-fpv92 May 1 16:05:18.264: INFO: Got endpoints: latency-svc-fpv92 [5.590114316s] May 1 16:05:18.470: INFO: Created: latency-svc-rgl9k May 1 16:05:18.749: INFO: Got endpoints: latency-svc-rgl9k [5.823636234s] May 1 16:05:18.924: INFO: Created: latency-svc-95hfr May 1 16:05:19.187: INFO: Created: latency-svc-jhmk2 May 1 16:05:19.189: INFO: Got endpoints: latency-svc-95hfr [6.132708259s] May 1 16:05:19.266: INFO: Got endpoints: latency-svc-jhmk2 [6.131615551s] May 1 16:05:19.414: INFO: Created: latency-svc-9psm7 May 1 16:05:19.422: INFO: Got endpoints: latency-svc-9psm7 [6.148671648s] May 1 16:05:19.472: INFO: Created: latency-svc-twlwk May 1 16:05:19.495: INFO: Got endpoints: latency-svc-twlwk [5.957272264s] May 1 16:05:19.599: INFO: Created: latency-svc-tvf4b May 1 16:05:19.602: INFO: Got endpoints: latency-svc-tvf4b [5.829707738s] May 1 16:05:19.662: INFO: Created: latency-svc-kmr8j May 1 16:05:20.144: INFO: Got endpoints: latency-svc-kmr8j [5.501562529s] May 1 16:05:20.420: INFO: Created: latency-svc-lvmtr May 1 16:05:20.423: INFO: Got endpoints: latency-svc-lvmtr [5.591338998s] May 1 16:05:20.678: INFO: Created: latency-svc-jszgp May 1 16:05:20.680: INFO: Got endpoints: latency-svc-jszgp [5.327088518s] May 1 16:05:21.076: INFO: Created: latency-svc-lnc6t May 1 16:05:21.227: INFO: Created: latency-svc-xt442 May 1 16:05:21.227: INFO: Got endpoints: latency-svc-lnc6t [5.406924861s] May 1 16:05:21.252: INFO: Got endpoints: latency-svc-xt442 [4.653625266s] May 1 16:05:21.319: INFO: Created: latency-svc-pq8l9 May 1 16:05:21.359: INFO: Got endpoints: latency-svc-pq8l9 [4.711075548s] May 1 16:05:21.370: INFO: Created: latency-svc-dcw5z May 1 16:05:21.406: INFO: Got endpoints: latency-svc-dcw5z [3.999510953s] May 1 16:05:21.449: INFO: Created: latency-svc-zvf7l May 1 16:05:21.514: INFO: Got endpoints: latency-svc-zvf7l [3.794309986s] May 1 16:05:21.516: INFO: Created: latency-svc-m745w May 1 16:05:21.528: INFO: Got endpoints: latency-svc-m745w [3.264650396s] May 1 16:05:21.564: INFO: Created: latency-svc-4gcft May 1 16:05:21.613: INFO: Got endpoints: latency-svc-4gcft [2.863942006s] May 1 16:05:21.894: INFO: Created: latency-svc-dmrw4 May 1 16:05:22.018: INFO: Got endpoints: latency-svc-dmrw4 [2.828689912s] May 1 16:05:22.079: INFO: Created: latency-svc-x6fzl May 1 
16:05:22.239: INFO: Got endpoints: latency-svc-x6fzl [2.972715882s] May 1 16:05:22.267: INFO: Created: latency-svc-zb2sq May 1 16:05:22.320: INFO: Got endpoints: latency-svc-zb2sq [2.898094392s] May 1 16:05:22.426: INFO: Created: latency-svc-ndjbg May 1 16:05:22.448: INFO: Got endpoints: latency-svc-ndjbg [2.952464538s] May 1 16:05:22.466: INFO: Created: latency-svc-954wq May 1 16:05:22.483: INFO: Got endpoints: latency-svc-954wq [2.880463932s] May 1 16:05:22.517: INFO: Created: latency-svc-k6dnr May 1 16:05:22.610: INFO: Got endpoints: latency-svc-k6dnr [2.466165989s] May 1 16:05:22.656: INFO: Created: latency-svc-rhq88 May 1 16:05:22.958: INFO: Got endpoints: latency-svc-rhq88 [2.53509594s] May 1 16:05:23.203: INFO: Created: latency-svc-pdqzh May 1 16:05:23.419: INFO: Got endpoints: latency-svc-pdqzh [2.73878581s] May 1 16:05:24.309: INFO: Created: latency-svc-hwxdx May 1 16:05:24.309: INFO: Created: latency-svc-npbkp May 1 16:05:24.425: INFO: Got endpoints: latency-svc-npbkp [3.198361593s] May 1 16:05:24.426: INFO: Got endpoints: latency-svc-hwxdx [3.173637722s] May 1 16:05:24.477: INFO: Created: latency-svc-2rwvk May 1 16:05:24.509: INFO: Got endpoints: latency-svc-2rwvk [3.150342471s] May 1 16:05:24.605: INFO: Created: latency-svc-tjdbp May 1 16:05:24.635: INFO: Got endpoints: latency-svc-tjdbp [3.228658503s] May 1 16:05:24.669: INFO: Created: latency-svc-4rg4q May 1 16:05:24.696: INFO: Got endpoints: latency-svc-4rg4q [3.181458624s] May 1 16:05:24.838: INFO: Created: latency-svc-fp7dv May 1 16:05:24.846: INFO: Got endpoints: latency-svc-fp7dv [3.31735402s] May 1 16:05:24.891: INFO: Created: latency-svc-qt96h May 1 16:05:24.919: INFO: Got endpoints: latency-svc-qt96h [3.305669132s] May 1 16:05:24.988: INFO: Created: latency-svc-znlqz May 1 16:05:24.999: INFO: Got endpoints: latency-svc-znlqz [2.981440846s] May 1 16:05:25.035: INFO: Created: latency-svc-x4hm2 May 1 16:05:25.056: INFO: Got endpoints: latency-svc-x4hm2 [2.817015718s] May 1 16:05:25.168: INFO: Created: latency-svc-xdft2 May 1 16:05:25.171: INFO: Got endpoints: latency-svc-xdft2 [2.850304713s] May 1 16:05:25.204: INFO: Created: latency-svc-9qwgr May 1 16:05:25.231: INFO: Got endpoints: latency-svc-9qwgr [2.782614615s] May 1 16:05:25.250: INFO: Created: latency-svc-zbg7r May 1 16:05:25.266: INFO: Got endpoints: latency-svc-zbg7r [2.783724283s] May 1 16:05:25.318: INFO: Created: latency-svc-qmwdj May 1 16:05:25.320: INFO: Got endpoints: latency-svc-qmwdj [2.71011474s] May 1 16:05:25.348: INFO: Created: latency-svc-zhrx4 May 1 16:05:25.372: INFO: Got endpoints: latency-svc-zhrx4 [2.413345805s] May 1 16:05:25.408: INFO: Created: latency-svc-l52zc May 1 16:05:25.485: INFO: Got endpoints: latency-svc-l52zc [2.065471119s] May 1 16:05:25.488: INFO: Created: latency-svc-k59m6 May 1 16:05:25.495: INFO: Got endpoints: latency-svc-k59m6 [1.06988635s] May 1 16:05:25.546: INFO: Created: latency-svc-ptbcr May 1 16:05:25.556: INFO: Got endpoints: latency-svc-ptbcr [1.129717112s] May 1 16:05:25.664: INFO: Created: latency-svc-b47r6 May 1 16:05:25.676: INFO: Got endpoints: latency-svc-b47r6 [1.166524083s] May 1 16:05:25.706: INFO: Created: latency-svc-hn2ng May 1 16:05:25.724: INFO: Got endpoints: latency-svc-hn2ng [1.08896228s] May 1 16:05:25.808: INFO: Created: latency-svc-wx6nf May 1 16:05:25.832: INFO: Got endpoints: latency-svc-wx6nf [1.136246011s] May 1 16:05:25.870: INFO: Created: latency-svc-qw2ww May 1 16:05:25.886: INFO: Got endpoints: latency-svc-qw2ww [1.040590842s] May 1 16:05:26.006: INFO: Created: latency-svc-hfvqs May 1 
16:05:26.008: INFO: Got endpoints: latency-svc-hfvqs [1.088985564s] May 1 16:05:26.410: INFO: Created: latency-svc-knqv6 May 1 16:05:26.574: INFO: Got endpoints: latency-svc-knqv6 [1.574843116s] May 1 16:05:26.804: INFO: Created: latency-svc-pmd4c May 1 16:05:26.807: INFO: Got endpoints: latency-svc-pmd4c [1.751149387s] May 1 16:05:27.037: INFO: Created: latency-svc-sz8pk May 1 16:05:27.058: INFO: Got endpoints: latency-svc-sz8pk [1.887650319s] May 1 16:05:27.101: INFO: Created: latency-svc-dbffh May 1 16:05:27.122: INFO: Got endpoints: latency-svc-dbffh [1.891283862s] May 1 16:05:27.198: INFO: Created: latency-svc-xqg4j May 1 16:05:27.204: INFO: Got endpoints: latency-svc-xqg4j [1.937504864s] May 1 16:05:27.285: INFO: Created: latency-svc-9kqhc May 1 16:05:27.329: INFO: Got endpoints: latency-svc-9kqhc [2.00883579s] May 1 16:05:27.363: INFO: Created: latency-svc-bj247 May 1 16:05:27.375: INFO: Got endpoints: latency-svc-bj247 [2.002862215s] May 1 16:05:27.426: INFO: Created: latency-svc-fz52p May 1 16:05:27.490: INFO: Got endpoints: latency-svc-fz52p [2.005569118s] May 1 16:05:27.521: INFO: Created: latency-svc-vz26k May 1 16:05:27.558: INFO: Got endpoints: latency-svc-vz26k [2.062927272s] May 1 16:05:27.930: INFO: Created: latency-svc-mrnkr May 1 16:05:28.016: INFO: Got endpoints: latency-svc-mrnkr [2.460779852s] May 1 16:05:28.158: INFO: Created: latency-svc-4wwtv May 1 16:05:28.199: INFO: Got endpoints: latency-svc-4wwtv [2.523226834s] May 1 16:05:28.564: INFO: Created: latency-svc-cbwl8 May 1 16:05:28.574: INFO: Got endpoints: latency-svc-cbwl8 [2.850244684s] May 1 16:05:28.625: INFO: Created: latency-svc-qmxjw May 1 16:05:28.640: INFO: Got endpoints: latency-svc-qmxjw [2.807346408s] May 1 16:05:28.744: INFO: Created: latency-svc-s47s6 May 1 16:05:28.982: INFO: Got endpoints: latency-svc-s47s6 [3.095664393s] May 1 16:05:29.186: INFO: Created: latency-svc-j8nls May 1 16:05:29.266: INFO: Got endpoints: latency-svc-j8nls [3.257914785s] May 1 16:05:29.266: INFO: Created: latency-svc-b7t96 May 1 16:05:29.350: INFO: Got endpoints: latency-svc-b7t96 [2.775504565s] May 1 16:05:29.394: INFO: Created: latency-svc-2bj5f May 1 16:05:29.409: INFO: Got endpoints: latency-svc-2bj5f [2.601643347s] May 1 16:05:29.546: INFO: Created: latency-svc-dqktr May 1 16:05:29.724: INFO: Got endpoints: latency-svc-dqktr [2.665581302s] May 1 16:05:29.764: INFO: Created: latency-svc-vs8kz May 1 16:05:29.822: INFO: Got endpoints: latency-svc-vs8kz [2.70017052s] May 1 16:05:29.879: INFO: Created: latency-svc-5nfhk May 1 16:05:29.906: INFO: Got endpoints: latency-svc-5nfhk [2.70199837s] May 1 16:05:29.932: INFO: Created: latency-svc-4gk9t May 1 16:05:29.942: INFO: Got endpoints: latency-svc-4gk9t [2.61264158s] May 1 16:05:30.012: INFO: Created: latency-svc-7cz9q May 1 16:05:30.015: INFO: Got endpoints: latency-svc-7cz9q [2.639993087s] May 1 16:05:30.095: INFO: Created: latency-svc-2k4h6 May 1 16:05:30.282: INFO: Got endpoints: latency-svc-2k4h6 [2.791455819s] May 1 16:05:30.290: INFO: Created: latency-svc-cvx2q May 1 16:05:30.333: INFO: Got endpoints: latency-svc-cvx2q [2.774308313s] May 1 16:05:30.444: INFO: Created: latency-svc-4zxfw May 1 16:05:30.446: INFO: Got endpoints: latency-svc-4zxfw [2.429154431s] May 1 16:05:30.499: INFO: Created: latency-svc-9j6nc May 1 16:05:30.513: INFO: Got endpoints: latency-svc-9j6nc [2.313918682s] May 1 16:05:30.606: INFO: Created: latency-svc-zd64l May 1 16:05:30.609: INFO: Got endpoints: latency-svc-zd64l [2.034239555s] May 1 16:05:30.649: INFO: Created: latency-svc-bt4jz May 1 
16:05:30.688: INFO: Got endpoints: latency-svc-bt4jz [2.048002656s] May 1 16:05:30.820: INFO: Created: latency-svc-nsqnn May 1 16:05:30.833: INFO: Got endpoints: latency-svc-nsqnn [1.851075927s] May 1 16:05:30.888: INFO: Created: latency-svc-l5xnp May 1 16:05:30.970: INFO: Got endpoints: latency-svc-l5xnp [1.70396087s] May 1 16:05:30.984: INFO: Created: latency-svc-sxnn7 May 1 16:05:30.990: INFO: Got endpoints: latency-svc-sxnn7 [1.639752103s] May 1 16:05:31.050: INFO: Created: latency-svc-9qkbl May 1 16:05:31.125: INFO: Got endpoints: latency-svc-9qkbl [1.716100651s] May 1 16:05:31.152: INFO: Created: latency-svc-mv9wr May 1 16:05:31.162: INFO: Got endpoints: latency-svc-mv9wr [1.437870567s] May 1 16:05:31.182: INFO: Created: latency-svc-6krvg May 1 16:05:31.192: INFO: Got endpoints: latency-svc-6krvg [1.369776288s] May 1 16:05:31.192: INFO: Latencies: [119.965775ms 137.758759ms 164.086489ms 213.703796ms 293.839591ms 362.5812ms 700.154662ms 874.653927ms 951.522317ms 958.02165ms 969.038849ms 993.660218ms 1.018688294s 1.040590842s 1.040929878s 1.06988635s 1.071119097s 1.073546683s 1.08896228s 1.088985564s 1.112977836s 1.129717112s 1.136246011s 1.145358828s 1.15425655s 1.16597521s 1.166524083s 1.17916222s 1.179490907s 1.191150745s 1.208859932s 1.221524377s 1.239479507s 1.263142343s 1.317609036s 1.328400216s 1.333908298s 1.36962506s 1.369776288s 1.417131294s 1.437870567s 1.472834938s 1.493956938s 1.503709798s 1.569752019s 1.574843116s 1.588876091s 1.612631771s 1.639752103s 1.658088498s 1.703443804s 1.70396087s 1.704145322s 1.716100651s 1.716108872s 1.71745025s 1.718993133s 1.72156109s 1.730044457s 1.751149387s 1.781973052s 1.833400416s 1.847046566s 1.851075927s 1.86645708s 1.866761577s 1.884910879s 1.887650319s 1.891283862s 1.937504864s 1.999655192s 2.002862215s 2.005569118s 2.00883579s 2.034239555s 2.048002656s 2.053522416s 2.062927272s 2.065471119s 2.08699047s 2.10714403s 2.108537217s 2.135758623s 2.149839029s 2.159213514s 2.173795299s 2.186618165s 2.215657733s 2.23902689s 2.245318076s 2.26134207s 2.262087725s 2.267784606s 2.274979162s 2.286166981s 2.304333785s 2.313918682s 2.337002905s 2.354458616s 2.355104528s 2.357403482s 2.381103984s 2.382636002s 2.382677675s 2.409745466s 2.41045118s 2.413345805s 2.429154431s 2.437548351s 2.443655418s 2.455912233s 2.458088646s 2.460779852s 2.466165989s 2.471879052s 2.515680289s 2.523226834s 2.53509594s 2.550738663s 2.601643347s 2.603568584s 2.61264158s 2.639993087s 2.640025836s 2.654375451s 2.663721508s 2.665581302s 2.685998772s 2.70017052s 2.70199837s 2.71011474s 2.732673648s 2.73878581s 2.753068537s 2.773912714s 2.774308313s 2.775504565s 2.782614615s 2.783724283s 2.791455819s 2.807346408s 2.817015718s 2.828689912s 2.850244684s 2.850304713s 2.863942006s 2.876036298s 2.880463932s 2.898094392s 2.910289461s 2.938356725s 2.941931011s 2.952464538s 2.972715882s 2.981440846s 2.989937826s 2.992878076s 3.071389373s 3.095664393s 3.107077619s 3.123479292s 3.135825222s 3.150342471s 3.173637722s 3.181458624s 3.198361593s 3.228658503s 3.257914785s 3.264650396s 3.305669132s 3.31735402s 3.344278436s 3.37081774s 3.515628514s 3.52045944s 3.525695428s 3.555607751s 3.609953616s 3.626512775s 3.78982128s 3.794309986s 3.999510953s 4.170608075s 4.653625266s 4.711075548s 4.801940982s 4.86076434s 5.22220552s 5.249238977s 5.327088518s 5.406924861s 5.501562529s 5.590114316s 5.591338998s 5.823636234s 5.829707738s 5.957272264s 6.131615551s 6.132708259s 6.148671648s] May 1 16:05:31.192: INFO: 50 %ile: 2.357403482s May 1 16:05:31.192: INFO: 90 %ile: 3.794309986s May 1 16:05:31.192: 
INFO: 99 %ile: 6.132708259s May 1 16:05:31.192: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:05:31.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-rqzpf" for this suite. May 1 16:06:11.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:06:11.406: INFO: namespace: e2e-tests-svc-latency-rqzpf, resource: bindings, ignored listing per whitelist May 1 16:06:11.436: INFO: namespace e2e-tests-svc-latency-rqzpf deletion completed in 40.237115658s • [SLOW TEST:77.952 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:06:11.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 1 16:06:11.965: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7t4zh,SelfLink:/api/v1/namespaces/e2e-tests-watch-7t4zh/configmaps/e2e-watch-test-configmap-a,UID:ad3bff5f-8bc5-11ea-99e8-0242ac110002,ResourceVersion:8200871,Generation:0,CreationTimestamp:2020-05-01 16:06:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 1 16:06:11.965: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7t4zh,SelfLink:/api/v1/namespaces/e2e-tests-watch-7t4zh/configmaps/e2e-watch-test-configmap-a,UID:ad3bff5f-8bc5-11ea-99e8-0242ac110002,ResourceVersion:8200871,Generation:0,CreationTimestamp:2020-05-01 16:06:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 1 
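The 50, 90 and 99 %ile figures reported above are read straight out of the sorted latency list that precedes them. A minimal Go sketch of that lookup, using a tiny illustrative subset of the 200 samples (not the e2e framework's exact rounding):

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile sorts the samples and indexes into the sorted slice; for 200
// samples the 50th percentile is element 99 (0-based).
func percentile(samples []time.Duration, p int) time.Duration {
	if len(samples) == 0 {
		return 0
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	idx := len(samples)*p/100 - 1
	if idx < 0 {
		idx = 0
	}
	return samples[idx]
}

func main() {
	samples := []time.Duration{ // illustrative subset only
		146879858 * time.Nanosecond,
		2357403482 * time.Nanosecond,
		3794309986 * time.Nanosecond,
		6132708259 * time.Nanosecond,
	}
	fmt.Println("50 %ile of subset:", percentile(samples, 50))
}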
16:06:21.973: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7t4zh,SelfLink:/api/v1/namespaces/e2e-tests-watch-7t4zh/configmaps/e2e-watch-test-configmap-a,UID:ad3bff5f-8bc5-11ea-99e8-0242ac110002,ResourceVersion:8200891,Generation:0,CreationTimestamp:2020-05-01 16:06:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 1 16:06:21.973: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7t4zh,SelfLink:/api/v1/namespaces/e2e-tests-watch-7t4zh/configmaps/e2e-watch-test-configmap-a,UID:ad3bff5f-8bc5-11ea-99e8-0242ac110002,ResourceVersion:8200891,Generation:0,CreationTimestamp:2020-05-01 16:06:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 1 16:06:31.981: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7t4zh,SelfLink:/api/v1/namespaces/e2e-tests-watch-7t4zh/configmaps/e2e-watch-test-configmap-a,UID:ad3bff5f-8bc5-11ea-99e8-0242ac110002,ResourceVersion:8200911,Generation:0,CreationTimestamp:2020-05-01 16:06:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 1 16:06:31.981: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7t4zh,SelfLink:/api/v1/namespaces/e2e-tests-watch-7t4zh/configmaps/e2e-watch-test-configmap-a,UID:ad3bff5f-8bc5-11ea-99e8-0242ac110002,ResourceVersion:8200911,Generation:0,CreationTimestamp:2020-05-01 16:06:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 1 16:06:41.986: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7t4zh,SelfLink:/api/v1/namespaces/e2e-tests-watch-7t4zh/configmaps/e2e-watch-test-configmap-a,UID:ad3bff5f-8bc5-11ea-99e8-0242ac110002,ResourceVersion:8200931,Generation:0,CreationTimestamp:2020-05-01 16:06:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 1 16:06:41.986: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7t4zh,SelfLink:/api/v1/namespaces/e2e-tests-watch-7t4zh/configmaps/e2e-watch-test-configmap-a,UID:ad3bff5f-8bc5-11ea-99e8-0242ac110002,ResourceVersion:8200931,Generation:0,CreationTimestamp:2020-05-01 16:06:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 1 16:06:52.069: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7t4zh,SelfLink:/api/v1/namespaces/e2e-tests-watch-7t4zh/configmaps/e2e-watch-test-configmap-b,UID:c51a6b07-8bc5-11ea-99e8-0242ac110002,ResourceVersion:8200951,Generation:0,CreationTimestamp:2020-05-01 16:06:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 1 16:06:52.069: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7t4zh,SelfLink:/api/v1/namespaces/e2e-tests-watch-7t4zh/configmaps/e2e-watch-test-configmap-b,UID:c51a6b07-8bc5-11ea-99e8-0242ac110002,ResourceVersion:8200951,Generation:0,CreationTimestamp:2020-05-01 16:06:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 1 16:07:02.075: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7t4zh,SelfLink:/api/v1/namespaces/e2e-tests-watch-7t4zh/configmaps/e2e-watch-test-configmap-b,UID:c51a6b07-8bc5-11ea-99e8-0242ac110002,ResourceVersion:8200971,Generation:0,CreationTimestamp:2020-05-01 16:06:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 1 16:07:02.075: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7t4zh,SelfLink:/api/v1/namespaces/e2e-tests-watch-7t4zh/configmaps/e2e-watch-test-configmap-b,UID:c51a6b07-8bc5-11ea-99e8-0242ac110002,ResourceVersion:8200971,Generation:0,CreationTimestamp:2020-05-01 16:06:51 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:07:12.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-7t4zh" for this suite. May 1 16:07:18.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:07:18.169: INFO: namespace: e2e-tests-watch-7t4zh, resource: bindings, ignored listing per whitelist May 1 16:07:18.192: INFO: namespace e2e-tests-watch-7t4zh deletion completed in 6.110743694s • [SLOW TEST:66.756 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:07:18.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults May 1 16:07:18.410: INFO: Waiting up to 5m0s for pod "client-containers-d4caa20a-8bc5-11ea-acf7-0242ac110017" in namespace "e2e-tests-containers-pf6wq" to be "success or failure" May 1 16:07:18.457: INFO: Pod "client-containers-d4caa20a-8bc5-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 46.918902ms May 1 16:07:20.462: INFO: Pod "client-containers-d4caa20a-8bc5-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051301662s May 1 16:07:22.465: INFO: Pod "client-containers-d4caa20a-8bc5-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054809776s May 1 16:07:24.468: INFO: Pod "client-containers-d4caa20a-8bc5-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
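The [sig-api-machinery] Watchers spec above comes down to opening label-filtered watches on configmaps and checking that each watcher sees only the ADDED, MODIFIED and DELETED events for its own label. A minimal client-go sketch of one such watch; the namespace and label value are illustrative, and the context-free Watch signature matches client-go of the v1.13 era (newer releases take a context.Context first):

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Watch only configmaps carrying the label the test puts on configmap A.
	w, err := cs.CoreV1().ConfigMaps("default").Watch(metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
	}
}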
Elapsed: 6.058003542s STEP: Saw pod success May 1 16:07:24.468: INFO: Pod "client-containers-d4caa20a-8bc5-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:07:24.471: INFO: Trying to get logs from node hunter-worker2 pod client-containers-d4caa20a-8bc5-11ea-acf7-0242ac110017 container test-container: STEP: delete the pod May 1 16:07:24.650: INFO: Waiting for pod client-containers-d4caa20a-8bc5-11ea-acf7-0242ac110017 to disappear May 1 16:07:24.739: INFO: Pod client-containers-d4caa20a-8bc5-11ea-acf7-0242ac110017 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:07:24.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-pf6wq" for this suite. May 1 16:07:32.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:07:32.902: INFO: namespace: e2e-tests-containers-pf6wq, resource: bindings, ignored listing per whitelist May 1 16:07:32.909: INFO: namespace e2e-tests-containers-pf6wq deletion completed in 8.165286473s • [SLOW TEST:14.717 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:07:32.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-ddbb3a64-8bc5-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume secrets May 1 16:07:33.321: INFO: Waiting up to 5m0s for pod "pod-secrets-ddbc3e7e-8bc5-11ea-acf7-0242ac110017" in namespace "e2e-tests-secrets-9z9n8" to be "success or failure" May 1 16:07:33.523: INFO: Pod "pod-secrets-ddbc3e7e-8bc5-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 201.98528ms May 1 16:07:35.667: INFO: Pod "pod-secrets-ddbc3e7e-8bc5-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.34574386s May 1 16:07:37.670: INFO: Pod "pod-secrets-ddbc3e7e-8bc5-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.349308513s May 1 16:07:39.726: INFO: Pod "pod-secrets-ddbc3e7e-8bc5-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
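The Docker Containers spec above depends on a single rule: when a container's Command and Args are both left unset, the image's own ENTRYPOINT and CMD are used. A short sketch of such a container spec; the image name is an assumption, not the one the test uses:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Command and Args are deliberately omitted, so the kubelet falls back to
	// the image's ENTRYPOINT and CMD.
	c := corev1.Container{
		Name:  "test-container",
		Image: "docker.io/library/busybox:1.29", // illustrative image
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}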
Elapsed: 6.405470809s STEP: Saw pod success May 1 16:07:39.726: INFO: Pod "pod-secrets-ddbc3e7e-8bc5-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:07:39.728: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-ddbc3e7e-8bc5-11ea-acf7-0242ac110017 container secret-volume-test: STEP: delete the pod May 1 16:07:40.848: INFO: Waiting for pod pod-secrets-ddbc3e7e-8bc5-11ea-acf7-0242ac110017 to disappear May 1 16:07:40.889: INFO: Pod pod-secrets-ddbc3e7e-8bc5-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:07:40.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-9z9n8" for this suite. May 1 16:07:47.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:07:47.325: INFO: namespace: e2e-tests-secrets-9z9n8, resource: bindings, ignored listing per whitelist May 1 16:07:47.361: INFO: namespace e2e-tests-secrets-9z9n8 deletion completed in 6.468958274s • [SLOW TEST:14.451 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:07:47.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server May 1 16:07:47.443: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:07:47.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-rcp7r" for this suite. 
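The [sig-storage] Secrets spec above mounts a secret volume into a non-root pod and verifies the resulting file mode and group ownership. The two knobs involved are DefaultMode on the secret volume and FSGroup (plus RunAsUser) in the pod security context; a sketch with assumed names and values, not the test's own manifest:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }
func int64Ptr(i int64) *int64 { return &i }

func main() {
	spec := corev1.PodSpec{
		// fsGroup sets the group ownership of the mounted files; runAsUser
		// makes the pod non-root.
		SecurityContext: &corev1.PodSecurityContext{
			FSGroup:   int64Ptr(1001),
			RunAsUser: int64Ptr(1000),
		},
		Containers: []corev1.Container{{
			Name:  "secret-volume-test",
			Image: "docker.io/library/busybox:1.29",
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "secret-volume",
				MountPath: "/etc/secret-volume",
			}},
		}},
		Volumes: []corev1.Volume{{
			Name: "secret-volume",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{
					SecretName:  "my-secret",    // illustrative
					DefaultMode: int32Ptr(0440), // file mode the pod then checks
				},
			},
		}},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}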
May 1 16:07:53.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:07:53.574: INFO: namespace: e2e-tests-kubectl-rcp7r, resource: bindings, ignored listing per whitelist May 1 16:07:53.622: INFO: namespace e2e-tests-kubectl-rcp7r deletion completed in 6.095197742s • [SLOW TEST:6.261 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:07:53.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 1 16:07:53.824: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:07:53.826: INFO: Number of nodes with available pods: 0 May 1 16:07:53.826: INFO: Node hunter-worker is running more than one daemon pod May 1 16:07:54.831: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:07:54.834: INFO: Number of nodes with available pods: 0 May 1 16:07:54.834: INFO: Node hunter-worker is running more than one daemon pod May 1 16:07:56.142: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:07:56.422: INFO: Number of nodes with available pods: 0 May 1 16:07:56.422: INFO: Node hunter-worker is running more than one daemon pod May 1 16:07:56.830: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:07:56.833: INFO: Number of nodes with available pods: 0 May 1 16:07:56.833: INFO: Node hunter-worker is running more than one daemon pod May 1 16:07:57.831: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:07:57.834: INFO: Number of nodes with available pods: 0 May 1 16:07:57.835: INFO: Node hunter-worker is running more than one daemon pod May 1 16:07:58.919: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints 
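The Kubectl proxy spec above starts 'kubectl proxy -p 0 --disable-filter', picks the ephemeral port up from the proxy's stdout, and then requests /api/ through it. A rough os/exec sketch of that flow; the kubeconfig path and the exact wording of the proxy's startup line are assumptions:

package main

import (
	"bufio"
	"fmt"
	"io/ioutil"
	"net/http"
	"os/exec"
	"regexp"
)

func main() {
	cmd := exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
		"proxy", "-p", "0", "--disable-filter")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	defer cmd.Process.Kill()

	// kubectl proxy reports something like "Starting to serve on 127.0.0.1:37651".
	port := ""
	re := regexp.MustCompile(`:(\d+)$`)
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			port = m[1]
			break
		}
	}

	resp, err := http.Get("http://127.0.0.1:" + port + "/api/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(body))
}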
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:07:58.974: INFO: Number of nodes with available pods: 0 May 1 16:07:58.974: INFO: Node hunter-worker is running more than one daemon pod May 1 16:07:59.831: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:07:59.835: INFO: Number of nodes with available pods: 0 May 1 16:07:59.835: INFO: Node hunter-worker is running more than one daemon pod May 1 16:08:00.830: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:08:00.834: INFO: Number of nodes with available pods: 2 May 1 16:08:00.834: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 1 16:08:00.859: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:08:00.863: INFO: Number of nodes with available pods: 1 May 1 16:08:00.863: INFO: Node hunter-worker is running more than one daemon pod May 1 16:08:01.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:08:01.871: INFO: Number of nodes with available pods: 1 May 1 16:08:01.871: INFO: Node hunter-worker is running more than one daemon pod May 1 16:08:02.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:08:02.872: INFO: Number of nodes with available pods: 1 May 1 16:08:02.872: INFO: Node hunter-worker is running more than one daemon pod May 1 16:08:03.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:08:03.871: INFO: Number of nodes with available pods: 1 May 1 16:08:03.871: INFO: Node hunter-worker is running more than one daemon pod May 1 16:08:04.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:08:04.871: INFO: Number of nodes with available pods: 1 May 1 16:08:04.871: INFO: Node hunter-worker is running more than one daemon pod May 1 16:08:05.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:08:05.870: INFO: Number of nodes with available pods: 1 May 1 16:08:05.870: INFO: Node hunter-worker is running more than one daemon pod May 1 16:08:06.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:08:06.870: INFO: Number of nodes with available pods: 1 May 1 16:08:06.870: INFO: Node hunter-worker is running more than one daemon pod May 1 16:08:07.875: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking 
this node May 1 16:08:07.877: INFO: Number of nodes with available pods: 1 May 1 16:08:07.877: INFO: Node hunter-worker is running more than one daemon pod May 1 16:08:08.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:08:08.872: INFO: Number of nodes with available pods: 1 May 1 16:08:08.872: INFO: Node hunter-worker is running more than one daemon pod May 1 16:08:09.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:08:09.872: INFO: Number of nodes with available pods: 1 May 1 16:08:09.872: INFO: Node hunter-worker is running more than one daemon pod May 1 16:08:10.869: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:08:10.872: INFO: Number of nodes with available pods: 1 May 1 16:08:10.872: INFO: Node hunter-worker is running more than one daemon pod May 1 16:08:11.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:08:11.871: INFO: Number of nodes with available pods: 1 May 1 16:08:11.871: INFO: Node hunter-worker is running more than one daemon pod May 1 16:08:12.974: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:08:12.977: INFO: Number of nodes with available pods: 1 May 1 16:08:12.977: INFO: Node hunter-worker is running more than one daemon pod May 1 16:08:13.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:08:13.871: INFO: Number of nodes with available pods: 1 May 1 16:08:13.871: INFO: Node hunter-worker is running more than one daemon pod May 1 16:08:14.879: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:08:14.882: INFO: Number of nodes with available pods: 1 May 1 16:08:14.882: INFO: Node hunter-worker is running more than one daemon pod May 1 16:08:15.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:08:15.871: INFO: Number of nodes with available pods: 2 May 1 16:08:15.871: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-l2t28, will wait for the garbage collector to delete the pods May 1 16:08:15.933: INFO: Deleting DaemonSet.extensions daemon-set took: 7.095648ms May 1 16:08:16.034: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.259921ms May 1 16:08:22.164: INFO: Number of nodes with available pods: 0 May 1 16:08:22.164: INFO: Number of running nodes: 0, number of available pods: 0 May 1 16:08:22.167: INFO: 
daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-l2t28/daemonsets","resourceVersion":"8201243"},"items":null} May 1 16:08:22.169: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-l2t28/pods","resourceVersion":"8201243"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:08:22.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-l2t28" for this suite. May 1 16:08:30.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:08:30.326: INFO: namespace: e2e-tests-daemonsets-l2t28, resource: bindings, ignored listing per whitelist May 1 16:08:30.339: INFO: namespace e2e-tests-daemonsets-l2t28 deletion completed in 8.157242918s • [SLOW TEST:36.716 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:08:30.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 1 16:08:30.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-cwgdm' May 1 16:08:37.374: INFO: stderr: "" May 1 16:08:37.374: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created May 1 16:08:42.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-cwgdm -o json' May 1 16:08:42.574: INFO: stderr: "" May 1 16:08:42.574: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-01T16:08:37Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-cwgdm\",\n \"resourceVersion\": \"8201316\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-cwgdm/pods/e2e-test-nginx-pod\",\n \"uid\": 
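Each "DaemonSet pods can't tolerate node hunter-control-plane" line in the Daemon set spec above is the check skipping the control-plane node because of its node-role.kubernetes.io/master:NoSchedule taint; only the two worker nodes are expected to run daemon pods. For illustration (the test itself does not add this), the toleration that would let a DaemonSet also land on that node looks like:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Tolerate the master NoSchedule taint so daemon pods could also be
	// scheduled onto the control-plane node.
	tol := corev1.Toleration{
		Key:      "node-role.kubernetes.io/master",
		Operator: corev1.TolerationOpExists,
		Effect:   corev1.TaintEffectNoSchedule,
	}
	out, _ := json.MarshalIndent(tol, "", "  ")
	fmt.Println(string(out))
}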
\"03e84cd2-8bc6-11ea-99e8-0242ac110002\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-fk6dc\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-fk6dc\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-fk6dc\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-01T16:08:37Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-01T16:08:42Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-01T16:08:42Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-01T16:08:37Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://d9f80adef4b4b46e26f8723261d78eee6b6258a179b56dd91ef092878061944b\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-01T16:08:41Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.3\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.62\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-01T16:08:37Z\"\n }\n}\n" STEP: replace the image in the pod May 1 16:08:42.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-cwgdm' May 1 16:08:42.876: INFO: stderr: "" May 1 16:08:42.876: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 May 1 16:08:42.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-cwgdm' May 1 16:08:51.631: INFO: stderr: "" May 1 16:08:51.631: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:08:51.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "e2e-tests-kubectl-cwgdm" for this suite. May 1 16:08:57.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:08:57.847: INFO: namespace: e2e-tests-kubectl-cwgdm, resource: bindings, ignored listing per whitelist May 1 16:08:57.883: INFO: namespace e2e-tests-kubectl-cwgdm deletion completed in 6.142429658s • [SLOW TEST:27.544 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:08:57.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 1 16:08:58.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-g772g' May 1 16:08:58.163: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 1 16:08:58.163: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc May 1 16:08:58.179: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-ntx9q] May 1 16:08:58.179: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-ntx9q" in namespace "e2e-tests-kubectl-g772g" to be "running and ready" May 1 16:08:58.214: INFO: Pod "e2e-test-nginx-rc-ntx9q": Phase="Pending", Reason="", readiness=false. Elapsed: 35.665137ms May 1 16:09:00.218: INFO: Pod "e2e-test-nginx-rc-ntx9q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038977772s May 1 16:09:02.555: INFO: Pod "e2e-test-nginx-rc-ntx9q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.37601565s May 1 16:09:04.558: INFO: Pod "e2e-test-nginx-rc-ntx9q": Phase="Running", Reason="", readiness=true. Elapsed: 6.379889951s May 1 16:09:04.559: INFO: Pod "e2e-test-nginx-rc-ntx9q" satisfied condition "running and ready" May 1 16:09:04.559: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-ntx9q] May 1 16:09:04.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-g772g' May 1 16:09:04.824: INFO: stderr: "" May 1 16:09:04.824: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 May 1 16:09:04.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-g772g' May 1 16:09:05.007: INFO: stderr: "" May 1 16:09:05.007: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:09:05.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-g772g" for this suite. May 1 16:09:27.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:09:27.430: INFO: namespace: e2e-tests-kubectl-g772g, resource: bindings, ignored listing per whitelist May 1 16:09:27.459: INFO: namespace e2e-tests-kubectl-g772g deletion completed in 22.44814402s • [SLOW TEST:29.576 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:09:27.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:09:32.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-dpzp4" for this suite. 
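For reference, the read-only root filesystem check exercised by the Kubelet test above can be reproduced by hand. The log does not include the test pod itself, so the sketch below is only an illustrative equivalent (pod name, user, and command are assumptions, not taken from the test):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo hello > /file || echo 'root filesystem is read-only'"]
    securityContext:
      readOnlyRootFilesystem: true   # writes to the root filesystem are expected to fail
EOF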
May 1 16:10:12.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:10:12.151: INFO: namespace: e2e-tests-kubelet-test-dpzp4, resource: bindings, ignored listing per whitelist May 1 16:10:12.235: INFO: namespace e2e-tests-kubelet-test-dpzp4 deletion completed in 40.110648656s • [SLOW TEST:44.776 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:10:12.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-vh7zq/secret-test-3cdc2570-8bc6-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume secrets May 1 16:10:13.213: INFO: Waiting up to 5m0s for pod "pod-configmaps-3cdf899e-8bc6-11ea-acf7-0242ac110017" in namespace "e2e-tests-secrets-vh7zq" to be "success or failure" May 1 16:10:13.686: INFO: Pod "pod-configmaps-3cdf899e-8bc6-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 472.785545ms May 1 16:10:15.690: INFO: Pod "pod-configmaps-3cdf899e-8bc6-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.47686196s May 1 16:10:17.726: INFO: Pod "pod-configmaps-3cdf899e-8bc6-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.512865821s STEP: Saw pod success May 1 16:10:17.726: INFO: Pod "pod-configmaps-3cdf899e-8bc6-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:10:17.729: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-3cdf899e-8bc6-11ea-acf7-0242ac110017 container env-test: STEP: delete the pod May 1 16:10:18.065: INFO: Waiting for pod pod-configmaps-3cdf899e-8bc6-11ea-acf7-0242ac110017 to disappear May 1 16:10:18.127: INFO: Pod pod-configmaps-3cdf899e-8bc6-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:10:18.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-vh7zq" for this suite. 
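The environment-variable consumption verified above boils down to a secretKeyRef in the pod spec. A minimal hand-run equivalent, assuming an illustrative secret name, key, and variable name (only the container name env-test is taken from the log):

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo              # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: demo-secret
          key: data-1
EOF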
May 1 16:10:24.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:10:24.253: INFO: namespace: e2e-tests-secrets-vh7zq, resource: bindings, ignored listing per whitelist May 1 16:10:24.338: INFO: namespace e2e-tests-secrets-vh7zq deletion completed in 6.207359273s • [SLOW TEST:12.103 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:10:24.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs May 1 16:10:24.579: INFO: Waiting up to 5m0s for pod "pod-43cfb885-8bc6-11ea-acf7-0242ac110017" in namespace "e2e-tests-emptydir-46nt4" to be "success or failure" May 1 16:10:24.594: INFO: Pod "pod-43cfb885-8bc6-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 14.295977ms May 1 16:10:26.598: INFO: Pod "pod-43cfb885-8bc6-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018731225s May 1 16:10:28.603: INFO: Pod "pod-43cfb885-8bc6-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023202112s STEP: Saw pod success May 1 16:10:28.603: INFO: Pod "pod-43cfb885-8bc6-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:10:28.606: INFO: Trying to get logs from node hunter-worker pod pod-43cfb885-8bc6-11ea-acf7-0242ac110017 container test-container: STEP: delete the pod May 1 16:10:28.700: INFO: Waiting for pod pod-43cfb885-8bc6-11ea-acf7-0242ac110017 to disappear May 1 16:10:28.729: INFO: Pod pod-43cfb885-8bc6-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:10:28.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-46nt4" for this suite. 
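The (non-root,0666,tmpfs) case above combines a memory-backed emptyDir with a non-root security context. The log does not show the test pod, so the manifest below is only an illustrative equivalent (names, UID, and command are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo          # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                  # any non-root UID
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /mnt/volume/f && chmod 0666 /mnt/volume/f && ls -l /mnt/volume"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/volume
  volumes:
  - name: vol
    emptyDir:
      medium: Memory                 # tmpfs-backed emptyDir
EOF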
May 1 16:10:34.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:10:34.783: INFO: namespace: e2e-tests-emptydir-46nt4, resource: bindings, ignored listing per whitelist May 1 16:10:34.814: INFO: namespace e2e-tests-emptydir-46nt4 deletion completed in 6.081457441s • [SLOW TEST:10.476 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:10:34.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-tfwn STEP: Creating a pod to test atomic-volume-subpath May 1 16:10:34.974: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-tfwn" in namespace "e2e-tests-subpath-nf9tk" to be "success or failure" May 1 16:10:35.100: INFO: Pod "pod-subpath-test-projected-tfwn": Phase="Pending", Reason="", readiness=false. Elapsed: 126.366987ms May 1 16:10:37.127: INFO: Pod "pod-subpath-test-projected-tfwn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15272559s May 1 16:10:39.226: INFO: Pod "pod-subpath-test-projected-tfwn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.25214568s May 1 16:10:41.230: INFO: Pod "pod-subpath-test-projected-tfwn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.255883273s May 1 16:10:43.234: INFO: Pod "pod-subpath-test-projected-tfwn": Phase="Running", Reason="", readiness=true. Elapsed: 8.259947571s May 1 16:10:45.239: INFO: Pod "pod-subpath-test-projected-tfwn": Phase="Running", Reason="", readiness=false. Elapsed: 10.264707534s May 1 16:10:47.244: INFO: Pod "pod-subpath-test-projected-tfwn": Phase="Running", Reason="", readiness=false. Elapsed: 12.269418094s May 1 16:10:49.248: INFO: Pod "pod-subpath-test-projected-tfwn": Phase="Running", Reason="", readiness=false. Elapsed: 14.274237375s May 1 16:10:51.252: INFO: Pod "pod-subpath-test-projected-tfwn": Phase="Running", Reason="", readiness=false. Elapsed: 16.278327423s May 1 16:10:53.257: INFO: Pod "pod-subpath-test-projected-tfwn": Phase="Running", Reason="", readiness=false. Elapsed: 18.282452431s May 1 16:10:55.261: INFO: Pod "pod-subpath-test-projected-tfwn": Phase="Running", Reason="", readiness=false. Elapsed: 20.286495473s May 1 16:10:57.265: INFO: Pod "pod-subpath-test-projected-tfwn": Phase="Running", Reason="", readiness=false. Elapsed: 22.290939483s May 1 16:10:59.269: INFO: Pod "pod-subpath-test-projected-tfwn": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.295336463s May 1 16:11:01.274: INFO: Pod "pod-subpath-test-projected-tfwn": Phase="Running", Reason="", readiness=false. Elapsed: 26.299595231s May 1 16:11:03.278: INFO: Pod "pod-subpath-test-projected-tfwn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.304301555s STEP: Saw pod success May 1 16:11:03.278: INFO: Pod "pod-subpath-test-projected-tfwn" satisfied condition "success or failure" May 1 16:11:03.284: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-projected-tfwn container test-container-subpath-projected-tfwn: STEP: delete the pod May 1 16:11:03.627: INFO: Waiting for pod pod-subpath-test-projected-tfwn to disappear May 1 16:11:03.771: INFO: Pod pod-subpath-test-projected-tfwn no longer exists STEP: Deleting pod pod-subpath-test-projected-tfwn May 1 16:11:03.771: INFO: Deleting pod "pod-subpath-test-projected-tfwn" in namespace "e2e-tests-subpath-nf9tk" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:11:03.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-nf9tk" for this suite. May 1 16:11:11.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:11:11.857: INFO: namespace: e2e-tests-subpath-nf9tk, resource: bindings, ignored listing per whitelist May 1 16:11:11.867: INFO: namespace e2e-tests-subpath-nf9tk deletion completed in 8.089206835s • [SLOW TEST:37.053 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:11:11.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-6013f0a9-8bc6-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume secrets May 1 16:11:12.015: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-60166e03-8bc6-11ea-acf7-0242ac110017" in namespace "e2e-tests-projected-2z5jp" to be "success or failure" May 1 16:11:12.149: INFO: Pod "pod-projected-secrets-60166e03-8bc6-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 133.894882ms May 1 16:11:14.153: INFO: Pod "pod-projected-secrets-60166e03-8bc6-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.138131657s May 1 16:11:16.157: INFO: Pod "pod-projected-secrets-60166e03-8bc6-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141821736s May 1 16:11:18.162: INFO: Pod "pod-projected-secrets-60166e03-8bc6-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.14670758s STEP: Saw pod success May 1 16:11:18.162: INFO: Pod "pod-projected-secrets-60166e03-8bc6-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:11:18.165: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-60166e03-8bc6-11ea-acf7-0242ac110017 container projected-secret-volume-test: STEP: delete the pod May 1 16:11:18.196: INFO: Waiting for pod pod-projected-secrets-60166e03-8bc6-11ea-acf7-0242ac110017 to disappear May 1 16:11:18.292: INFO: Pod pod-projected-secrets-60166e03-8bc6-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:11:18.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2z5jp" for this suite. May 1 16:11:24.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:11:24.380: INFO: namespace: e2e-tests-projected-2z5jp, resource: bindings, ignored listing per whitelist May 1 16:11:24.388: INFO: namespace e2e-tests-projected-2z5jp deletion completed in 6.091883167s • [SLOW TEST:12.521 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:11:24.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-vnhrm in namespace e2e-tests-proxy-nb8d9 I0501 16:11:24.634009 6 runners.go:184] Created replication controller with name: proxy-service-vnhrm, namespace: e2e-tests-proxy-nb8d9, replica count: 1 I0501 16:11:25.684473 6 runners.go:184] proxy-service-vnhrm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 16:11:26.684677 6 runners.go:184] proxy-service-vnhrm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 16:11:27.684850 6 runners.go:184] proxy-service-vnhrm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 16:11:28.685087 6 runners.go:184] proxy-service-vnhrm 
Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 16:11:29.685449 6 runners.go:184] proxy-service-vnhrm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 16:11:30.685673 6 runners.go:184] proxy-service-vnhrm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 16:11:31.685895 6 runners.go:184] proxy-service-vnhrm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 16:11:32.686100 6 runners.go:184] proxy-service-vnhrm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 16:11:33.686348 6 runners.go:184] proxy-service-vnhrm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 16:11:34.686580 6 runners.go:184] proxy-service-vnhrm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 16:11:35.686843 6 runners.go:184] proxy-service-vnhrm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 16:11:36.687020 6 runners.go:184] proxy-service-vnhrm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 16:11:37.687207 6 runners.go:184] proxy-service-vnhrm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 16:11:38.687423 6 runners.go:184] proxy-service-vnhrm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 16:11:39.687606 6 runners.go:184] proxy-service-vnhrm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 16:11:40.687877 6 runners.go:184] proxy-service-vnhrm Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 1 16:11:40.880: INFO: setup took 16.376891169s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 1 16:11:40.902: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-nb8d9/pods/proxy-service-vnhrm-pfh94/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin May 1 16:12:00.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-bwxk9 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 1 16:12:04.681: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0501 16:12:04.612405 1420 log.go:172] (0xc0001389a0) (0xc000962140) Create stream\nI0501 16:12:04.612457 1420 log.go:172] (0xc0001389a0) (0xc000962140) Stream added, broadcasting: 1\nI0501 16:12:04.614806 1420 log.go:172] (0xc0001389a0) Reply frame received for 1\nI0501 16:12:04.614853 1420 log.go:172] (0xc0001389a0) (0xc0009621e0) Create stream\nI0501 16:12:04.614866 1420 log.go:172] (0xc0001389a0) (0xc0009621e0) Stream added, broadcasting: 3\nI0501 16:12:04.615627 1420 log.go:172] (0xc0001389a0) Reply frame received for 3\nI0501 16:12:04.615664 1420 log.go:172] (0xc0001389a0) (0xc000868820) Create stream\nI0501 16:12:04.615675 1420 log.go:172] (0xc0001389a0) (0xc000868820) Stream added, broadcasting: 5\nI0501 16:12:04.616539 1420 log.go:172] (0xc0001389a0) Reply frame received for 5\nI0501 16:12:04.616590 1420 log.go:172] (0xc0001389a0) (0xc000a7e000) Create stream\nI0501 16:12:04.616617 1420 log.go:172] (0xc0001389a0) (0xc000a7e000) Stream added, broadcasting: 7\nI0501 16:12:04.617392 1420 log.go:172] (0xc0001389a0) Reply frame received for 7\nI0501 16:12:04.617556 1420 log.go:172] (0xc0009621e0) (3) Writing data frame\nI0501 16:12:04.617673 1420 log.go:172] (0xc0009621e0) (3) Writing data frame\nI0501 16:12:04.618368 1420 log.go:172] (0xc0001389a0) Data frame received for 5\nI0501 16:12:04.618379 1420 log.go:172] (0xc000868820) (5) Data frame handling\nI0501 16:12:04.618386 1420 log.go:172] (0xc000868820) (5) Data frame sent\nI0501 16:12:04.618758 1420 log.go:172] (0xc0001389a0) Data frame received for 5\nI0501 16:12:04.618769 1420 log.go:172] (0xc000868820) (5) Data frame handling\nI0501 16:12:04.618780 1420 log.go:172] (0xc000868820) (5) Data frame sent\nI0501 16:12:04.658271 1420 log.go:172] (0xc0001389a0) Data frame received for 5\nI0501 16:12:04.658305 1420 log.go:172] (0xc000868820) (5) Data frame handling\nI0501 16:12:04.658340 1420 log.go:172] (0xc0001389a0) Data frame received for 7\nI0501 16:12:04.658349 1420 log.go:172] (0xc000a7e000) (7) Data frame handling\nI0501 16:12:04.658472 1420 log.go:172] (0xc0001389a0) (0xc0009621e0) Stream removed, broadcasting: 3\nI0501 16:12:04.658504 1420 log.go:172] (0xc0001389a0) Data frame received for 1\nI0501 16:12:04.658519 1420 log.go:172] (0xc000962140) (1) Data frame handling\nI0501 16:12:04.658525 1420 log.go:172] (0xc000962140) (1) Data frame sent\nI0501 16:12:04.658530 1420 log.go:172] (0xc0001389a0) (0xc000962140) Stream removed, broadcasting: 1\nI0501 16:12:04.658599 1420 log.go:172] (0xc0001389a0) (0xc000962140) Stream removed, broadcasting: 1\nI0501 16:12:04.658612 1420 log.go:172] (0xc0001389a0) (0xc0009621e0) Stream removed, broadcasting: 3\nI0501 16:12:04.658619 1420 log.go:172] (0xc0001389a0) (0xc000868820) Stream removed, broadcasting: 5\nI0501 16:12:04.658733 1420 log.go:172] (0xc0001389a0) (0xc000a7e000) Stream removed, broadcasting: 7\nI0501 16:12:04.658906 1420 log.go:172] (0xc0001389a0) Go away received\n" May 1 16:12:04.681: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:12:06.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-bwxk9" for this suite. 
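Stripped of the framework plumbing, the --rm job flow above is a single kubectl invocation in the current namespace (flags and job name taken from the command quoted in the log; the stderr above already notes that --generator=job/v1 is deprecated):

kubectl run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 \
  --rm=true --generator=job/v1 --restart=OnFailure \
  --attach=true --stdin -- sh -c 'cat && echo stdin closed'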
May 1 16:12:15.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:12:15.139: INFO: namespace: e2e-tests-kubectl-bwxk9, resource: bindings, ignored listing per whitelist May 1 16:12:15.148: INFO: namespace e2e-tests-kubectl-bwxk9 deletion completed in 8.239643166s • [SLOW TEST:14.859 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:12:15.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller May 1 16:12:15.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gkxft' May 1 16:12:16.852: INFO: stderr: "" May 1 16:12:16.852: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 1 16:12:16.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gkxft' May 1 16:12:17.026: INFO: stderr: "" May 1 16:12:17.026: INFO: stdout: "update-demo-nautilus-fqfc5 " STEP: Replicas for name=update-demo: expected=2 actual=1 May 1 16:12:22.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gkxft' May 1 16:12:22.142: INFO: stderr: "" May 1 16:12:22.142: INFO: stdout: "update-demo-nautilus-fqfc5 update-demo-nautilus-w8zk6 " May 1 16:12:22.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fqfc5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gkxft' May 1 16:12:22.276: INFO: stderr: "" May 1 16:12:22.276: INFO: stdout: "true" May 1 16:12:22.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fqfc5 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gkxft' May 1 16:12:22.547: INFO: stderr: "" May 1 16:12:22.547: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 1 16:12:22.547: INFO: validating pod update-demo-nautilus-fqfc5 May 1 16:12:22.781: INFO: got data: { "image": "nautilus.jpg" } May 1 16:12:22.781: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 1 16:12:22.781: INFO: update-demo-nautilus-fqfc5 is verified up and running May 1 16:12:22.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w8zk6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gkxft' May 1 16:12:23.097: INFO: stderr: "" May 1 16:12:23.097: INFO: stdout: "true" May 1 16:12:23.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w8zk6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gkxft' May 1 16:12:23.342: INFO: stderr: "" May 1 16:12:23.342: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 1 16:12:23.342: INFO: validating pod update-demo-nautilus-w8zk6 May 1 16:12:23.346: INFO: got data: { "image": "nautilus.jpg" } May 1 16:12:23.346: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 1 16:12:23.346: INFO: update-demo-nautilus-w8zk6 is verified up and running STEP: using delete to clean up resources May 1 16:12:23.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-gkxft' May 1 16:12:23.825: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 1 16:12:23.825: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 1 16:12:23.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-gkxft' May 1 16:12:24.417: INFO: stderr: "No resources found.\n" May 1 16:12:24.417: INFO: stdout: "" May 1 16:12:24.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-gkxft -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 1 16:12:24.963: INFO: stderr: "" May 1 16:12:24.963: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:12:24.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-gkxft" for this suite. 
May 1 16:12:47.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:12:47.652: INFO: namespace: e2e-tests-kubectl-gkxft, resource: bindings, ignored listing per whitelist May 1 16:12:47.686: INFO: namespace e2e-tests-kubectl-gkxft deletion completed in 22.457837907s • [SLOW TEST:32.538 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:12:47.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller May 1 16:12:47.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mdjmf' May 1 16:12:48.185: INFO: stderr: "" May 1 16:12:48.185: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 1 16:12:48.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mdjmf' May 1 16:12:48.314: INFO: stderr: "" May 1 16:12:48.314: INFO: stdout: "update-demo-nautilus-b6smw update-demo-nautilus-vktng " May 1 16:12:48.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b6smw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mdjmf' May 1 16:12:48.455: INFO: stderr: "" May 1 16:12:48.455: INFO: stdout: "" May 1 16:12:48.455: INFO: update-demo-nautilus-b6smw is created but not running May 1 16:12:53.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mdjmf' May 1 16:12:53.558: INFO: stderr: "" May 1 16:12:53.558: INFO: stdout: "update-demo-nautilus-b6smw update-demo-nautilus-vktng " May 1 16:12:53.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b6smw -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mdjmf' May 1 16:12:53.671: INFO: stderr: "" May 1 16:12:53.671: INFO: stdout: "" May 1 16:12:53.671: INFO: update-demo-nautilus-b6smw is created but not running May 1 16:12:58.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mdjmf' May 1 16:12:58.785: INFO: stderr: "" May 1 16:12:58.785: INFO: stdout: "update-demo-nautilus-b6smw update-demo-nautilus-vktng " May 1 16:12:58.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b6smw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mdjmf' May 1 16:12:58.888: INFO: stderr: "" May 1 16:12:58.888: INFO: stdout: "true" May 1 16:12:58.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b6smw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mdjmf' May 1 16:12:58.992: INFO: stderr: "" May 1 16:12:58.992: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 1 16:12:58.992: INFO: validating pod update-demo-nautilus-b6smw May 1 16:12:58.996: INFO: got data: { "image": "nautilus.jpg" } May 1 16:12:58.996: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 1 16:12:58.996: INFO: update-demo-nautilus-b6smw is verified up and running May 1 16:12:58.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vktng -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mdjmf' May 1 16:12:59.085: INFO: stderr: "" May 1 16:12:59.085: INFO: stdout: "true" May 1 16:12:59.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vktng -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mdjmf' May 1 16:12:59.179: INFO: stderr: "" May 1 16:12:59.179: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 1 16:12:59.179: INFO: validating pod update-demo-nautilus-vktng May 1 16:12:59.183: INFO: got data: { "image": "nautilus.jpg" } May 1 16:12:59.183: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 1 16:12:59.183: INFO: update-demo-nautilus-vktng is verified up and running STEP: scaling down the replication controller May 1 16:12:59.185: INFO: scanned /root for discovery docs: May 1 16:12:59.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-mdjmf' May 1 16:13:00.357: INFO: stderr: "" May 1 16:13:00.357: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 1 16:13:00.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mdjmf' May 1 16:13:00.461: INFO: stderr: "" May 1 16:13:00.461: INFO: stdout: "update-demo-nautilus-b6smw update-demo-nautilus-vktng " STEP: Replicas for name=update-demo: expected=1 actual=2 May 1 16:13:05.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mdjmf' May 1 16:13:05.667: INFO: stderr: "" May 1 16:13:05.667: INFO: stdout: "update-demo-nautilus-vktng " May 1 16:13:05.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vktng -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mdjmf' May 1 16:13:05.849: INFO: stderr: "" May 1 16:13:05.849: INFO: stdout: "true" May 1 16:13:05.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vktng -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mdjmf' May 1 16:13:05.946: INFO: stderr: "" May 1 16:13:05.946: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 1 16:13:05.946: INFO: validating pod update-demo-nautilus-vktng May 1 16:13:05.949: INFO: got data: { "image": "nautilus.jpg" } May 1 16:13:05.949: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 1 16:13:05.949: INFO: update-demo-nautilus-vktng is verified up and running STEP: scaling up the replication controller May 1 16:13:05.950: INFO: scanned /root for discovery docs: May 1 16:13:05.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-mdjmf' May 1 16:13:07.185: INFO: stderr: "" May 1 16:13:07.185: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 1 16:13:07.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mdjmf' May 1 16:13:07.323: INFO: stderr: "" May 1 16:13:07.323: INFO: stdout: "update-demo-nautilus-c9dfb update-demo-nautilus-vktng " May 1 16:13:07.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c9dfb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mdjmf' May 1 16:13:07.460: INFO: stderr: "" May 1 16:13:07.460: INFO: stdout: "" May 1 16:13:07.460: INFO: update-demo-nautilus-c9dfb is created but not running May 1 16:13:12.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mdjmf' May 1 16:13:12.567: INFO: stderr: "" May 1 16:13:12.567: INFO: stdout: "update-demo-nautilus-c9dfb update-demo-nautilus-vktng " May 1 16:13:12.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c9dfb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mdjmf' May 1 16:13:12.666: INFO: stderr: "" May 1 16:13:12.666: INFO: stdout: "true" May 1 16:13:12.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c9dfb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mdjmf' May 1 16:13:12.780: INFO: stderr: "" May 1 16:13:12.780: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 1 16:13:12.780: INFO: validating pod update-demo-nautilus-c9dfb May 1 16:13:12.785: INFO: got data: { "image": "nautilus.jpg" } May 1 16:13:12.785: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 1 16:13:12.785: INFO: update-demo-nautilus-c9dfb is verified up and running May 1 16:13:12.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vktng -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mdjmf' May 1 16:13:12.883: INFO: stderr: "" May 1 16:13:12.883: INFO: stdout: "true" May 1 16:13:12.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vktng -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mdjmf' May 1 16:13:12.989: INFO: stderr: "" May 1 16:13:12.989: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 1 16:13:12.989: INFO: validating pod update-demo-nautilus-vktng May 1 16:13:12.993: INFO: got data: { "image": "nautilus.jpg" } May 1 16:13:12.993: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 1 16:13:12.993: INFO: update-demo-nautilus-vktng is verified up and running STEP: using delete to clean up resources May 1 16:13:12.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-mdjmf' May 1 16:13:13.105: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 1 16:13:13.105: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 1 16:13:13.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-mdjmf' May 1 16:13:13.207: INFO: stderr: "No resources found.\n" May 1 16:13:13.207: INFO: stdout: "" May 1 16:13:13.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-mdjmf -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 1 16:13:13.336: INFO: stderr: "" May 1 16:13:13.336: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:13:13.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-mdjmf" for this suite. May 1 16:13:37.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:13:37.685: INFO: namespace: e2e-tests-kubectl-mdjmf, resource: bindings, ignored listing per whitelist May 1 16:13:37.707: INFO: namespace e2e-tests-kubectl-mdjmf deletion completed in 24.36835763s • [SLOW TEST:50.021 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:13:37.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-b7179d30-8bc6-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume secrets May 1 16:13:38.057: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b717e9af-8bc6-11ea-acf7-0242ac110017" in namespace "e2e-tests-projected-9wrfh" to be "success or failure" May 1 16:13:38.259: INFO: Pod "pod-projected-secrets-b717e9af-8bc6-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 201.584753ms May 1 16:13:40.282: INFO: Pod "pod-projected-secrets-b717e9af-8bc6-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225502025s May 1 16:13:42.287: INFO: Pod "pod-projected-secrets-b717e9af-8bc6-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.229927375s May 1 16:13:44.291: INFO: Pod "pod-projected-secrets-b717e9af-8bc6-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.234323042s STEP: Saw pod success May 1 16:13:44.291: INFO: Pod "pod-projected-secrets-b717e9af-8bc6-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:13:44.294: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-b717e9af-8bc6-11ea-acf7-0242ac110017 container projected-secret-volume-test: STEP: delete the pod May 1 16:13:44.673: INFO: Waiting for pod pod-projected-secrets-b717e9af-8bc6-11ea-acf7-0242ac110017 to disappear May 1 16:13:44.706: INFO: Pod pod-projected-secrets-b717e9af-8bc6-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:13:44.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9wrfh" for this suite. May 1 16:13:52.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:13:52.997: INFO: namespace: e2e-tests-projected-9wrfh, resource: bindings, ignored listing per whitelist May 1 16:13:53.009: INFO: namespace e2e-tests-projected-9wrfh deletion completed in 8.299455698s • [SLOW TEST:15.301 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:13:53.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 16:13:53.516: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:14:01.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-psjgq" for this suite. 
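Looking back at the projected-secret "mappings" case earlier in this block: the mapping is a projected volume whose secret source remaps a key to a new path. An illustrative manifest, assuming made-up pod and secret names (the secret must already exist in the namespace):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-mapping-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: demo-secret              # pre-existing secret
          items:
          - key: data-1
            path: new-path-data-1        # the key is exposed under this relative path
EOF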
May 1 16:14:43.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:14:43.660: INFO: namespace: e2e-tests-pods-psjgq, resource: bindings, ignored listing per whitelist May 1 16:14:43.668: INFO: namespace e2e-tests-pods-psjgq deletion completed in 42.103608423s • [SLOW TEST:50.659 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:14:43.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0501 16:14:53.960593 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 1 16:14:53.960: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:14:53.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-s82dp" for this suite. 
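The garbage-collector behaviour checked above (dependent pods deleted when the rc is not orphaned) is what a plain cascading delete does. With the v1.13-era kubectl used in this run, orphaning is the --cascade=false variant (the rc name below is illustrative):

# cascading delete: the garbage collector removes the rc's pods
kubectl delete rc example-rc

# orphaning delete: the rc is removed but its pods keep running
kubectl delete rc example-rc --cascade=false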
May 1 16:15:02.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:15:02.065: INFO: namespace: e2e-tests-gc-s82dp, resource: bindings, ignored listing per whitelist May 1 16:15:02.097: INFO: namespace e2e-tests-gc-s82dp deletion completed in 8.133910815s • [SLOW TEST:18.429 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:15:02.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 1 16:15:02.316: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e956bdfc-8bc6-11ea-acf7-0242ac110017" in namespace "e2e-tests-downward-api-lhn9j" to be "success or failure" May 1 16:15:02.319: INFO: Pod "downwardapi-volume-e956bdfc-8bc6-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.86614ms May 1 16:15:04.836: INFO: Pod "downwardapi-volume-e956bdfc-8bc6-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.520199991s May 1 16:15:06.840: INFO: Pod "downwardapi-volume-e956bdfc-8bc6-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.524051673s May 1 16:15:08.845: INFO: Pod "downwardapi-volume-e956bdfc-8bc6-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.528438416s STEP: Saw pod success May 1 16:15:08.845: INFO: Pod "downwardapi-volume-e956bdfc-8bc6-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:15:08.848: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-e956bdfc-8bc6-11ea-acf7-0242ac110017 container client-container: STEP: delete the pod May 1 16:15:09.103: INFO: Waiting for pod downwardapi-volume-e956bdfc-8bc6-11ea-acf7-0242ac110017 to disappear May 1 16:15:09.121: INFO: Pod downwardapi-volume-e956bdfc-8bc6-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:15:09.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-lhn9j" for this suite. 
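The downward API volume spec exercised above mounts a file whose contents are resolved from the container's own resource fields; with limits.cpu set, the file carries that limit (and, per the earlier spec in this run, falls back to node allocatable when no limit is set). A sketch of the volume wiring, shown by just building and printing the pod object; names, image, and the 500m limit are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("500m"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							// resourceFieldRef is what the downward API volume plugin resolves
							// into the mounted file's contents.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}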
May 1 16:15:17.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:15:17.862: INFO: namespace: e2e-tests-downward-api-lhn9j, resource: bindings, ignored listing per whitelist May 1 16:15:17.869: INFO: namespace e2e-tests-downward-api-lhn9j deletion completed in 8.74500172s • [SLOW TEST:15.772 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:15:17.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-f331e784-8bc6-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume configMaps May 1 16:15:19.126: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f339e252-8bc6-11ea-acf7-0242ac110017" in namespace "e2e-tests-projected-khfgq" to be "success or failure" May 1 16:15:19.171: INFO: Pod "pod-projected-configmaps-f339e252-8bc6-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 44.566661ms May 1 16:15:21.174: INFO: Pod "pod-projected-configmaps-f339e252-8bc6-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048257282s May 1 16:15:23.178: INFO: Pod "pod-projected-configmaps-f339e252-8bc6-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052186107s May 1 16:15:25.181: INFO: Pod "pod-projected-configmaps-f339e252-8bc6-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055247803s May 1 16:15:27.188: INFO: Pod "pod-projected-configmaps-f339e252-8bc6-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.062155979s STEP: Saw pod success May 1 16:15:27.188: INFO: Pod "pod-projected-configmaps-f339e252-8bc6-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:15:27.191: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-f339e252-8bc6-11ea-acf7-0242ac110017 container projected-configmap-volume-test: STEP: delete the pod May 1 16:15:27.412: INFO: Waiting for pod pod-projected-configmaps-f339e252-8bc6-11ea-acf7-0242ac110017 to disappear May 1 16:15:27.501: INFO: Pod pod-projected-configmaps-f339e252-8bc6-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:15:27.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-khfgq" for this suite. May 1 16:15:33.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:15:34.141: INFO: namespace: e2e-tests-projected-khfgq, resource: bindings, ignored listing per whitelist May 1 16:15:34.166: INFO: namespace e2e-tests-projected-khfgq deletion completed in 6.660498198s • [SLOW TEST:16.296 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:15:34.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
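The pod this spec creates next attaches a postStart httpGet hook pointing back at the handler container set up above; the kubelet fires the hook immediately after the container starts, and the handler recording the request is what "check poststart hook" verifies. A sketch of the hook wiring under stated assumptions: host, port, path, and image are illustrative, and the hook type is the core/v1 Handler of this suite's API vintage (renamed LifecycleHandler in much later releases):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "k8s.gcr.io/pause:3.1", // illustrative; the suite uses its own test image
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=poststart",
							Host: "10.244.1.1", // hypothetical: IP of the handler pod created above
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}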
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 1 16:15:49.540: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 1 16:15:49.966: INFO: Pod pod-with-poststart-http-hook still exists May 1 16:15:51.966: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 1 16:15:52.249: INFO: Pod pod-with-poststart-http-hook still exists May 1 16:15:53.966: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 1 16:15:53.998: INFO: Pod pod-with-poststart-http-hook still exists May 1 16:15:55.966: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 1 16:15:55.970: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:15:55.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-mllq9" for this suite. May 1 16:16:22.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:16:22.599: INFO: namespace: e2e-tests-container-lifecycle-hook-mllq9, resource: bindings, ignored listing per whitelist May 1 16:16:22.899: INFO: namespace e2e-tests-container-lifecycle-hook-mllq9 deletion completed in 26.924748122s • [SLOW TEST:48.733 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:16:22.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:16:23.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-87fs9" for this suite. 
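The Services spec above is a simple assertion about the cluster's built-in "kubernetes" Service in the default namespace: it must exist and expose a secure (https, 443) port fronting the API server, which is why the spec needs no pods of its own. An equivalent lookup, again assuming the era's client-go signatures:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	svc, err := cs.CoreV1().Services("default").Get("kubernetes", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range svc.Spec.Ports {
		// Expect an https port on 443 backing the API server.
		fmt.Printf("port %s: %d/%s\n", p.Name, p.Port, p.Protocol)
	}
}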
May 1 16:16:30.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:16:30.536: INFO: namespace: e2e-tests-services-87fs9, resource: bindings, ignored listing per whitelist May 1 16:16:30.714: INFO: namespace e2e-tests-services-87fs9 deletion completed in 6.853752649s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:7.815 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:16:30.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-pjcx STEP: Creating a pod to test atomic-volume-subpath May 1 16:16:31.803: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-pjcx" in namespace "e2e-tests-subpath-g5qj5" to be "success or failure" May 1 16:16:31.830: INFO: Pod "pod-subpath-test-configmap-pjcx": Phase="Pending", Reason="", readiness=false. Elapsed: 26.385937ms May 1 16:16:33.838: INFO: Pod "pod-subpath-test-configmap-pjcx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034342834s May 1 16:16:35.866: INFO: Pod "pod-subpath-test-configmap-pjcx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063164129s May 1 16:16:37.869: INFO: Pod "pod-subpath-test-configmap-pjcx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066047271s May 1 16:16:39.903: INFO: Pod "pod-subpath-test-configmap-pjcx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.099417434s May 1 16:16:42.070: INFO: Pod "pod-subpath-test-configmap-pjcx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.266915836s May 1 16:16:44.381: INFO: Pod "pod-subpath-test-configmap-pjcx": Phase="Running", Reason="", readiness=false. Elapsed: 12.578121282s May 1 16:16:46.386: INFO: Pod "pod-subpath-test-configmap-pjcx": Phase="Running", Reason="", readiness=false. Elapsed: 14.582373802s May 1 16:16:48.420: INFO: Pod "pod-subpath-test-configmap-pjcx": Phase="Running", Reason="", readiness=false. Elapsed: 16.61672738s May 1 16:16:50.477: INFO: Pod "pod-subpath-test-configmap-pjcx": Phase="Running", Reason="", readiness=false. Elapsed: 18.674215599s May 1 16:16:53.262: INFO: Pod "pod-subpath-test-configmap-pjcx": Phase="Running", Reason="", readiness=false. Elapsed: 21.458513366s May 1 16:16:55.266: INFO: Pod "pod-subpath-test-configmap-pjcx": Phase="Running", Reason="", readiness=false. 
Elapsed: 23.462875017s May 1 16:16:57.270: INFO: Pod "pod-subpath-test-configmap-pjcx": Phase="Running", Reason="", readiness=false. Elapsed: 25.466339116s May 1 16:16:59.274: INFO: Pod "pod-subpath-test-configmap-pjcx": Phase="Running", Reason="", readiness=false. Elapsed: 27.470539483s May 1 16:17:01.278: INFO: Pod "pod-subpath-test-configmap-pjcx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.474768446s STEP: Saw pod success May 1 16:17:01.278: INFO: Pod "pod-subpath-test-configmap-pjcx" satisfied condition "success or failure" May 1 16:17:01.281: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-pjcx container test-container-subpath-configmap-pjcx: STEP: delete the pod May 1 16:17:02.820: INFO: Waiting for pod pod-subpath-test-configmap-pjcx to disappear May 1 16:17:03.500: INFO: Pod pod-subpath-test-configmap-pjcx no longer exists STEP: Deleting pod pod-subpath-test-configmap-pjcx May 1 16:17:03.500: INFO: Deleting pod "pod-subpath-test-configmap-pjcx" in namespace "e2e-tests-subpath-g5qj5" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:17:03.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-g5qj5" for this suite. May 1 16:17:14.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:17:14.166: INFO: namespace: e2e-tests-subpath-g5qj5, resource: bindings, ignored listing per whitelist May 1 16:17:14.321: INFO: namespace e2e-tests-subpath-g5qj5 deletion completed in 10.563621214s • [SLOW TEST:43.606 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:17:14.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 1 16:17:18.804: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-3831ef6f-8bc7-11ea-acf7-0242ac110017,GenerateName:,Namespace:e2e-tests-events-zrjs4,SelfLink:/api/v1/namespaces/e2e-tests-events-zrjs4/pods/send-events-3831ef6f-8bc7-11ea-acf7-0242ac110017,UID:384dcaa4-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8202922,Generation:0,CreationTimestamp:2020-05-01 16:17:14 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 577095101,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-b9lvz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-b9lvz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-b9lvz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002750c60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002750c80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:17:14 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:17:17 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:17:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:17:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.71,StartTime:2020-05-01 16:17:14 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-01 16:17:17 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://cc0589014e04c0dd8151f3107259f23243effb8c8c3ffc8abe10f9bd3e0d989e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod May 1 16:17:20.809: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 1 16:17:22.813: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:17:22.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-zrjs4" for this suite. 
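The Events spec above waits for two distinct event streams about the same pod: one emitted by the scheduler when it binds the pod, and one emitted by the kubelet on the node that runs it. Both lookups are ordinary event lists narrowed by field selectors; a sketch, with namespace and pod name as placeholders rather than values from this run, and the era's client-go signatures:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns := "default"          // placeholder namespace
	podName := "send-events" // placeholder pod name

	// Scheduler events for the pod: filtered on the involved object and the reporting source.
	schedulerEvents, err := cs.CoreV1().Events(ns).List(metav1.ListOptions{
		FieldSelector: "involvedObject.kind=Pod,involvedObject.name=" + podName +
			",source=default-scheduler",
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("scheduler events: %d\n", len(schedulerEvents.Items))

	// Kubelet events report source=kubelet; the node name appears as the source host.
	kubeletEvents, err := cs.CoreV1().Events(ns).List(metav1.ListOptions{
		FieldSelector: "involvedObject.kind=Pod,involvedObject.name=" + podName + ",source=kubelet",
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("kubelet events: %d\n", len(kubeletEvents.Items))
}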
May 1 16:18:02.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:18:03.694: INFO: namespace: e2e-tests-events-zrjs4, resource: bindings, ignored listing per whitelist May 1 16:18:03.715: INFO: namespace e2e-tests-events-zrjs4 deletion completed in 40.868531269s • [SLOW TEST:49.393 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:18:03.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-55d4e70b-8bc7-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume secrets May 1 16:18:04.401: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-55dfe8af-8bc7-11ea-acf7-0242ac110017" in namespace "e2e-tests-projected-fzqtg" to be "success or failure" May 1 16:18:04.463: INFO: Pod "pod-projected-secrets-55dfe8af-8bc7-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 62.130133ms May 1 16:18:06.526: INFO: Pod "pod-projected-secrets-55dfe8af-8bc7-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125019294s May 1 16:18:08.530: INFO: Pod "pod-projected-secrets-55dfe8af-8bc7-11ea-acf7-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.128980189s May 1 16:18:10.535: INFO: Pod "pod-projected-secrets-55dfe8af-8bc7-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.133901687s STEP: Saw pod success May 1 16:18:10.535: INFO: Pod "pod-projected-secrets-55dfe8af-8bc7-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:18:10.539: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-55dfe8af-8bc7-11ea-acf7-0242ac110017 container projected-secret-volume-test: STEP: delete the pod May 1 16:18:10.560: INFO: Waiting for pod pod-projected-secrets-55dfe8af-8bc7-11ea-acf7-0242ac110017 to disappear May 1 16:18:10.565: INFO: Pod pod-projected-secrets-55dfe8af-8bc7-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:18:10.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-fzqtg" for this suite. 
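The projected-secret spec that just finished mounts a Secret through a projected volume and checks that the files land with the requested defaultMode. The relevant wiring, shown by constructing and printing the objects; names, payload, and the 0400 mode are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	secret := corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}

	mode := int32(0400) // illustrative defaultMode; the projected files inherit it
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/projected && cat /etc/projected/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: secret.Name},
							},
						}},
					},
				},
			}},
		},
	}

	for _, obj := range []interface{}{secret, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}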
May 1 16:18:16.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:18:16.640: INFO: namespace: e2e-tests-projected-fzqtg, resource: bindings, ignored listing per whitelist May 1 16:18:16.701: INFO: namespace e2e-tests-projected-fzqtg deletion completed in 6.13351028s • [SLOW TEST:12.986 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:18:16.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 1 16:18:24.908: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 16:18:24.928: INFO: Pod pod-with-poststart-exec-hook still exists May 1 16:18:26.928: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 16:18:26.932: INFO: Pod pod-with-poststart-exec-hook still exists May 1 16:18:28.928: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 16:18:28.933: INFO: Pod pod-with-poststart-exec-hook still exists May 1 16:18:30.928: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 16:18:30.932: INFO: Pod pod-with-poststart-exec-hook still exists May 1 16:18:32.928: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 16:18:32.951: INFO: Pod pod-with-poststart-exec-hook still exists May 1 16:18:34.928: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 16:18:34.933: INFO: Pod pod-with-poststart-exec-hook still exists May 1 16:18:36.928: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 16:18:36.932: INFO: Pod pod-with-poststart-exec-hook still exists May 1 16:18:38.928: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 16:18:38.932: INFO: Pod pod-with-poststart-exec-hook still exists May 1 16:18:40.928: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 16:18:40.932: INFO: Pod pod-with-poststart-exec-hook still exists May 1 16:18:42.928: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 16:18:42.933: INFO: Pod pod-with-poststart-exec-hook still exists May 1 16:18:44.929: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 
16:18:44.932: INFO: Pod pod-with-poststart-exec-hook still exists May 1 16:18:46.928: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 16:18:46.933: INFO: Pod pod-with-poststart-exec-hook still exists May 1 16:18:48.928: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 16:18:48.933: INFO: Pod pod-with-poststart-exec-hook still exists May 1 16:18:50.928: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 16:18:50.933: INFO: Pod pod-with-poststart-exec-hook still exists May 1 16:18:52.928: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 16:18:52.932: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:18:52.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-4rxmm" for this suite. May 1 16:19:14.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:19:14.971: INFO: namespace: e2e-tests-container-lifecycle-hook-4rxmm, resource: bindings, ignored listing per whitelist May 1 16:19:15.041: INFO: namespace e2e-tests-container-lifecycle-hook-4rxmm deletion completed in 22.105357516s • [SLOW TEST:58.340 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:19:15.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 16:19:15.178: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 6.492666ms) May 1 16:19:15.182: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.731984ms) May 1 16:19:15.185: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.378419ms) May 1 16:19:15.189: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.911581ms) May 1 16:19:15.192: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.969797ms) May 1 16:19:15.196: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.521081ms) May 1 16:19:15.199: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.736778ms) May 1 16:19:15.203: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.473558ms) May 1 16:19:15.206: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.534375ms) May 1 16:19:15.210: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.873588ms) May 1 16:19:15.214: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.503357ms) May 1 16:19:15.217: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.184803ms) May 1 16:19:15.240: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 22.473917ms) May 1 16:19:15.244: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.200668ms) May 1 16:19:15.248: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.812154ms) May 1 16:19:15.251: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.165815ms) May 1 16:19:15.254: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.148974ms) May 1 16:19:15.257: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.13784ms) May 1 16:19:15.260: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.615206ms) May 1 16:19:15.263: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.010617ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:19:15.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-vjrqk" for this suite. May 1 16:19:21.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:19:21.352: INFO: namespace: e2e-tests-proxy-vjrqk, resource: bindings, ignored listing per whitelist May 1 16:19:21.363: INFO: namespace e2e-tests-proxy-vjrqk deletion completed in 6.09701575s • [SLOW TEST:6.322 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:19:21.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller May 1 16:19:21.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-29s64' May 1 16:19:24.183: INFO: stderr: "" May 1 16:19:24.183: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 1 16:19:24.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-29s64' May 1 16:19:24.312: INFO: stderr: "" May 1 16:19:24.312: INFO: stdout: "update-demo-nautilus-kknjb update-demo-nautilus-tltvf " May 1 16:19:24.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kknjb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-29s64' May 1 16:19:24.419: INFO: stderr: "" May 1 16:19:24.419: INFO: stdout: "" May 1 16:19:24.419: INFO: update-demo-nautilus-kknjb is created but not running May 1 16:19:29.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-29s64' May 1 16:19:29.528: INFO: stderr: "" May 1 16:19:29.529: INFO: stdout: "update-demo-nautilus-kknjb update-demo-nautilus-tltvf " May 1 16:19:29.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kknjb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-29s64' May 1 16:19:29.647: INFO: stderr: "" May 1 16:19:29.647: INFO: stdout: "true" May 1 16:19:29.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kknjb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-29s64' May 1 16:19:29.744: INFO: stderr: "" May 1 16:19:29.744: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 1 16:19:29.744: INFO: validating pod update-demo-nautilus-kknjb May 1 16:19:29.748: INFO: got data: { "image": "nautilus.jpg" } May 1 16:19:29.748: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 1 16:19:29.748: INFO: update-demo-nautilus-kknjb is verified up and running May 1 16:19:29.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tltvf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-29s64' May 1 16:19:29.917: INFO: stderr: "" May 1 16:19:29.917: INFO: stdout: "true" May 1 16:19:29.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tltvf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-29s64' May 1 16:19:30.103: INFO: stderr: "" May 1 16:19:30.103: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 1 16:19:30.103: INFO: validating pod update-demo-nautilus-tltvf May 1 16:19:30.108: INFO: got data: { "image": "nautilus.jpg" } May 1 16:19:30.108: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 1 16:19:30.108: INFO: update-demo-nautilus-tltvf is verified up and running STEP: rolling-update to new replication controller May 1 16:19:30.110: INFO: scanned /root for discovery docs: May 1 16:19:30.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-29s64' May 1 16:19:53.747: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 1 16:19:53.747: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. May 1 16:19:53.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-29s64' May 1 16:19:53.926: INFO: stderr: "" May 1 16:19:53.926: INFO: stdout: "update-demo-kitten-ng6z5 update-demo-kitten-sfkwz " May 1 16:19:53.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ng6z5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-29s64' May 1 16:19:54.042: INFO: stderr: "" May 1 16:19:54.042: INFO: stdout: "true" May 1 16:19:54.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ng6z5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-29s64' May 1 16:19:54.145: INFO: stderr: "" May 1 16:19:54.145: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 1 16:19:54.145: INFO: validating pod update-demo-kitten-ng6z5 May 1 16:19:54.180: INFO: got data: { "image": "kitten.jpg" } May 1 16:19:54.180: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 1 16:19:54.180: INFO: update-demo-kitten-ng6z5 is verified up and running May 1 16:19:54.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-sfkwz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-29s64' May 1 16:19:54.294: INFO: stderr: "" May 1 16:19:54.294: INFO: stdout: "true" May 1 16:19:54.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-sfkwz -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-29s64' May 1 16:19:54.402: INFO: stderr: "" May 1 16:19:54.402: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 1 16:19:54.402: INFO: validating pod update-demo-kitten-sfkwz May 1 16:19:54.407: INFO: got data: { "image": "kitten.jpg" } May 1 16:19:54.407: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 1 16:19:54.407: INFO: update-demo-kitten-sfkwz is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:19:54.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-29s64" for this suite. May 1 16:20:18.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:20:18.498: INFO: namespace: e2e-tests-kubectl-29s64, resource: bindings, ignored listing per whitelist May 1 16:20:18.540: INFO: namespace e2e-tests-kubectl-29s64 deletion completed in 24.129370268s • [SLOW TEST:57.176 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:20:18.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 16:20:18.622: INFO: Creating deployment "nginx-deployment" May 1 16:20:18.635: INFO: Waiting for observed generation 1 May 1 16:20:20.673: INFO: Waiting for all required pods to come up May 1 16:20:20.677: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running May 1 16:20:30.787: INFO: Waiting for deployment "nginx-deployment" to complete May 1 16:20:30.809: INFO: Updating deployment "nginx-deployment" with a non-existent image May 1 16:20:30.816: INFO: Updating deployment nginx-deployment May 1 16:20:30.816: INFO: Waiting for observed generation 2 May 1 16:20:33.022: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 1 16:20:33.845: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 1 16:20:34.448: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 1 16:20:34.701: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 1 
16:20:34.701: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 1 16:20:34.703: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 1 16:20:34.707: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas May 1 16:20:34.707: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 May 1 16:20:34.712: INFO: Updating deployment nginx-deployment May 1 16:20:34.712: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas May 1 16:20:34.889: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 1 16:20:34.954: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 1 16:20:35.194: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-dpjsd/deployments/nginx-deployment,UID:a5e52364-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203712,Generation:3,CreationTimestamp:2020-05-01 16:20:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-05-01 16:20:32 
+0000 UTC 2020-05-01 16:20:18 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-05-01 16:20:34 +0000 UTC 2020-05-01 16:20:34 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} May 1 16:20:35.322: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-dpjsd/replicasets/nginx-deployment-5c98f8fb5,UID:ad29d910-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203744,Generation:3,CreationTimestamp:2020-05-01 16:20:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment a5e52364-8bc7-11ea-99e8-0242ac110002 0xc001b20967 0xc001b20968}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 1 16:20:35.322: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 1 16:20:35.322: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-dpjsd/replicasets/nginx-deployment-85ddf47c5d,UID:a5e82772-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203754,Generation:3,CreationTimestamp:2020-05-01 16:20:18 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment a5e52364-8bc7-11ea-99e8-0242ac110002 0xc001b20a37 0xc001b20a38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 1 16:20:35.383: INFO: Pod "nginx-deployment-5c98f8fb5-65bwq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-65bwq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-5c98f8fb5-65bwq,UID:ad33e590-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203648,Generation:0,CreationTimestamp:2020-05-01 16:20:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ad29d910-8bc7-11ea-99e8-0242ac110002 0xc001dbb9c7 0xc001dbb9c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001dbba80} {node.kubernetes.io/unreachable Exists NoExecute 0xc001dbbaa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:30 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-01 16:20:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.383: INFO: Pod "nginx-deployment-5c98f8fb5-67cq8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-67cq8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-5c98f8fb5-67cq8,UID:ad94a854-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203677,Generation:0,CreationTimestamp:2020-05-01 16:20:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ad29d910-8bc7-11ea-99e8-0242ac110002 0xc001dbbb60 0xc001dbbb61}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001dbbc10} {node.kubernetes.io/unreachable Exists NoExecute 0xc001dbbc30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:31 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-01 16:20:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.383: INFO: Pod "nginx-deployment-5c98f8fb5-dh8nz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dh8nz,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-5c98f8fb5-dh8nz,UID:af9b64d6-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203723,Generation:0,CreationTimestamp:2020-05-01 16:20:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ad29d910-8bc7-11ea-99e8-0242ac110002 0xc001dbbe40 0xc001dbbe41}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001910000} {node.kubernetes.io/unreachable Exists NoExecute 0xc001910020}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.384: INFO: Pod "nginx-deployment-5c98f8fb5-fql9p" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-fql9p,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-5c98f8fb5-fql9p,UID:afab7c30-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203742,Generation:0,CreationTimestamp:2020-05-01 16:20:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ad29d910-8bc7-11ea-99e8-0242ac110002 0xc001910097 0xc001910098}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001910110} {node.kubernetes.io/unreachable Exists NoExecute 0xc001910130}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:35 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.384: INFO: Pod "nginx-deployment-5c98f8fb5-g9qct" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-g9qct,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-5c98f8fb5-g9qct,UID:ad89ec39-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203676,Generation:0,CreationTimestamp:2020-05-01 16:20:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ad29d910-8bc7-11ea-99e8-0242ac110002 0xc0019101a7 0xc0019101a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001910220} {node.kubernetes.io/unreachable Exists NoExecute 0xc001910240}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:31 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-01 16:20:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.384: INFO: Pod "nginx-deployment-5c98f8fb5-hp2m4" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hp2m4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-5c98f8fb5-hp2m4,UID:ad4367f7-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203655,Generation:0,CreationTimestamp:2020-05-01 16:20:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ad29d910-8bc7-11ea-99e8-0242ac110002 0xc001910300 0xc001910301}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001910380} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019103a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:31 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-01 16:20:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.384: INFO: Pod "nginx-deployment-5c98f8fb5-hvr8p" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hvr8p,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-5c98f8fb5-hvr8p,UID:afa1bd3b-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203738,Generation:0,CreationTimestamp:2020-05-01 16:20:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ad29d910-8bc7-11ea-99e8-0242ac110002 0xc001910460 0xc001910461}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019104e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001910500}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.385: INFO: Pod "nginx-deployment-5c98f8fb5-j5s5x" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-j5s5x,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-5c98f8fb5-j5s5x,UID:afa1bf5a-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203739,Generation:0,CreationTimestamp:2020-05-01 16:20:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ad29d910-8bc7-11ea-99e8-0242ac110002 0xc001910577 0xc001910578}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019105f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001910610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.385: INFO: Pod "nginx-deployment-5c98f8fb5-lvhsb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-lvhsb,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-5c98f8fb5-lvhsb,UID:ad436b10-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203651,Generation:0,CreationTimestamp:2020-05-01 16:20:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ad29d910-8bc7-11ea-99e8-0242ac110002 0xc001910687 0xc001910688}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001910700} {node.kubernetes.io/unreachable Exists NoExecute 0xc001910720}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-05-01 16:20:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:31 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-01 16:20:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.385: INFO: Pod "nginx-deployment-5c98f8fb5-m69k9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-m69k9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-5c98f8fb5-m69k9,UID:afa1ca1f-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203740,Generation:0,CreationTimestamp:2020-05-01 16:20:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ad29d910-8bc7-11ea-99e8-0242ac110002 0xc0019107e0 0xc0019107e1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001910860} {node.kubernetes.io/unreachable Exists NoExecute 0xc001910880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.385: INFO: Pod "nginx-deployment-5c98f8fb5-qd94s" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qd94s,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-5c98f8fb5-qd94s,UID:afa1d1ac-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203741,Generation:0,CreationTimestamp:2020-05-01 16:20:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ad29d910-8bc7-11ea-99e8-0242ac110002 0xc0019108f7 0xc0019108f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001910970} {node.kubernetes.io/unreachable Exists NoExecute 0xc001910990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.386: INFO: Pod "nginx-deployment-5c98f8fb5-vqvtx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vqvtx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-5c98f8fb5-vqvtx,UID:af9b1084-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203721,Generation:0,CreationTimestamp:2020-05-01 16:20:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ad29d910-8bc7-11ea-99e8-0242ac110002 0xc001910a07 0xc001910a08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] 
{map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001910a80} {node.kubernetes.io/unreachable Exists NoExecute 0xc001910aa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.386: INFO: Pod "nginx-deployment-5c98f8fb5-zsnks" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zsnks,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-5c98f8fb5-zsnks,UID:af975679-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203753,Generation:0,CreationTimestamp:2020-05-01 16:20:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ad29d910-8bc7-11ea-99e8-0242ac110002 0xc001910b17 0xc001910b18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001910b90} {node.kubernetes.io/unreachable Exists 
NoExecute 0xc001910bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-01 16:20:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.386: INFO: Pod "nginx-deployment-85ddf47c5d-677hc" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-677hc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-85ddf47c5d-677hc,UID:a5fa26dc-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203622,Generation:0,CreationTimestamp:2020-05-01 16:20:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a5e82772-8bc7-11ea-99e8-0242ac110002 0xc001910c70 0xc001910c71}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001910ce0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001910d00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:19 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:29 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:18 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.79,StartTime:2020-05-01 16:20:19 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-01 16:20:29 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0e78d9ad15fb2fa5939b9000d5efdedc94cb2251dff07054d32d8833df5c0aab}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.386: INFO: Pod "nginx-deployment-85ddf47c5d-6l77z" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6l77z,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-85ddf47c5d-6l77z,UID:af96c17a-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203746,Generation:0,CreationTimestamp:2020-05-01 16:20:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a5e82772-8bc7-11ea-99e8-0242ac110002 0xc001910dd7 0xc001910dd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001910ea0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001910f00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-01 16:20:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 
docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.387: INFO: Pod "nginx-deployment-85ddf47c5d-7fgjq" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7fgjq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-85ddf47c5d-7fgjq,UID:a5f4f548-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203579,Generation:0,CreationTimestamp:2020-05-01 16:20:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a5e82772-8bc7-11ea-99e8-0242ac110002 0xc001910fc7 0xc001910fc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019110a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019110c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:18 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.76,StartTime:2020-05-01 16:20:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-01 16:20:25 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://19264582fffd00ffe33289b8bf15c0fd6435ef3a75daf183584d0b0055d33880}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.387: INFO: Pod "nginx-deployment-85ddf47c5d-b9tx9" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-b9tx9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-85ddf47c5d-b9tx9,UID:af9bd02c-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203732,Generation:0,CreationTimestamp:2020-05-01 16:20:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a5e82772-8bc7-11ea-99e8-0242ac110002 0xc001911297 0xc001911298}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001911310} {node.kubernetes.io/unreachable Exists NoExecute 0xc001911330}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.387: INFO: Pod "nginx-deployment-85ddf47c5d-bjc8v" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bjc8v,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-85ddf47c5d-bjc8v,UID:af9bc07f-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203728,Generation:0,CreationTimestamp:2020-05-01 16:20:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a5e82772-8bc7-11ea-99e8-0242ac110002 0xc0019113a7 0xc0019113a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001911ee0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001911f80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.387: INFO: Pod "nginx-deployment-85ddf47c5d-f5hrv" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-f5hrv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-85ddf47c5d-f5hrv,UID:a5f4ed29-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203618,Generation:0,CreationTimestamp:2020-05-01 16:20:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a5e82772-8bc7-11ea-99e8-0242ac110002 0xc001911ff7 0xc001911ff8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000e8f270} {node.kubernetes.io/unreachable Exists NoExecute 0xc000e8f290}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:18 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.80,StartTime:2020-05-01 16:20:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-01 16:20:29 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://688963597ad9d1862bf2c1c1ad445e604c4a509257a8cc5599f5f2c23cbce5cb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.387: INFO: Pod "nginx-deployment-85ddf47c5d-ff5dz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ff5dz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-85ddf47c5d-ff5dz,UID:af976b81-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203714,Generation:0,CreationTimestamp:2020-05-01 16:20:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a5e82772-8bc7-11ea-99e8-0242ac110002 0xc000e8f357 0xc000e8f358}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000e8f410} {node.kubernetes.io/unreachable Exists NoExecute 0xc000e8f520}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.387: INFO: Pod "nginx-deployment-85ddf47c5d-gbtmk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gbtmk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-85ddf47c5d-gbtmk,UID:a5f4fe21-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203600,Generation:0,CreationTimestamp:2020-05-01 16:20:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a5e82772-8bc7-11ea-99e8-0242ac110002 0xc000e8f597 0xc000e8f598}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000e8f610} {node.kubernetes.io/unreachable Exists NoExecute 0xc000e8f740}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:18 +0000 UTC } {Ready True 
0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:18 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.85,StartTime:2020-05-01 16:20:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-01 16:20:26 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c5a8020a583b8f2f867e933c6465eb8f719d73dbc69869cae0191d8f41e0f7c6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.388: INFO: Pod "nginx-deployment-85ddf47c5d-h2l4b" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-h2l4b,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-85ddf47c5d-h2l4b,UID:a5f2db2a-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203584,Generation:0,CreationTimestamp:2020-05-01 16:20:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a5e82772-8bc7-11ea-99e8-0242ac110002 0xc000e8f887 0xc000e8f888}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000e8fab0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000e8fad0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:18 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.83,StartTime:2020-05-01 16:20:18 +0000 UTC,ContainerStatuses:[{nginx {nil 
ContainerStateRunning{StartedAt:2020-05-01 16:20:25 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://8f3ac68d9a4bb349f155041b0d7a32451a24c92da7646e2131ec919662ed38ef}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.388: INFO: Pod "nginx-deployment-85ddf47c5d-lg2m5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lg2m5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-85ddf47c5d-lg2m5,UID:af9bc947-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203730,Generation:0,CreationTimestamp:2020-05-01 16:20:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a5e82772-8bc7-11ea-99e8-0242ac110002 0xc000e8fd57 0xc000e8fd58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000e8fed0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000e8fef0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.388: INFO: Pod "nginx-deployment-85ddf47c5d-mlh74" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mlh74,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-85ddf47c5d-mlh74,UID:a5f37798-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203604,Generation:0,CreationTimestamp:2020-05-01 16:20:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a5e82772-8bc7-11ea-99e8-0242ac110002 0xc000e8ff67 0xc000e8ff68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00232e050} {node.kubernetes.io/unreachable Exists NoExecute 0xc00232e0e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:18 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.78,StartTime:2020-05-01 16:20:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-01 16:20:28 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://6ad3b787becc0baaef8ae0ffa9cc8f91dc64f3591755da5c6b6fdcb1dc3f5e82}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.388: INFO: Pod "nginx-deployment-85ddf47c5d-mr4qt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mr4qt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-85ddf47c5d-mr4qt,UID:af976f65-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203710,Generation:0,CreationTimestamp:2020-05-01 16:20:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a5e82772-8bc7-11ea-99e8-0242ac110002 0xc00232e267 0xc00232e268}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00232e2e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00232e300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.388: INFO: Pod "nginx-deployment-85ddf47c5d-mvrgs" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mvrgs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-85ddf47c5d-mvrgs,UID:a5f4f807-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203575,Generation:0,CreationTimestamp:2020-05-01 16:20:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a5e82772-8bc7-11ea-99e8-0242ac110002 0xc00232e3e7 0xc00232e3e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00232e460} {node.kubernetes.io/unreachable Exists NoExecute 0xc00232e480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:18 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.84,StartTime:2020-05-01 16:20:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-01 16:20:25 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://bb6e5a2db1547a9cbde63d7fed8890881d3c4259d66a70942e971269ddb01c8a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.388: INFO: Pod "nginx-deployment-85ddf47c5d-sbbpd" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sbbpd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-85ddf47c5d-sbbpd,UID:a5fa0ebc-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203610,Generation:0,CreationTimestamp:2020-05-01 16:20:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a5e82772-8bc7-11ea-99e8-0242ac110002 0xc00232e547 0xc00232e548}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00232e700} {node.kubernetes.io/unreachable Exists NoExecute 0xc00232e7d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:18 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.77,StartTime:2020-05-01 16:20:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-01 16:20:28 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://382c7f20432ebb275ad0e571651e1b0a86250a4d1cd09f66b34ddaa4df895056}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.388: INFO: Pod "nginx-deployment-85ddf47c5d-sk6dp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sk6dp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-85ddf47c5d-sk6dp,UID:af975fa0-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203709,Generation:0,CreationTimestamp:2020-05-01 16:20:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a5e82772-8bc7-11ea-99e8-0242ac110002 0xc00232eb17 0xc00232eb18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00232ed40} {node.kubernetes.io/unreachable Exists NoExecute 0xc00232f270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.388: INFO: Pod "nginx-deployment-85ddf47c5d-vx6tz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vx6tz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-85ddf47c5d-vx6tz,UID:af96a436-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203752,Generation:0,CreationTimestamp:2020-05-01 16:20:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a5e82772-8bc7-11ea-99e8-0242ac110002 0xc00232f4a7 0xc00232f4a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00232f8d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00232f8f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:34 +0000 UTC } {Ready 
False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-01 16:20:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.389: INFO: Pod "nginx-deployment-85ddf47c5d-xk8b9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xk8b9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-85ddf47c5d-xk8b9,UID:af977074-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203715,Generation:0,CreationTimestamp:2020-05-01 16:20:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a5e82772-8bc7-11ea-99e8-0242ac110002 0xc00232fdd7 0xc00232fdd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00232fee0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00232ff00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.389: INFO: Pod "nginx-deployment-85ddf47c5d-z4lhr" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-z4lhr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-85ddf47c5d-z4lhr,UID:af9bcb0e-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203733,Generation:0,CreationTimestamp:2020-05-01 16:20:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a5e82772-8bc7-11ea-99e8-0242ac110002 0xc0023760d7 0xc0023760d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002376150} {node.kubernetes.io/unreachable Exists NoExecute 0xc002376220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.389: INFO: Pod "nginx-deployment-85ddf47c5d-z7mcx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-z7mcx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-85ddf47c5d-z7mcx,UID:af9bd6ad-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203731,Generation:0,CreationTimestamp:2020-05-01 16:20:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a5e82772-8bc7-11ea-99e8-0242ac110002 0xc002376297 0xc002376298}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023763b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023763d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:20:35.389: INFO: Pod "nginx-deployment-85ddf47c5d-zqvss" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zqvss,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dpjsd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dpjsd/pods/nginx-deployment-85ddf47c5d-zqvss,UID:af960d1c-8bc7-11ea-99e8-0242ac110002,ResourceVersion:8203737,Generation:0,CreationTimestamp:2020-05-01 16:20:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a5e82772-8bc7-11ea-99e8-0242ac110002 0xc0023764f7 0xc0023764f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n64s9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n64s9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-n64s9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002376600} {node.kubernetes.io/unreachable Exists NoExecute 0xc002376630}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:20:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-01 16:20:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:20:35.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-dpjsd" for this suite. 
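For reference, the nginx Deployment exercised by the proportional-scaling test above is built programmatically by the e2e framework; the Go sketch below only reconstructs its approximate shape from the pod dumps in this log (image docker.io/library/nginx:1.14-alpine, pod label name: nginx, terminationGracePeriodSeconds 0). The replica count is an assumed placeholder, because the actual scaling values are not visible in this excerpt.

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nginxDeployment reconstructs the approximate shape of the Deployment used
// by the test; replicas is an assumed value, not taken from the log.
func nginxDeployment(replicas int32) *appsv1.Deployment {
	zero := int64(0) // matches TerminationGracePeriodSeconds:*0 in the pod dumps above
	labels := map[string]string{"name": "nginx"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					TerminationGracePeriodSeconds: &zero,
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
}

"Proportional scaling" here refers to the documented Deployment behavior of splitting additional replicas across the existing ReplicaSets in proportion to their current sizes when the Deployment is resized mid-rollout, which is consistent with the mix of available and just-created pods listed above.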
May 1 16:21:01.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:21:01.749: INFO: namespace: e2e-tests-deployment-dpjsd, resource: bindings, ignored listing per whitelist May 1 16:21:01.749: INFO: namespace e2e-tests-deployment-dpjsd deletion completed in 26.213948906s • [SLOW TEST:43.209 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:21:01.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 1 16:21:33.503: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-fwq4z PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 16:21:33.503: INFO: >>> kubeConfig: /root/.kube/config I0501 16:21:33.536451 6 log.go:172] (0xc0009254a0) (0xc001232320) Create stream I0501 16:21:33.536486 6 log.go:172] (0xc0009254a0) (0xc001232320) Stream added, broadcasting: 1 I0501 16:21:33.539149 6 log.go:172] (0xc0009254a0) Reply frame received for 1 I0501 16:21:33.539197 6 log.go:172] (0xc0009254a0) (0xc0023174a0) Create stream I0501 16:21:33.539219 6 log.go:172] (0xc0009254a0) (0xc0023174a0) Stream added, broadcasting: 3 I0501 16:21:33.540309 6 log.go:172] (0xc0009254a0) Reply frame received for 3 I0501 16:21:33.540379 6 log.go:172] (0xc0009254a0) (0xc0023175e0) Create stream I0501 16:21:33.540401 6 log.go:172] (0xc0009254a0) (0xc0023175e0) Stream added, broadcasting: 5 I0501 16:21:33.541814 6 log.go:172] (0xc0009254a0) Reply frame received for 5 I0501 16:21:33.623324 6 log.go:172] (0xc0009254a0) Data frame received for 3 I0501 16:21:33.623347 6 log.go:172] (0xc0023174a0) (3) Data frame handling I0501 16:21:33.623367 6 log.go:172] (0xc0023174a0) (3) Data frame sent I0501 16:21:33.623374 6 log.go:172] (0xc0009254a0) Data frame received for 3 I0501 16:21:33.623387 6 log.go:172] (0xc0023174a0) (3) Data frame handling I0501 16:21:33.623723 6 log.go:172] (0xc0009254a0) Data frame received for 5 I0501 16:21:33.623745 6 log.go:172] (0xc0023175e0) (5) Data frame handling I0501 16:21:33.625754 6 log.go:172] (0xc0009254a0) Data frame received for 1 I0501 16:21:33.625791 6 log.go:172] (0xc001232320) (1) Data frame handling I0501 16:21:33.625816 6 log.go:172] (0xc001232320) (1) Data frame sent I0501 16:21:33.625842 6 log.go:172] (0xc0009254a0) (0xc001232320) Stream 
removed, broadcasting: 1 I0501 16:21:33.625882 6 log.go:172] (0xc0009254a0) Go away received I0501 16:21:33.625967 6 log.go:172] (0xc0009254a0) (0xc001232320) Stream removed, broadcasting: 1 I0501 16:21:33.625991 6 log.go:172] (0xc0009254a0) (0xc0023174a0) Stream removed, broadcasting: 3 I0501 16:21:33.626012 6 log.go:172] (0xc0009254a0) (0xc0023175e0) Stream removed, broadcasting: 5 May 1 16:21:33.626: INFO: Exec stderr: "" May 1 16:21:33.626: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-fwq4z PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 16:21:33.626: INFO: >>> kubeConfig: /root/.kube/config I0501 16:21:33.772767 6 log.go:172] (0xc00237a2c0) (0xc002317860) Create stream I0501 16:21:33.772795 6 log.go:172] (0xc00237a2c0) (0xc002317860) Stream added, broadcasting: 1 I0501 16:21:33.774694 6 log.go:172] (0xc00237a2c0) Reply frame received for 1 I0501 16:21:33.774736 6 log.go:172] (0xc00237a2c0) (0xc001b44960) Create stream I0501 16:21:33.774746 6 log.go:172] (0xc00237a2c0) (0xc001b44960) Stream added, broadcasting: 3 I0501 16:21:33.775454 6 log.go:172] (0xc00237a2c0) Reply frame received for 3 I0501 16:21:33.775484 6 log.go:172] (0xc00237a2c0) (0xc001c81040) Create stream I0501 16:21:33.775492 6 log.go:172] (0xc00237a2c0) (0xc001c81040) Stream added, broadcasting: 5 I0501 16:21:33.776062 6 log.go:172] (0xc00237a2c0) Reply frame received for 5 I0501 16:21:33.848925 6 log.go:172] (0xc00237a2c0) Data frame received for 5 I0501 16:21:33.848949 6 log.go:172] (0xc001c81040) (5) Data frame handling I0501 16:21:33.848977 6 log.go:172] (0xc00237a2c0) Data frame received for 3 I0501 16:21:33.849003 6 log.go:172] (0xc001b44960) (3) Data frame handling I0501 16:21:33.849017 6 log.go:172] (0xc001b44960) (3) Data frame sent I0501 16:21:33.849032 6 log.go:172] (0xc00237a2c0) Data frame received for 3 I0501 16:21:33.849046 6 log.go:172] (0xc001b44960) (3) Data frame handling I0501 16:21:33.850417 6 log.go:172] (0xc00237a2c0) Data frame received for 1 I0501 16:21:33.850446 6 log.go:172] (0xc002317860) (1) Data frame handling I0501 16:21:33.850462 6 log.go:172] (0xc002317860) (1) Data frame sent I0501 16:21:33.850479 6 log.go:172] (0xc00237a2c0) (0xc002317860) Stream removed, broadcasting: 1 I0501 16:21:33.850529 6 log.go:172] (0xc00237a2c0) Go away received I0501 16:21:33.850573 6 log.go:172] (0xc00237a2c0) (0xc002317860) Stream removed, broadcasting: 1 I0501 16:21:33.850596 6 log.go:172] (0xc00237a2c0) (0xc001b44960) Stream removed, broadcasting: 3 I0501 16:21:33.850610 6 log.go:172] (0xc00237a2c0) (0xc001c81040) Stream removed, broadcasting: 5 May 1 16:21:33.850: INFO: Exec stderr: "" May 1 16:21:33.850: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-fwq4z PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 16:21:33.850: INFO: >>> kubeConfig: /root/.kube/config I0501 16:21:33.877570 6 log.go:172] (0xc001e9c2c0) (0xc001b44dc0) Create stream I0501 16:21:33.877592 6 log.go:172] (0xc001e9c2c0) (0xc001b44dc0) Stream added, broadcasting: 1 I0501 16:21:33.880394 6 log.go:172] (0xc001e9c2c0) Reply frame received for 1 I0501 16:21:33.880449 6 log.go:172] (0xc001e9c2c0) (0xc0025534a0) Create stream I0501 16:21:33.880466 6 log.go:172] (0xc001e9c2c0) (0xc0025534a0) Stream added, broadcasting: 3 I0501 16:21:33.881775 6 log.go:172] (0xc001e9c2c0) Reply frame received for 3 I0501 
16:21:33.881818 6 log.go:172] (0xc001e9c2c0) (0xc001b44f00) Create stream I0501 16:21:33.881830 6 log.go:172] (0xc001e9c2c0) (0xc001b44f00) Stream added, broadcasting: 5 I0501 16:21:33.882833 6 log.go:172] (0xc001e9c2c0) Reply frame received for 5 I0501 16:21:33.947946 6 log.go:172] (0xc001e9c2c0) Data frame received for 5 I0501 16:21:33.947992 6 log.go:172] (0xc001b44f00) (5) Data frame handling I0501 16:21:33.948019 6 log.go:172] (0xc001e9c2c0) Data frame received for 3 I0501 16:21:33.948028 6 log.go:172] (0xc0025534a0) (3) Data frame handling I0501 16:21:33.948039 6 log.go:172] (0xc0025534a0) (3) Data frame sent I0501 16:21:33.948048 6 log.go:172] (0xc001e9c2c0) Data frame received for 3 I0501 16:21:33.948056 6 log.go:172] (0xc0025534a0) (3) Data frame handling I0501 16:21:33.949556 6 log.go:172] (0xc001e9c2c0) Data frame received for 1 I0501 16:21:33.949591 6 log.go:172] (0xc001b44dc0) (1) Data frame handling I0501 16:21:33.949616 6 log.go:172] (0xc001b44dc0) (1) Data frame sent I0501 16:21:33.949631 6 log.go:172] (0xc001e9c2c0) (0xc001b44dc0) Stream removed, broadcasting: 1 I0501 16:21:33.949647 6 log.go:172] (0xc001e9c2c0) Go away received I0501 16:21:33.949768 6 log.go:172] (0xc001e9c2c0) (0xc001b44dc0) Stream removed, broadcasting: 1 I0501 16:21:33.949787 6 log.go:172] (0xc001e9c2c0) (0xc0025534a0) Stream removed, broadcasting: 3 I0501 16:21:33.949795 6 log.go:172] (0xc001e9c2c0) (0xc001b44f00) Stream removed, broadcasting: 5 May 1 16:21:33.949: INFO: Exec stderr: "" May 1 16:21:33.949: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-fwq4z PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 16:21:33.949: INFO: >>> kubeConfig: /root/.kube/config I0501 16:21:33.978489 6 log.go:172] (0xc00237a790) (0xc002317ae0) Create stream I0501 16:21:33.978510 6 log.go:172] (0xc00237a790) (0xc002317ae0) Stream added, broadcasting: 1 I0501 16:21:33.980250 6 log.go:172] (0xc00237a790) Reply frame received for 1 I0501 16:21:33.980293 6 log.go:172] (0xc00237a790) (0xc002317b80) Create stream I0501 16:21:33.980303 6 log.go:172] (0xc00237a790) (0xc002317b80) Stream added, broadcasting: 3 I0501 16:21:33.981491 6 log.go:172] (0xc00237a790) Reply frame received for 3 I0501 16:21:33.981516 6 log.go:172] (0xc00237a790) (0xc001b45040) Create stream I0501 16:21:33.981525 6 log.go:172] (0xc00237a790) (0xc001b45040) Stream added, broadcasting: 5 I0501 16:21:33.982260 6 log.go:172] (0xc00237a790) Reply frame received for 5 I0501 16:21:34.043011 6 log.go:172] (0xc00237a790) Data frame received for 3 I0501 16:21:34.043035 6 log.go:172] (0xc002317b80) (3) Data frame handling I0501 16:21:34.043043 6 log.go:172] (0xc002317b80) (3) Data frame sent I0501 16:21:34.043047 6 log.go:172] (0xc00237a790) Data frame received for 3 I0501 16:21:34.043051 6 log.go:172] (0xc002317b80) (3) Data frame handling I0501 16:21:34.043078 6 log.go:172] (0xc00237a790) Data frame received for 5 I0501 16:21:34.043114 6 log.go:172] (0xc001b45040) (5) Data frame handling I0501 16:21:34.044249 6 log.go:172] (0xc00237a790) Data frame received for 1 I0501 16:21:34.044264 6 log.go:172] (0xc002317ae0) (1) Data frame handling I0501 16:21:34.044272 6 log.go:172] (0xc002317ae0) (1) Data frame sent I0501 16:21:34.044489 6 log.go:172] (0xc00237a790) (0xc002317ae0) Stream removed, broadcasting: 1 I0501 16:21:34.044546 6 log.go:172] (0xc00237a790) Go away received I0501 16:21:34.044606 6 log.go:172] (0xc00237a790) (0xc002317ae0) Stream 
removed, broadcasting: 1 I0501 16:21:34.044627 6 log.go:172] (0xc00237a790) (0xc002317b80) Stream removed, broadcasting: 3 I0501 16:21:34.044639 6 log.go:172] (0xc00237a790) (0xc001b45040) Stream removed, broadcasting: 5 May 1 16:21:34.044: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 1 16:21:34.044: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-fwq4z PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 16:21:34.044: INFO: >>> kubeConfig: /root/.kube/config I0501 16:21:34.076502 6 log.go:172] (0xc00237ac60) (0xc002317e00) Create stream I0501 16:21:34.076528 6 log.go:172] (0xc00237ac60) (0xc002317e00) Stream added, broadcasting: 1 I0501 16:21:34.078756 6 log.go:172] (0xc00237ac60) Reply frame received for 1 I0501 16:21:34.078793 6 log.go:172] (0xc00237ac60) (0xc002317ea0) Create stream I0501 16:21:34.078806 6 log.go:172] (0xc00237ac60) (0xc002317ea0) Stream added, broadcasting: 3 I0501 16:21:34.079510 6 log.go:172] (0xc00237ac60) Reply frame received for 3 I0501 16:21:34.079534 6 log.go:172] (0xc00237ac60) (0xc002553540) Create stream I0501 16:21:34.079546 6 log.go:172] (0xc00237ac60) (0xc002553540) Stream added, broadcasting: 5 I0501 16:21:34.080316 6 log.go:172] (0xc00237ac60) Reply frame received for 5 I0501 16:21:34.143155 6 log.go:172] (0xc00237ac60) Data frame received for 5 I0501 16:21:34.143178 6 log.go:172] (0xc002553540) (5) Data frame handling I0501 16:21:34.143194 6 log.go:172] (0xc00237ac60) Data frame received for 3 I0501 16:21:34.143207 6 log.go:172] (0xc002317ea0) (3) Data frame handling I0501 16:21:34.143214 6 log.go:172] (0xc002317ea0) (3) Data frame sent I0501 16:21:34.143220 6 log.go:172] (0xc00237ac60) Data frame received for 3 I0501 16:21:34.143228 6 log.go:172] (0xc002317ea0) (3) Data frame handling I0501 16:21:34.144147 6 log.go:172] (0xc00237ac60) Data frame received for 1 I0501 16:21:34.144158 6 log.go:172] (0xc002317e00) (1) Data frame handling I0501 16:21:34.144165 6 log.go:172] (0xc002317e00) (1) Data frame sent I0501 16:21:34.144185 6 log.go:172] (0xc00237ac60) (0xc002317e00) Stream removed, broadcasting: 1 I0501 16:21:34.144220 6 log.go:172] (0xc00237ac60) Go away received I0501 16:21:34.144313 6 log.go:172] (0xc00237ac60) (0xc002317e00) Stream removed, broadcasting: 1 I0501 16:21:34.144359 6 log.go:172] (0xc00237ac60) (0xc002317ea0) Stream removed, broadcasting: 3 I0501 16:21:34.144388 6 log.go:172] (0xc00237ac60) (0xc002553540) Stream removed, broadcasting: 5 May 1 16:21:34.144: INFO: Exec stderr: "" May 1 16:21:34.144: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-fwq4z PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 16:21:34.144: INFO: >>> kubeConfig: /root/.kube/config I0501 16:21:34.169427 6 log.go:172] (0xc00174e2c0) (0xc002553860) Create stream I0501 16:21:34.169468 6 log.go:172] (0xc00174e2c0) (0xc002553860) Stream added, broadcasting: 1 I0501 16:21:34.175976 6 log.go:172] (0xc00174e2c0) Reply frame received for 1 I0501 16:21:34.176026 6 log.go:172] (0xc00174e2c0) (0xc001232140) Create stream I0501 16:21:34.176047 6 log.go:172] (0xc00174e2c0) (0xc001232140) Stream added, broadcasting: 3 I0501 16:21:34.177094 6 log.go:172] (0xc00174e2c0) Reply frame received for 3 I0501 16:21:34.177301 6 log.go:172] (0xc00174e2c0) (0xc0012321e0) Create 
stream I0501 16:21:34.177329 6 log.go:172] (0xc00174e2c0) (0xc0012321e0) Stream added, broadcasting: 5 I0501 16:21:34.178188 6 log.go:172] (0xc00174e2c0) Reply frame received for 5 I0501 16:21:34.232556 6 log.go:172] (0xc00174e2c0) Data frame received for 5 I0501 16:21:34.232586 6 log.go:172] (0xc0012321e0) (5) Data frame handling I0501 16:21:34.232649 6 log.go:172] (0xc00174e2c0) Data frame received for 3 I0501 16:21:34.232724 6 log.go:172] (0xc001232140) (3) Data frame handling I0501 16:21:34.232755 6 log.go:172] (0xc001232140) (3) Data frame sent I0501 16:21:34.232774 6 log.go:172] (0xc00174e2c0) Data frame received for 3 I0501 16:21:34.232783 6 log.go:172] (0xc001232140) (3) Data frame handling I0501 16:21:34.234008 6 log.go:172] (0xc00174e2c0) Data frame received for 1 I0501 16:21:34.234023 6 log.go:172] (0xc002553860) (1) Data frame handling I0501 16:21:34.234040 6 log.go:172] (0xc002553860) (1) Data frame sent I0501 16:21:34.234065 6 log.go:172] (0xc00174e2c0) (0xc002553860) Stream removed, broadcasting: 1 I0501 16:21:34.234149 6 log.go:172] (0xc00174e2c0) (0xc002553860) Stream removed, broadcasting: 1 I0501 16:21:34.234164 6 log.go:172] (0xc00174e2c0) (0xc001232140) Stream removed, broadcasting: 3 I0501 16:21:34.234180 6 log.go:172] (0xc00174e2c0) Go away received I0501 16:21:34.234320 6 log.go:172] (0xc00174e2c0) (0xc0012321e0) Stream removed, broadcasting: 5 May 1 16:21:34.234: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 1 16:21:34.234: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-fwq4z PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 16:21:34.234: INFO: >>> kubeConfig: /root/.kube/config I0501 16:21:34.383396 6 log.go:172] (0xc00045def0) (0xc002316320) Create stream I0501 16:21:34.383499 6 log.go:172] (0xc00045def0) (0xc002316320) Stream added, broadcasting: 1 I0501 16:21:34.385899 6 log.go:172] (0xc00045def0) Reply frame received for 1 I0501 16:21:34.385947 6 log.go:172] (0xc00045def0) (0xc0027ba000) Create stream I0501 16:21:34.385957 6 log.go:172] (0xc00045def0) (0xc0027ba000) Stream added, broadcasting: 3 I0501 16:21:34.387211 6 log.go:172] (0xc00045def0) Reply frame received for 3 I0501 16:21:34.387250 6 log.go:172] (0xc00045def0) (0xc001232320) Create stream I0501 16:21:34.387263 6 log.go:172] (0xc00045def0) (0xc001232320) Stream added, broadcasting: 5 I0501 16:21:34.388121 6 log.go:172] (0xc00045def0) Reply frame received for 5 I0501 16:21:34.435400 6 log.go:172] (0xc00045def0) Data frame received for 5 I0501 16:21:34.435435 6 log.go:172] (0xc00045def0) Data frame received for 3 I0501 16:21:34.435459 6 log.go:172] (0xc0027ba000) (3) Data frame handling I0501 16:21:34.435477 6 log.go:172] (0xc0027ba000) (3) Data frame sent I0501 16:21:34.435484 6 log.go:172] (0xc00045def0) Data frame received for 3 I0501 16:21:34.435489 6 log.go:172] (0xc0027ba000) (3) Data frame handling I0501 16:21:34.435523 6 log.go:172] (0xc001232320) (5) Data frame handling I0501 16:21:34.436363 6 log.go:172] (0xc00045def0) Data frame received for 1 I0501 16:21:34.436380 6 log.go:172] (0xc002316320) (1) Data frame handling I0501 16:21:34.436389 6 log.go:172] (0xc002316320) (1) Data frame sent I0501 16:21:34.436399 6 log.go:172] (0xc00045def0) (0xc002316320) Stream removed, broadcasting: 1 I0501 16:21:34.436415 6 log.go:172] (0xc00045def0) Go away received I0501 16:21:34.436534 6 log.go:172] 
(0xc00045def0) (0xc002316320) Stream removed, broadcasting: 1 I0501 16:21:34.436553 6 log.go:172] (0xc00045def0) (0xc0027ba000) Stream removed, broadcasting: 3 I0501 16:21:34.436563 6 log.go:172] (0xc00045def0) (0xc001232320) Stream removed, broadcasting: 5 May 1 16:21:34.436: INFO: Exec stderr: "" May 1 16:21:34.436: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-fwq4z PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 16:21:34.436: INFO: >>> kubeConfig: /root/.kube/config I0501 16:21:34.509949 6 log.go:172] (0xc0016984d0) (0xc0027ba3c0) Create stream I0501 16:21:34.509978 6 log.go:172] (0xc0016984d0) (0xc0027ba3c0) Stream added, broadcasting: 1 I0501 16:21:34.511732 6 log.go:172] (0xc0016984d0) Reply frame received for 1 I0501 16:21:34.511773 6 log.go:172] (0xc0016984d0) (0xc0028ac000) Create stream I0501 16:21:34.511783 6 log.go:172] (0xc0016984d0) (0xc0028ac000) Stream added, broadcasting: 3 I0501 16:21:34.512755 6 log.go:172] (0xc0016984d0) Reply frame received for 3 I0501 16:21:34.512805 6 log.go:172] (0xc0016984d0) (0xc0027ba500) Create stream I0501 16:21:34.512822 6 log.go:172] (0xc0016984d0) (0xc0027ba500) Stream added, broadcasting: 5 I0501 16:21:34.513939 6 log.go:172] (0xc0016984d0) Reply frame received for 5 I0501 16:21:34.580679 6 log.go:172] (0xc0016984d0) Data frame received for 3 I0501 16:21:34.580707 6 log.go:172] (0xc0028ac000) (3) Data frame handling I0501 16:21:34.580723 6 log.go:172] (0xc0028ac000) (3) Data frame sent I0501 16:21:34.580731 6 log.go:172] (0xc0016984d0) Data frame received for 3 I0501 16:21:34.580741 6 log.go:172] (0xc0028ac000) (3) Data frame handling I0501 16:21:34.580843 6 log.go:172] (0xc0016984d0) Data frame received for 5 I0501 16:21:34.580876 6 log.go:172] (0xc0027ba500) (5) Data frame handling I0501 16:21:34.582317 6 log.go:172] (0xc0016984d0) Data frame received for 1 I0501 16:21:34.582334 6 log.go:172] (0xc0027ba3c0) (1) Data frame handling I0501 16:21:34.582340 6 log.go:172] (0xc0027ba3c0) (1) Data frame sent I0501 16:21:34.582347 6 log.go:172] (0xc0016984d0) (0xc0027ba3c0) Stream removed, broadcasting: 1 I0501 16:21:34.582354 6 log.go:172] (0xc0016984d0) Go away received I0501 16:21:34.582543 6 log.go:172] (0xc0016984d0) (0xc0027ba3c0) Stream removed, broadcasting: 1 I0501 16:21:34.582588 6 log.go:172] (0xc0016984d0) (0xc0028ac000) Stream removed, broadcasting: 3 I0501 16:21:34.582611 6 log.go:172] (0xc0016984d0) (0xc0027ba500) Stream removed, broadcasting: 5 May 1 16:21:34.582: INFO: Exec stderr: "" May 1 16:21:34.582: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-fwq4z PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 16:21:34.582: INFO: >>> kubeConfig: /root/.kube/config I0501 16:21:34.608759 6 log.go:172] (0xc00174e580) (0xc001232820) Create stream I0501 16:21:34.608866 6 log.go:172] (0xc00174e580) (0xc001232820) Stream added, broadcasting: 1 I0501 16:21:34.610644 6 log.go:172] (0xc00174e580) Reply frame received for 1 I0501 16:21:34.610693 6 log.go:172] (0xc00174e580) (0xc0027ba5a0) Create stream I0501 16:21:34.610713 6 log.go:172] (0xc00174e580) (0xc0027ba5a0) Stream added, broadcasting: 3 I0501 16:21:34.611628 6 log.go:172] (0xc00174e580) Reply frame received for 3 I0501 16:21:34.611664 6 log.go:172] (0xc00174e580) (0xc0028ac0a0) Create stream I0501 16:21:34.611677 6 log.go:172] 
(0xc00174e580) (0xc0028ac0a0) Stream added, broadcasting: 5 I0501 16:21:34.612521 6 log.go:172] (0xc00174e580) Reply frame received for 5 I0501 16:21:34.660560 6 log.go:172] (0xc00174e580) Data frame received for 3 I0501 16:21:34.660590 6 log.go:172] (0xc0027ba5a0) (3) Data frame handling I0501 16:21:34.660598 6 log.go:172] (0xc0027ba5a0) (3) Data frame sent I0501 16:21:34.660608 6 log.go:172] (0xc00174e580) Data frame received for 3 I0501 16:21:34.660615 6 log.go:172] (0xc0027ba5a0) (3) Data frame handling I0501 16:21:34.660633 6 log.go:172] (0xc00174e580) Data frame received for 5 I0501 16:21:34.660640 6 log.go:172] (0xc0028ac0a0) (5) Data frame handling I0501 16:21:34.662538 6 log.go:172] (0xc00174e580) Data frame received for 1 I0501 16:21:34.662646 6 log.go:172] (0xc001232820) (1) Data frame handling I0501 16:21:34.662664 6 log.go:172] (0xc001232820) (1) Data frame sent I0501 16:21:34.662684 6 log.go:172] (0xc00174e580) (0xc001232820) Stream removed, broadcasting: 1 I0501 16:21:34.662705 6 log.go:172] (0xc00174e580) Go away received I0501 16:21:34.662838 6 log.go:172] (0xc00174e580) (0xc001232820) Stream removed, broadcasting: 1 I0501 16:21:34.662874 6 log.go:172] (0xc00174e580) (0xc0027ba5a0) Stream removed, broadcasting: 3 I0501 16:21:34.662884 6 log.go:172] (0xc00174e580) (0xc0028ac0a0) Stream removed, broadcasting: 5 May 1 16:21:34.662: INFO: Exec stderr: "" May 1 16:21:34.662: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-fwq4z PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 16:21:34.662: INFO: >>> kubeConfig: /root/.kube/config I0501 16:21:34.692122 6 log.go:172] (0xc000925970) (0xc002316640) Create stream I0501 16:21:34.692162 6 log.go:172] (0xc000925970) (0xc002316640) Stream added, broadcasting: 1 I0501 16:21:34.693905 6 log.go:172] (0xc000925970) Reply frame received for 1 I0501 16:21:34.693952 6 log.go:172] (0xc000925970) (0xc0023166e0) Create stream I0501 16:21:34.693965 6 log.go:172] (0xc000925970) (0xc0023166e0) Stream added, broadcasting: 3 I0501 16:21:34.694766 6 log.go:172] (0xc000925970) Reply frame received for 3 I0501 16:21:34.694805 6 log.go:172] (0xc000925970) (0xc0028ac140) Create stream I0501 16:21:34.694823 6 log.go:172] (0xc000925970) (0xc0028ac140) Stream added, broadcasting: 5 I0501 16:21:34.695678 6 log.go:172] (0xc000925970) Reply frame received for 5 I0501 16:21:34.748303 6 log.go:172] (0xc000925970) Data frame received for 5 I0501 16:21:34.748358 6 log.go:172] (0xc0028ac140) (5) Data frame handling I0501 16:21:34.748386 6 log.go:172] (0xc000925970) Data frame received for 3 I0501 16:21:34.748397 6 log.go:172] (0xc0023166e0) (3) Data frame handling I0501 16:21:34.748416 6 log.go:172] (0xc0023166e0) (3) Data frame sent I0501 16:21:34.748427 6 log.go:172] (0xc000925970) Data frame received for 3 I0501 16:21:34.748450 6 log.go:172] (0xc0023166e0) (3) Data frame handling I0501 16:21:34.749710 6 log.go:172] (0xc000925970) Data frame received for 1 I0501 16:21:34.749751 6 log.go:172] (0xc002316640) (1) Data frame handling I0501 16:21:34.749776 6 log.go:172] (0xc002316640) (1) Data frame sent I0501 16:21:34.749793 6 log.go:172] (0xc000925970) (0xc002316640) Stream removed, broadcasting: 1 I0501 16:21:34.749829 6 log.go:172] (0xc000925970) Go away received I0501 16:21:34.749896 6 log.go:172] (0xc000925970) (0xc002316640) Stream removed, broadcasting: 1 I0501 16:21:34.749921 6 log.go:172] (0xc000925970) (0xc0023166e0) Stream 
removed, broadcasting: 3 I0501 16:21:34.749930 6 log.go:172] (0xc000925970) (0xc0028ac140) Stream removed, broadcasting: 5 May 1 16:21:34.749: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:21:34.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-fwq4z" for this suite. May 1 16:22:23.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:22:23.613: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-fwq4z, resource: bindings, ignored listing per whitelist May 1 16:22:23.623: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-fwq4z deletion completed in 48.569905804s • [SLOW TEST:81.874 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:22:23.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium May 1 16:22:24.136: INFO: Waiting up to 5m0s for pod "pod-f0ab9d62-8bc7-11ea-acf7-0242ac110017" in namespace "e2e-tests-emptydir-2926t" to be "success or failure" May 1 16:22:24.278: INFO: Pod "pod-f0ab9d62-8bc7-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 141.612758ms May 1 16:22:26.889: INFO: Pod "pod-f0ab9d62-8bc7-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.752766958s May 1 16:22:28.894: INFO: Pod "pod-f0ab9d62-8bc7-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.75742874s May 1 16:22:30.897: INFO: Pod "pod-f0ab9d62-8bc7-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.760773223s STEP: Saw pod success May 1 16:22:30.897: INFO: Pod "pod-f0ab9d62-8bc7-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:22:30.899: INFO: Trying to get logs from node hunter-worker pod pod-f0ab9d62-8bc7-11ea-acf7-0242ac110017 container test-container: STEP: delete the pod May 1 16:22:30.966: INFO: Waiting for pod pod-f0ab9d62-8bc7-11ea-acf7-0242ac110017 to disappear May 1 16:22:31.060: INFO: Pod pod-f0ab9d62-8bc7-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:22:31.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-2926t" for this suite. May 1 16:22:39.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:22:39.131: INFO: namespace: e2e-tests-emptydir-2926t, resource: bindings, ignored listing per whitelist May 1 16:22:39.176: INFO: namespace e2e-tests-emptydir-2926t deletion completed in 8.113579391s • [SLOW TEST:15.553 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:22:39.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-f9e80002-8bc7-11ea-acf7-0242ac110017 STEP: Creating configMap with name cm-test-opt-upd-f9e8009c-8bc7-11ea-acf7-0242ac110017 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-f9e80002-8bc7-11ea-acf7-0242ac110017 STEP: Updating configmap cm-test-opt-upd-f9e8009c-8bc7-11ea-acf7-0242ac110017 STEP: Creating configMap with name cm-test-opt-create-f9e800eb-8bc7-11ea-acf7-0242ac110017 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:22:52.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-v5w6c" for this suite. 
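The projected-configMap spec above exercises configMaps marked optional inside a projected volume: the test deletes one referenced configMap, updates another, creates a third, and waits for the kubelet to reflect each change in the mounted files. Below is a minimal, illustrative Go sketch of that volume shape, not the manifest the test itself builds; the pod and configMap names are invented, and it assumes the k8s.io/api and k8s.io/apimachinery modules are available.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
    // Pod with a projected volume referencing a configMap marked optional:
    // the pod still starts if the configMap is absent, and the kubelet
    // updates the mounted files when the configMap changes.
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "projected-cm-optional-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "watcher",
                Image:   "busybox",
                Command: []string{"sh", "-c", "while true; do cat /etc/projected/data 2>/dev/null; sleep 5; done"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "projected-vol",
                    MountPath: "/etc/projected",
                    ReadOnly:  true,
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "projected-vol",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "cm-opt-demo"},
                                Optional:             boolPtr(true),
                            },
                        }},
                    },
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
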
May 1 16:23:16.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:23:16.381: INFO: namespace: e2e-tests-projected-v5w6c, resource: bindings, ignored listing per whitelist May 1 16:23:16.429: INFO: namespace e2e-tests-projected-v5w6c deletion completed in 24.153113719s • [SLOW TEST:37.252 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:23:16.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command May 1 16:23:17.382: INFO: Waiting up to 5m0s for pod "client-containers-10556fb6-8bc8-11ea-acf7-0242ac110017" in namespace "e2e-tests-containers-g4628" to be "success or failure" May 1 16:23:17.443: INFO: Pod "client-containers-10556fb6-8bc8-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 60.657673ms May 1 16:23:19.447: INFO: Pod "client-containers-10556fb6-8bc8-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064962815s May 1 16:23:21.500: INFO: Pod "client-containers-10556fb6-8bc8-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.11768444s STEP: Saw pod success May 1 16:23:21.500: INFO: Pod "client-containers-10556fb6-8bc8-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:23:21.503: INFO: Trying to get logs from node hunter-worker2 pod client-containers-10556fb6-8bc8-11ea-acf7-0242ac110017 container test-container: STEP: delete the pod May 1 16:23:21.548: INFO: Waiting for pod client-containers-10556fb6-8bc8-11ea-acf7-0242ac110017 to disappear May 1 16:23:21.593: INFO: Pod client-containers-10556fb6-8bc8-11ea-acf7-0242ac110017 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:23:21.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-g4628" for this suite. 
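The Docker Containers spec above checks that a pod's command field replaces the image's default ENTRYPOINT (args would likewise replace CMD). A hedged sketch of a pod doing the same thing follows; the image and the echoed text are placeholders rather than what the e2e test uses.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // command overrides the image ENTRYPOINT; args would override CMD.
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "override-entrypoint-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "test-container",
                Image:   "busybox",
                Command: []string{"echo", "entrypoint overridden"},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
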
May 1 16:23:28.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:23:28.048: INFO: namespace: e2e-tests-containers-g4628, resource: bindings, ignored listing per whitelist May 1 16:23:28.110: INFO: namespace e2e-tests-containers-g4628 deletion completed in 6.511713876s • [SLOW TEST:11.681 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:23:28.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-16e6e9fa-8bc8-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume configMaps May 1 16:23:28.326: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-16ef16da-8bc8-11ea-acf7-0242ac110017" in namespace "e2e-tests-projected-8h8qb" to be "success or failure" May 1 16:23:28.411: INFO: Pod "pod-projected-configmaps-16ef16da-8bc8-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 84.407199ms May 1 16:23:30.415: INFO: Pod "pod-projected-configmaps-16ef16da-8bc8-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088900788s May 1 16:23:32.519: INFO: Pod "pod-projected-configmaps-16ef16da-8bc8-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.192750277s May 1 16:23:34.525: INFO: Pod "pod-projected-configmaps-16ef16da-8bc8-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.198855598s STEP: Saw pod success May 1 16:23:34.525: INFO: Pod "pod-projected-configmaps-16ef16da-8bc8-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:23:34.528: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-16ef16da-8bc8-11ea-acf7-0242ac110017 container projected-configmap-volume-test: STEP: delete the pod May 1 16:23:34.700: INFO: Waiting for pod pod-projected-configmaps-16ef16da-8bc8-11ea-acf7-0242ac110017 to disappear May 1 16:23:34.720: INFO: Pod pod-projected-configmaps-16ef16da-8bc8-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:23:34.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8h8qb" for this suite. 
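The projected-configMap spec above mounts a configMap through explicit key-to-path mappings with a per-item file mode. A minimal sketch of that pattern follows; the configMap name cm-map-demo, the key data-1, and the 0400 mode are assumptions for illustration only.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
    // Project key "data-1" of configMap "cm-map-demo" to a custom path,
    // with an explicit per-item file mode.
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "projected-cm-mapping-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:         "reader",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "ls -l /etc/cm && cat /etc/cm/path/to/data-1"},
                VolumeMounts: []corev1.VolumeMount{{Name: "cm-vol", MountPath: "/etc/cm", ReadOnly: true}},
            }},
            Volumes: []corev1.Volume{{
                Name: "cm-vol",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "cm-map-demo"},
                                Items: []corev1.KeyToPath{{
                                    Key:  "data-1",
                                    Path: "path/to/data-1",
                                    Mode: int32Ptr(0400),
                                }},
                            },
                        }},
                    },
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
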
May 1 16:23:40.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:23:41.045: INFO: namespace: e2e-tests-projected-8h8qb, resource: bindings, ignored listing per whitelist May 1 16:23:41.053: INFO: namespace e2e-tests-projected-8h8qb deletion completed in 6.329199997s • [SLOW TEST:12.943 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:23:41.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:23:41.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-crwv6" for this suite. 
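The Kubelet spec above schedules a busybox container whose command always exits non-zero and then only asserts that such a pod can still be deleted. A rough sketch of a pod in that state follows; the names and the /bin/false command are illustrative, and removing it is an ordinary pod deletion (kubectl delete pod, or the equivalent client call).

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // A container whose command always fails; with RestartPolicy Always the
    // pod will crash-loop, but it can still be deleted like any other pod.
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "always-fails-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyAlways,
            Containers: []corev1.Container{{
                Name:    "bad-command",
                Image:   "busybox",
                Command: []string{"/bin/false"},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
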
May 1 16:24:03.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:24:03.281: INFO: namespace: e2e-tests-kubelet-test-crwv6, resource: bindings, ignored listing per whitelist May 1 16:24:03.340: INFO: namespace e2e-tests-kubelet-test-crwv6 deletion completed in 22.084136082s • [SLOW TEST:22.286 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:24:03.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs May 1 16:24:03.810: INFO: Waiting up to 5m0s for pod "pod-2c1b962f-8bc8-11ea-acf7-0242ac110017" in namespace "e2e-tests-emptydir-n8x7n" to be "success or failure" May 1 16:24:03.853: INFO: Pod "pod-2c1b962f-8bc8-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 43.216458ms May 1 16:24:06.067: INFO: Pod "pod-2c1b962f-8bc8-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.257354325s May 1 16:24:08.070: INFO: Pod "pod-2c1b962f-8bc8-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.260604429s STEP: Saw pod success May 1 16:24:08.070: INFO: Pod "pod-2c1b962f-8bc8-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:24:08.073: INFO: Trying to get logs from node hunter-worker2 pod pod-2c1b962f-8bc8-11ea-acf7-0242ac110017 container test-container: STEP: delete the pod May 1 16:24:08.098: INFO: Waiting for pod pod-2c1b962f-8bc8-11ea-acf7-0242ac110017 to disappear May 1 16:24:08.178: INFO: Pod pod-2c1b962f-8bc8-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:24:08.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-n8x7n" for this suite. 
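The EmptyDir (root,0777,tmpfs) spec above writes a file with a 0777 mode into an emptyDir backed by memory. A sketch of that setup follows; the e2e test uses its own mount-test image, so the busybox command here is only a stand-in that creates a file and reports its mode.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // emptyDir with medium "Memory" is backed by tmpfs on the node.
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "writer",
                Image: "busybox",
                Command: []string{"sh", "-c",
                    "touch /cache/f && chmod 0777 /cache/f && stat -c '%a' /cache/f && mount | grep /cache"},
                VolumeMounts: []corev1.VolumeMount{{Name: "cache", MountPath: "/cache"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "cache",
                VolumeSource: corev1.VolumeSource{
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
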
May 1 16:24:16.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:24:16.240: INFO: namespace: e2e-tests-emptydir-n8x7n, resource: bindings, ignored listing per whitelist May 1 16:24:16.264: INFO: namespace e2e-tests-emptydir-n8x7n deletion completed in 8.082309617s • [SLOW TEST:12.925 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:24:16.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs May 1 16:24:16.778: INFO: Waiting up to 5m0s for pod "pod-33c2ca05-8bc8-11ea-acf7-0242ac110017" in namespace "e2e-tests-emptydir-s6l87" to be "success or failure" May 1 16:24:16.848: INFO: Pod "pod-33c2ca05-8bc8-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 69.142902ms May 1 16:24:19.100: INFO: Pod "pod-33c2ca05-8bc8-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.321374598s May 1 16:24:21.330: INFO: Pod "pod-33c2ca05-8bc8-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.551864728s May 1 16:24:23.335: INFO: Pod "pod-33c2ca05-8bc8-11ea-acf7-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 6.556295636s May 1 16:24:25.538: INFO: Pod "pod-33c2ca05-8bc8-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.759367808s STEP: Saw pod success May 1 16:24:25.538: INFO: Pod "pod-33c2ca05-8bc8-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:24:25.541: INFO: Trying to get logs from node hunter-worker pod pod-33c2ca05-8bc8-11ea-acf7-0242ac110017 container test-container: STEP: delete the pod May 1 16:24:25.691: INFO: Waiting for pod pod-33c2ca05-8bc8-11ea-acf7-0242ac110017 to disappear May 1 16:24:25.720: INFO: Pod pod-33c2ca05-8bc8-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:24:25.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-s6l87" for this suite. 
May 1 16:24:31.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:24:32.139: INFO: namespace: e2e-tests-emptydir-s6l87, resource: bindings, ignored listing per whitelist May 1 16:24:32.176: INFO: namespace e2e-tests-emptydir-s6l87 deletion completed in 6.451162855s • [SLOW TEST:15.912 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:24:32.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-3d8772ee-8bc8-11ea-acf7-0242ac110017 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:24:45.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-6bkc9" for this suite. 
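The ConfigMap spec above verifies that both a text key and a binary key of a configMap show up correctly in a mounted volume. A small sketch of a configMap carrying both kinds of data follows (the name and payload are made up); when mounted as a volume each key becomes a file, and the binary key keeps its raw bytes.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Data holds UTF-8 text; BinaryData holds arbitrary bytes and is
    // serialized as base64 in JSON/YAML.
    cm := corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-binary-demo"},
        Data:       map[string]string{"text-data": "hello from a text key"},
        BinaryData: map[string][]byte{"binary-data": {0x00, 0xFF, 0xDE, 0xAD, 0xBE, 0xEF}},
    }
    out, _ := json.MarshalIndent(cm, "", "  ")
    fmt.Println(string(out))
}
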
May 1 16:25:09.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:25:09.541: INFO: namespace: e2e-tests-configmap-6bkc9, resource: bindings, ignored listing per whitelist May 1 16:25:09.589: INFO: namespace e2e-tests-configmap-6bkc9 deletion completed in 24.105276469s • [SLOW TEST:37.413 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:25:09.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium May 1 16:25:09.754: INFO: Waiting up to 5m0s for pod "pod-5367f8da-8bc8-11ea-acf7-0242ac110017" in namespace "e2e-tests-emptydir-qcl6v" to be "success or failure" May 1 16:25:09.775: INFO: Pod "pod-5367f8da-8bc8-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 21.761555ms May 1 16:25:11.779: INFO: Pod "pod-5367f8da-8bc8-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025113471s May 1 16:25:13.782: INFO: Pod "pod-5367f8da-8bc8-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028848694s May 1 16:25:16.106: INFO: Pod "pod-5367f8da-8bc8-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.352146796s STEP: Saw pod success May 1 16:25:16.106: INFO: Pod "pod-5367f8da-8bc8-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:25:16.364: INFO: Trying to get logs from node hunter-worker pod pod-5367f8da-8bc8-11ea-acf7-0242ac110017 container test-container: STEP: delete the pod May 1 16:25:17.736: INFO: Waiting for pod pod-5367f8da-8bc8-11ea-acf7-0242ac110017 to disappear May 1 16:25:18.064: INFO: Pod pod-5367f8da-8bc8-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:25:18.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-qcl6v" for this suite. 
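The (non-root,0777,default) EmptyDir variant above runs the container as a non-root user against an emptyDir on the node's default storage medium. The sketch below shows a non-root security context plus a default-medium emptyDir; the UID 1000 and the command are illustrative choices, not the test's.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }
func boolPtr(b bool) *bool    { return &b }

func main() {
    // Leaving EmptyDirVolumeSource zero-valued selects the node's default
    // medium (disk); the pod-level security context drops root.
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-nonroot-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            SecurityContext: &corev1.PodSecurityContext{
                RunAsUser:    int64Ptr(1000),
                RunAsNonRoot: boolPtr(true),
                FSGroup:      int64Ptr(1000),
            },
            Containers: []corev1.Container{{
                Name:  "writer",
                Image: "busybox",
                Command: []string{"sh", "-c",
                    "id -u && touch /data/f && chmod 0777 /data/f && stat -c '%a' /data/f"},
                VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/data"}},
            }},
            Volumes: []corev1.Volume{{
                Name:         "data",
                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
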
May 1 16:25:26.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:25:27.057: INFO: namespace: e2e-tests-emptydir-qcl6v, resource: bindings, ignored listing per whitelist May 1 16:25:27.076: INFO: namespace e2e-tests-emptydir-qcl6v deletion completed in 9.009167644s • [SLOW TEST:17.487 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:25:27.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-5e6994f0-8bc8-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume secrets May 1 16:25:28.515: INFO: Waiting up to 5m0s for pod "pod-secrets-5e99e0bd-8bc8-11ea-acf7-0242ac110017" in namespace "e2e-tests-secrets-jhlpg" to be "success or failure" May 1 16:25:28.886: INFO: Pod "pod-secrets-5e99e0bd-8bc8-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 370.873274ms May 1 16:25:30.890: INFO: Pod "pod-secrets-5e99e0bd-8bc8-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.374948438s May 1 16:25:32.969: INFO: Pod "pod-secrets-5e99e0bd-8bc8-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.454496663s May 1 16:25:34.973: INFO: Pod "pod-secrets-5e99e0bd-8bc8-11ea-acf7-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 6.458264179s May 1 16:25:36.976: INFO: Pod "pod-secrets-5e99e0bd-8bc8-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.461453109s STEP: Saw pod success May 1 16:25:36.976: INFO: Pod "pod-secrets-5e99e0bd-8bc8-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:25:36.979: INFO: Trying to get logs from node hunter-worker pod pod-secrets-5e99e0bd-8bc8-11ea-acf7-0242ac110017 container secret-volume-test: STEP: delete the pod May 1 16:25:37.132: INFO: Waiting for pod pod-secrets-5e99e0bd-8bc8-11ea-acf7-0242ac110017 to disappear May 1 16:25:37.154: INFO: Pod pod-secrets-5e99e0bd-8bc8-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:25:37.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-jhlpg" for this suite. 
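The Secrets spec above consumes a secret through a volume with key-to-path mappings. Below is an illustrative sketch; the secret name secret-map-demo, its key, the remapped path, and the 0400 default mode are assumptions.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
    // Mount only key "data-1" of the secret, under a remapped file name.
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "secret-mapping-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:         "reader",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "cat /etc/secret-volume/new-path-data-1"},
                VolumeMounts: []corev1.VolumeMount{{Name: "secret-vol", MountPath: "/etc/secret-volume", ReadOnly: true}},
            }},
            Volumes: []corev1.Volume{{
                Name: "secret-vol",
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{
                        SecretName:  "secret-map-demo",
                        Items:       []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
                        DefaultMode: int32Ptr(0400),
                    },
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
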
May 1 16:25:43.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:25:43.280: INFO: namespace: e2e-tests-secrets-jhlpg, resource: bindings, ignored listing per whitelist May 1 16:25:43.335: INFO: namespace e2e-tests-secrets-jhlpg deletion completed in 6.126284098s • [SLOW TEST:16.258 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:25:43.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 16:25:43.425: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:25:48.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-hbrxt" for this suite. 
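The Pods spec above drives the pod's exec subresource over a websocket directly against the API server. The sketch below reaches the same subresource with the stock client-go SPDY executor instead, which is the more common client path; the kubeconfig path, namespace, pod name, container name, and command are placeholders, and it assumes a k8s.io/client-go version compatible with this cluster.

package main

import (
    "fmt"
    "os"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/kubernetes/scheme"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/tools/remotecommand"
)

func main() {
    // Placeholder kubeconfig path; substitute your own cluster details.
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }

    // POST to the pod's exec subresource, the same endpoint the e2e test
    // exercises over a websocket.
    req := clientset.CoreV1().RESTClient().Post().
        Resource("pods").
        Namespace("default").
        Name("target-pod").
        SubResource("exec").
        VersionedParams(&corev1.PodExecOptions{
            Container: "main",
            Command:   []string{"echo", "remote command execution"},
            Stdout:    true,
            Stderr:    true,
        }, scheme.ParameterCodec)

    exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
    if err != nil {
        panic(err)
    }
    if err := exec.Stream(remotecommand.StreamOptions{Stdout: os.Stdout, Stderr: os.Stderr}); err != nil {
        fmt.Fprintln(os.Stderr, "exec failed:", err)
    }
}
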
May 1 16:26:32.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:26:32.380: INFO: namespace: e2e-tests-pods-hbrxt, resource: bindings, ignored listing per whitelist May 1 16:26:32.398: INFO: namespace e2e-tests-pods-hbrxt deletion completed in 44.291962326s • [SLOW TEST:49.064 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:26:32.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token May 1 16:26:34.132: INFO: created pod pod-service-account-defaultsa May 1 16:26:34.132: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 1 16:26:34.335: INFO: created pod pod-service-account-mountsa May 1 16:26:34.335: INFO: pod pod-service-account-mountsa service account token volume mount: true May 1 16:26:34.395: INFO: created pod pod-service-account-nomountsa May 1 16:26:34.395: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 1 16:26:34.579: INFO: created pod pod-service-account-defaultsa-mountspec May 1 16:26:34.579: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 1 16:26:34.624: INFO: created pod pod-service-account-mountsa-mountspec May 1 16:26:34.624: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 1 16:26:35.084: INFO: created pod pod-service-account-nomountsa-mountspec May 1 16:26:35.084: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 1 16:26:35.522: INFO: created pod pod-service-account-defaultsa-nomountspec May 1 16:26:35.522: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 1 16:26:36.097: INFO: created pod pod-service-account-mountsa-nomountspec May 1 16:26:36.097: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 1 16:26:36.105: INFO: created pod pod-service-account-nomountsa-nomountspec May 1 16:26:36.105: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:26:36.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-zp8mb" for this suite. 
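The ServiceAccounts spec above creates pods for every combination of a service account's automountServiceAccountToken setting and the pod-level override, then records whether a token volume was mounted. A minimal sketch of the opt-out side follows; the service account and pod names are invented, and the pod-level field takes precedence when both are set.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
    // Service account that opts out of token automount by default.
    sa := corev1.ServiceAccount{
        ObjectMeta:                   metav1.ObjectMeta{Name: "nomount-sa"},
        AutomountServiceAccountToken: boolPtr(false),
    }
    // Pod that also opts out explicitly, so no token volume is mounted.
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "no-token-automount-demo"},
        Spec: corev1.PodSpec{
            ServiceAccountName:           "nomount-sa",
            AutomountServiceAccountToken: boolPtr(false),
            RestartPolicy:                corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "main",
                Image:   "busybox",
                Command: []string{"sh", "-c", "ls /var/run/secrets/kubernetes.io/serviceaccount || echo no token mounted"},
            }},
        },
    }
    for _, obj := range []interface{}{sa, pod} {
        out, _ := json.MarshalIndent(obj, "", "  ")
        fmt.Println(string(out))
    }
}
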
May 1 16:27:09.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:27:09.336: INFO: namespace: e2e-tests-svcaccounts-zp8mb, resource: bindings, ignored listing per whitelist May 1 16:27:09.386: INFO: namespace e2e-tests-svcaccounts-zp8mb deletion completed in 32.752332352s • [SLOW TEST:36.987 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:27:09.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 1 16:27:10.228: INFO: Waiting up to 5m0s for pod "downward-api-9b2fca88-8bc8-11ea-acf7-0242ac110017" in namespace "e2e-tests-downward-api-hgt27" to be "success or failure" May 1 16:27:10.426: INFO: Pod "downward-api-9b2fca88-8bc8-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 198.033272ms May 1 16:27:12.734: INFO: Pod "downward-api-9b2fca88-8bc8-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.505248257s May 1 16:27:14.738: INFO: Pod "downward-api-9b2fca88-8bc8-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.509787809s May 1 16:27:16.907: INFO: Pod "downward-api-9b2fca88-8bc8-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.678252837s STEP: Saw pod success May 1 16:27:16.907: INFO: Pod "downward-api-9b2fca88-8bc8-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:27:16.911: INFO: Trying to get logs from node hunter-worker pod downward-api-9b2fca88-8bc8-11ea-acf7-0242ac110017 container dapi-container: STEP: delete the pod May 1 16:27:16.984: INFO: Waiting for pod downward-api-9b2fca88-8bc8-11ea-acf7-0242ac110017 to disappear May 1 16:27:17.001: INFO: Pod downward-api-9b2fca88-8bc8-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:27:17.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-hgt27" for this suite. 
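The Downward API spec above exposes the node's IP to the container through an environment variable populated from status.hostIP. A short sketch follows; the variable name and the echo command are illustrative.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // HOST_IP is filled in by the kubelet from the pod's status.hostIP field.
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downward-api-hostip-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "echo HOST_IP=$HOST_IP"},
                Env: []corev1.EnvVar{{
                    Name: "HOST_IP",
                    ValueFrom: &corev1.EnvVarSource{
                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
                    },
                }},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
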
May 1 16:27:23.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:27:23.578: INFO: namespace: e2e-tests-downward-api-hgt27, resource: bindings, ignored listing per whitelist May 1 16:27:23.627: INFO: namespace e2e-tests-downward-api-hgt27 deletion completed in 6.623524264s • [SLOW TEST:14.240 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:27:23.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 1 16:27:24.169: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 1 16:27:24.199: INFO: Waiting for terminating namespaces to be deleted... May 1 16:27:24.202: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 1 16:27:24.207: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 1 16:27:24.207: INFO: Container kube-proxy ready: true, restart count 0 May 1 16:27:24.207: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 1 16:27:24.207: INFO: Container kindnet-cni ready: true, restart count 0 May 1 16:27:24.207: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 1 16:27:24.207: INFO: Container coredns ready: true, restart count 0 May 1 16:27:24.207: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 1 16:27:24.212: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 1 16:27:24.212: INFO: Container kindnet-cni ready: true, restart count 0 May 1 16:27:24.212: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 1 16:27:24.212: INFO: Container coredns ready: true, restart count 0 May 1 16:27:24.212: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 1 16:27:24.212: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
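The SchedulerPredicates steps in progress here apply a random label to a node and then relaunch the pod with a matching nodeSelector, so it can only land on the labeled node (the verification continues in the lines that follow). A minimal sketch of the pod side is below; the label key example.com/e2e-demo and value 42 are stand-ins for the random label the test applies.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // The scheduler only places this pod on a node carrying the label
    // example.com/e2e-demo=42 (e.g. applied with `kubectl label node <node> example.com/e2e-demo=42`).
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "nodeselector-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            NodeSelector:  map[string]string{"example.com/e2e-demo": "42"},
            Containers: []corev1.Container{{
                Name:    "pause",
                Image:   "busybox",
                Command: []string{"sleep", "300"},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
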
STEP: verifying the node has the label kubernetes.io/e2e-a5fe252b-8bc8-11ea-acf7-0242ac110017 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-a5fe252b-8bc8-11ea-acf7-0242ac110017 off the node hunter-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-a5fe252b-8bc8-11ea-acf7-0242ac110017 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:27:32.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-sqbqg" for this suite. May 1 16:27:57.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:27:57.763: INFO: namespace: e2e-tests-sched-pred-sqbqg, resource: bindings, ignored listing per whitelist May 1 16:27:57.787: INFO: namespace e2e-tests-sched-pred-sqbqg deletion completed in 25.136403817s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:34.160 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:27:57.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 1 16:27:58.579: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b7d8b1c0-8bc8-11ea-acf7-0242ac110017" in namespace "e2e-tests-downward-api-4xv7q" to be "success or failure" May 1 16:27:58.707: INFO: Pod "downwardapi-volume-b7d8b1c0-8bc8-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 127.915081ms May 1 16:28:00.712: INFO: Pod "downwardapi-volume-b7d8b1c0-8bc8-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132045205s May 1 16:28:02.716: INFO: Pod "downwardapi-volume-b7d8b1c0-8bc8-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136207003s May 1 16:28:04.719: INFO: Pod "downwardapi-volume-b7d8b1c0-8bc8-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.139556536s STEP: Saw pod success May 1 16:28:04.719: INFO: Pod "downwardapi-volume-b7d8b1c0-8bc8-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:28:04.722: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-b7d8b1c0-8bc8-11ea-acf7-0242ac110017 container client-container: STEP: delete the pod May 1 16:28:04.984: INFO: Waiting for pod downwardapi-volume-b7d8b1c0-8bc8-11ea-acf7-0242ac110017 to disappear May 1 16:28:05.074: INFO: Pod downwardapi-volume-b7d8b1c0-8bc8-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:28:05.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-4xv7q" for this suite. May 1 16:28:11.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:28:11.201: INFO: namespace: e2e-tests-downward-api-4xv7q, resource: bindings, ignored listing per whitelist May 1 16:28:11.212: INFO: namespace e2e-tests-downward-api-4xv7q deletion completed in 6.133798087s • [SLOW TEST:13.425 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:28:11.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-c56q7 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-c56q7;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-c56q7 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-c56q7;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-c56q7.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-c56q7.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-c56q7.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-c56q7.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-c56q7.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-c56q7.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-c56q7.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-c56q7.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-c56q7.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 19.53.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.53.19_udp@PTR;check="$$(dig +tcp +noall +answer +search 19.53.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.53.19_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-c56q7 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-c56q7;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-c56q7 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-c56q7;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-c56q7.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-c56q7.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-c56q7.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-c56q7.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-c56q7.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-c56q7.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-c56q7.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-c56q7.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-c56q7.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 19.53.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.53.19_udp@PTR;check="$$(dig +tcp +noall +answer +search 19.53.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.53.19_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 1 16:28:25.681: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:25.759: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:25.777: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:25.780: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:25.782: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-c56q7 from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:25.785: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-c56q7 from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:25.794: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-c56q7.svc from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:25.798: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-c56q7.svc from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:25.802: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:25.803: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:25.814: INFO: Lookups using 
e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-c56q7 jessie_tcp@dns-test-service.e2e-tests-dns-c56q7 jessie_udp@dns-test-service.e2e-tests-dns-c56q7.svc jessie_tcp@dns-test-service.e2e-tests-dns-c56q7.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc] May 1 16:28:30.818: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:30.842: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:31.090: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:31.094: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:31.096: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-c56q7 from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:31.099: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-c56q7 from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:31.102: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-c56q7.svc from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:31.104: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-c56q7.svc from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:31.106: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:31.108: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:31.122: INFO: Lookups using e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service 
jessie_udp@dns-test-service.e2e-tests-dns-c56q7 jessie_tcp@dns-test-service.e2e-tests-dns-c56q7 jessie_udp@dns-test-service.e2e-tests-dns-c56q7.svc jessie_tcp@dns-test-service.e2e-tests-dns-c56q7.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc] May 1 16:28:35.930: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:35.951: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:36.253: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:36.255: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:36.259: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-c56q7 from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:36.262: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-c56q7 from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:36.517: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-c56q7.svc from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:36.521: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-c56q7.svc from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:36.524: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:36.527: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:36.548: INFO: Lookups using e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-c56q7 jessie_tcp@dns-test-service.e2e-tests-dns-c56q7 jessie_udp@dns-test-service.e2e-tests-dns-c56q7.svc jessie_tcp@dns-test-service.e2e-tests-dns-c56q7.svc 
jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc] May 1 16:28:40.835: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:40.855: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:41.225: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:41.227: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:41.229: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-c56q7 from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:41.231: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-c56q7 from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:41.233: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-c56q7.svc from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:41.235: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-c56q7.svc from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:41.237: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:41.239: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:41.252: INFO: Lookups using e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-c56q7 jessie_tcp@dns-test-service.e2e-tests-dns-c56q7 jessie_udp@dns-test-service.e2e-tests-dns-c56q7.svc jessie_tcp@dns-test-service.e2e-tests-dns-c56q7.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc] May 1 16:28:45.861: INFO: Unable to read wheezy_udp@dns-test-service from pod 
e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:45.884: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:45.946: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:45.949: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:45.951: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-c56q7 from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:45.954: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-c56q7 from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:45.956: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-c56q7.svc from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:45.959: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-c56q7.svc from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:45.962: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:45.964: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:45.987: INFO: Lookups using e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-c56q7 jessie_tcp@dns-test-service.e2e-tests-dns-c56q7 jessie_udp@dns-test-service.e2e-tests-dns-c56q7.svc jessie_tcp@dns-test-service.e2e-tests-dns-c56q7.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc] May 1 16:28:50.990: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:51.275: INFO: Unable to read 
wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:51.291: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:51.293: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:51.295: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-c56q7 from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:51.298: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-c56q7 from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:51.301: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-c56q7.svc from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:51.304: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-c56q7.svc from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:51.306: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:51.309: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc from pod e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017: the server could not find the requested resource (get pods dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017) May 1 16:28:51.586: INFO: Lookups using e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-c56q7 jessie_tcp@dns-test-service.e2e-tests-dns-c56q7 jessie_udp@dns-test-service.e2e-tests-dns-c56q7.svc jessie_tcp@dns-test-service.e2e-tests-dns-c56q7.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-c56q7.svc] May 1 16:28:57.022: INFO: DNS probes using e2e-tests-dns-c56q7/dns-test-bfafd8ae-8bc8-11ea-acf7-0242ac110017 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:28:58.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-c56q7" for this suite. 
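The long run of "Unable to read ... the server could not find the requested resource" messages above is the e2e DNS probe loop: a probe pod (wheezy and jessie images) resolves the service name over UDP and TCP at several levels of qualification, writes each result to a file, and the framework keeps re-reading those results through the API server until every lookup has an answer, which is when the single "DNS probes ... succeeded" line appears. A minimal standalone sketch of the lookups being exercised (plain Go standard library, not the framework code; the cluster.local suffix is an assumption, all other names come from the log):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	r := net.DefaultResolver

	// A-record lookups at increasing qualification, mirroring
	// wheezy_udp@dns-test-service etc. in the log above.
	for _, name := range []string{
		"dns-test-service",
		"dns-test-service.e2e-tests-dns-c56q7",
		"dns-test-service.e2e-tests-dns-c56q7.svc",
	} {
		addrs, err := r.LookupHost(ctx, name)
		fmt.Printf("%s -> %v (err=%v)\n", name, addrs, err)
	}

	// SRV lookup corresponding to _http._tcp.dns-test-service...svc above.
	// The cluster.local suffix is assumed; the real suffix comes from resolv.conf.
	cname, srvs, err := r.LookupSRV(ctx, "http", "tcp",
		"dns-test-service.e2e-tests-dns-c56q7.svc.cluster.local")
	fmt.Printf("SRV %s -> %d records (err=%v)\n", cname, len(srvs), err)
}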
May 1 16:29:06.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:29:06.973: INFO: namespace: e2e-tests-dns-c56q7, resource: bindings, ignored listing per whitelist May 1 16:29:07.015: INFO: namespace e2e-tests-dns-c56q7 deletion completed in 8.250063685s • [SLOW TEST:55.803 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:29:07.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-gpxzr STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-gpxzr to expose endpoints map[] May 1 16:29:07.361: INFO: Get endpoints failed (84.383084ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 1 16:29:08.365: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-gpxzr exposes endpoints map[] (1.088387072s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-gpxzr STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-gpxzr to expose endpoints map[pod1:[80]] May 1 16:29:14.046: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (5.675040289s elapsed, will retry) May 1 16:29:17.470: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-gpxzr exposes endpoints map[pod1:[80]] (9.098893944s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-gpxzr STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-gpxzr to expose endpoints map[pod1:[80] pod2:[80]] May 1 16:29:23.278: INFO: Unexpected endpoints: found map[e1a5ece5-8bc8-11ea-99e8-0242ac110002:[80]], expected map[pod1:[80] pod2:[80]] (5.80624255s elapsed, will retry) May 1 16:29:24.287: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-gpxzr exposes endpoints map[pod1:[80] pod2:[80]] (6.814743573s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-gpxzr STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-gpxzr to expose endpoints map[pod2:[80]] May 1 16:29:25.579: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-gpxzr exposes endpoints map[pod2:[80]] (1.288492441s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-gpxzr STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-gpxzr to expose endpoints map[] May 1 16:29:26.650: 
INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-gpxzr exposes endpoints map[] (1.066837863s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:29:26.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-gpxzr" for this suite. May 1 16:29:35.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:29:35.375: INFO: namespace: e2e-tests-services-gpxzr, resource: bindings, ignored listing per whitelist May 1 16:29:35.411: INFO: namespace e2e-tests-services-gpxzr deletion completed in 8.25210075s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:28.395 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:29:35.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:29:43.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-fw25q" for this suite. 
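The endpoint checks in the Services case above ("waiting up to 3m0s for service endpoint-test2 ... to expose endpoints map[pod1:[80]]") poll the service's Endpoints object until the set of ready pod addresses matches what was just created or deleted. A rough client-go sketch of that poll (modern client-go call signatures are shown for brevity; the v1.13-era client used in this run takes no context argument, and the framework's own helper adds more diagnostics):

package main

import (
	"context"
	"fmt"
	"reflect"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ns, svc := "e2e-tests-services-gpxzr", "endpoint-test2"
	expected := map[string][]int32{"pod1": {80}} // e.g. the map[pod1:[80]] step in the log

	deadline := time.Now().Add(3 * time.Minute)
	for time.Now().Before(deadline) {
		got := map[string][]int32{}
		ep, err := cs.CoreV1().Endpoints(ns).Get(context.TODO(), svc, metav1.GetOptions{})
		if err == nil {
			for _, ss := range ep.Subsets {
				for _, addr := range ss.Addresses {
					for _, p := range ss.Ports {
						if addr.TargetRef != nil {
							got[addr.TargetRef.Name] = append(got[addr.TargetRef.Name], p.Port)
						}
					}
				}
			}
		}
		if reflect.DeepEqual(got, expected) {
			fmt.Println("service exposes expected endpoints:", got)
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for expected endpoints")
}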
May 1 16:29:53.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:29:54.013: INFO: namespace: e2e-tests-kubelet-test-fw25q, resource: bindings, ignored listing per whitelist May 1 16:29:54.052: INFO: namespace e2e-tests-kubelet-test-fw25q deletion completed in 10.204876503s • [SLOW TEST:18.641 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:29:54.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-h2bpr STEP: creating a selector STEP: Creating the service pods in kubernetes May 1 16:29:55.304: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 1 16:30:28.171: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.117:8080/dial?request=hostName&protocol=udp&host=10.244.2.116&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-h2bpr PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 16:30:28.171: INFO: >>> kubeConfig: /root/.kube/config I0501 16:30:28.199338 6 log.go:172] (0xc00045def0) (0xc001c806e0) Create stream I0501 16:30:28.199400 6 log.go:172] (0xc00045def0) (0xc001c806e0) Stream added, broadcasting: 1 I0501 16:30:28.201025 6 log.go:172] (0xc00045def0) Reply frame received for 1 I0501 16:30:28.201053 6 log.go:172] (0xc00045def0) (0xc001c80780) Create stream I0501 16:30:28.201061 6 log.go:172] (0xc00045def0) (0xc001c80780) Stream added, broadcasting: 3 I0501 16:30:28.201904 6 log.go:172] (0xc00045def0) Reply frame received for 3 I0501 16:30:28.201953 6 log.go:172] (0xc00045def0) (0xc0022745a0) Create stream I0501 16:30:28.201966 6 log.go:172] (0xc00045def0) (0xc0022745a0) Stream added, broadcasting: 5 I0501 16:30:28.202623 6 log.go:172] (0xc00045def0) Reply frame received for 5 I0501 16:30:28.261711 6 log.go:172] (0xc00045def0) Data frame received for 3 I0501 16:30:28.261736 6 log.go:172] (0xc001c80780) (3) Data frame handling I0501 16:30:28.261752 6 log.go:172] (0xc001c80780) (3) Data frame sent I0501 16:30:28.262525 6 log.go:172] (0xc00045def0) Data frame received for 5 I0501 16:30:28.262547 6 log.go:172] (0xc0022745a0) (5) Data frame handling I0501 16:30:28.262828 6 log.go:172] (0xc00045def0) Data frame received for 3 I0501 
16:30:28.262844 6 log.go:172] (0xc001c80780) (3) Data frame handling I0501 16:30:28.264113 6 log.go:172] (0xc00045def0) Data frame received for 1 I0501 16:30:28.264146 6 log.go:172] (0xc001c806e0) (1) Data frame handling I0501 16:30:28.264169 6 log.go:172] (0xc001c806e0) (1) Data frame sent I0501 16:30:28.264181 6 log.go:172] (0xc00045def0) (0xc001c806e0) Stream removed, broadcasting: 1 I0501 16:30:28.264194 6 log.go:172] (0xc00045def0) Go away received I0501 16:30:28.264328 6 log.go:172] (0xc00045def0) (0xc001c806e0) Stream removed, broadcasting: 1 I0501 16:30:28.264352 6 log.go:172] (0xc00045def0) (0xc001c80780) Stream removed, broadcasting: 3 I0501 16:30:28.264366 6 log.go:172] (0xc00045def0) (0xc0022745a0) Stream removed, broadcasting: 5 May 1 16:30:28.264: INFO: Waiting for endpoints: map[] May 1 16:30:28.267: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.117:8080/dial?request=hostName&protocol=udp&host=10.244.1.106&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-h2bpr PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 16:30:28.267: INFO: >>> kubeConfig: /root/.kube/config I0501 16:30:28.292833 6 log.go:172] (0xc000c682c0) (0xc002274a00) Create stream I0501 16:30:28.292855 6 log.go:172] (0xc000c682c0) (0xc002274a00) Stream added, broadcasting: 1 I0501 16:30:28.294231 6 log.go:172] (0xc000c682c0) Reply frame received for 1 I0501 16:30:28.294253 6 log.go:172] (0xc000c682c0) (0xc001f2b540) Create stream I0501 16:30:28.294262 6 log.go:172] (0xc000c682c0) (0xc001f2b540) Stream added, broadcasting: 3 I0501 16:30:28.294952 6 log.go:172] (0xc000c682c0) Reply frame received for 3 I0501 16:30:28.294980 6 log.go:172] (0xc000c682c0) (0xc002274aa0) Create stream I0501 16:30:28.294989 6 log.go:172] (0xc000c682c0) (0xc002274aa0) Stream added, broadcasting: 5 I0501 16:30:28.295690 6 log.go:172] (0xc000c682c0) Reply frame received for 5 I0501 16:30:28.353933 6 log.go:172] (0xc000c682c0) Data frame received for 3 I0501 16:30:28.353952 6 log.go:172] (0xc001f2b540) (3) Data frame handling I0501 16:30:28.353963 6 log.go:172] (0xc001f2b540) (3) Data frame sent I0501 16:30:28.354687 6 log.go:172] (0xc000c682c0) Data frame received for 5 I0501 16:30:28.354715 6 log.go:172] (0xc002274aa0) (5) Data frame handling I0501 16:30:28.354810 6 log.go:172] (0xc000c682c0) Data frame received for 3 I0501 16:30:28.354823 6 log.go:172] (0xc001f2b540) (3) Data frame handling I0501 16:30:28.356100 6 log.go:172] (0xc000c682c0) Data frame received for 1 I0501 16:30:28.356111 6 log.go:172] (0xc002274a00) (1) Data frame handling I0501 16:30:28.356131 6 log.go:172] (0xc002274a00) (1) Data frame sent I0501 16:30:28.356149 6 log.go:172] (0xc000c682c0) (0xc002274a00) Stream removed, broadcasting: 1 I0501 16:30:28.356207 6 log.go:172] (0xc000c682c0) (0xc002274a00) Stream removed, broadcasting: 1 I0501 16:30:28.356224 6 log.go:172] (0xc000c682c0) (0xc001f2b540) Stream removed, broadcasting: 3 I0501 16:30:28.356381 6 log.go:172] (0xc000c682c0) (0xc002274aa0) Stream removed, broadcasting: 5 May 1 16:30:28.356: INFO: Waiting for endpoints: map[] I0501 16:30:28.356602 6 log.go:172] (0xc000c682c0) Go away received [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:30:28.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-h2bpr" for this suite. 
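The ExecWithOptions entries above run curl from a helper pod against the test agent's /dial endpoint on one pod, which then sends a UDP hostName request to the other pod and reports which hosts answered; the empty "Waiting for endpoints: map[]" indicates no expected responder is left outstanding. The same request issued directly from Go (standard library only; the IPs and ports are the ones shown in the log):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Same query the log shows the hostexec container issuing with curl:
	// ask the agent on 10.244.2.117 to dial 10.244.2.116:8081 over UDP once
	// and report the hostName it got back.
	url := "http://10.244.2.117:8080/dial?request=hostName&protocol=udp&host=10.244.2.116&port=8081&tries=1"

	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // typically a small JSON list of responding hostnames
}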
May 1 16:30:54.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:30:54.496: INFO: namespace: e2e-tests-pod-network-test-h2bpr, resource: bindings, ignored listing per whitelist May 1 16:30:54.526: INFO: namespace e2e-tests-pod-network-test-h2bpr deletion completed in 26.079284315s • [SLOW TEST:60.473 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:30:54.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 16:31:01.586: INFO: Waiting up to 5m0s for pod "client-envvars-2516bd75-8bc9-11ea-acf7-0242ac110017" in namespace "e2e-tests-pods-wxlvn" to be "success or failure" May 1 16:31:01.740: INFO: Pod "client-envvars-2516bd75-8bc9-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 154.055959ms May 1 16:31:03.743: INFO: Pod "client-envvars-2516bd75-8bc9-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157645176s May 1 16:31:05.748: INFO: Pod "client-envvars-2516bd75-8bc9-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161927696s May 1 16:31:07.751: INFO: Pod "client-envvars-2516bd75-8bc9-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.164934403s STEP: Saw pod success May 1 16:31:07.751: INFO: Pod "client-envvars-2516bd75-8bc9-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:31:07.753: INFO: Trying to get logs from node hunter-worker pod client-envvars-2516bd75-8bc9-11ea-acf7-0242ac110017 container env3cont: STEP: delete the pod May 1 16:31:07.809: INFO: Waiting for pod client-envvars-2516bd75-8bc9-11ea-acf7-0242ac110017 to disappear May 1 16:31:07.833: INFO: Pod client-envvars-2516bd75-8bc9-11ea-acf7-0242ac110017 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:31:07.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-wxlvn" for this suite. 
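The "environment variables for services" case depends on the kubelet injecting <SERVICE>_SERVICE_HOST and <SERVICE>_SERVICE_PORT (plus Docker-link-style variables) into containers for services that already existed when the pod started; the client-envvars pod above simply prints its environment and the framework checks the expected names in its log. A trivial in-container check of the same variables (the service name fooservice is hypothetical):

package main

import (
	"fmt"
	"os"
)

func main() {
	// For a Service named "fooservice" (hypothetical) created before this pod,
	// the kubelet is expected to have set variables like these:
	for _, key := range []string{
		"FOOSERVICE_SERVICE_HOST",
		"FOOSERVICE_SERVICE_PORT",
		"KUBERNETES_SERVICE_HOST", // always present, pointing at the apiserver service
	} {
		fmt.Printf("%s=%q\n", key, os.Getenv(key))
	}
}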
May 1 16:31:50.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:31:50.268: INFO: namespace: e2e-tests-pods-wxlvn, resource: bindings, ignored listing per whitelist May 1 16:31:50.312: INFO: namespace e2e-tests-pods-wxlvn deletion completed in 42.475227124s • [SLOW TEST:55.785 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:31:50.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-427a1f17-8bc9-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume configMaps May 1 16:31:50.974: INFO: Waiting up to 5m0s for pod "pod-configmaps-427d20f0-8bc9-11ea-acf7-0242ac110017" in namespace "e2e-tests-configmap-n9htw" to be "success or failure" May 1 16:31:51.013: INFO: Pod "pod-configmaps-427d20f0-8bc9-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 38.419923ms May 1 16:31:53.017: INFO: Pod "pod-configmaps-427d20f0-8bc9-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042802624s May 1 16:31:55.022: INFO: Pod "pod-configmaps-427d20f0-8bc9-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047386067s May 1 16:31:57.026: INFO: Pod "pod-configmaps-427d20f0-8bc9-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.051115555s STEP: Saw pod success May 1 16:31:57.026: INFO: Pod "pod-configmaps-427d20f0-8bc9-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:31:57.028: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-427d20f0-8bc9-11ea-acf7-0242ac110017 container configmap-volume-test: STEP: delete the pod May 1 16:31:57.138: INFO: Waiting for pod pod-configmaps-427d20f0-8bc9-11ea-acf7-0242ac110017 to disappear May 1 16:31:57.207: INFO: Pod pod-configmaps-427d20f0-8bc9-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:31:57.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-n9htw" for this suite. 
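The "mappings as non-root" ConfigMap case projects the ConfigMap into the volume through an explicit items list, renaming a key to a nested path, and runs the pod with a non-root UID before the container reads the file back. A sketch of the relevant parts of such a pod spec using the core/v1 API types (the key, path, image, command, and UID are illustrative rather than the exact values the test generates; the ConfigMap and container names are taken from the log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	uid := int64(1000) // any non-root UID; the value here is illustrative

	spec := corev1.PodSpec{
		SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
		Containers: []corev1.Container{{
			Name:    "configmap-volume-test",
			Image:   "busybox",
			Command: []string{"cat", "/etc/configmap-volume/path/to/data-2"},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "configmap-volume",
				MountPath: "/etc/configmap-volume",
			}},
		}},
		Volumes: []corev1.Volume{{
			Name: "configmap-volume",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{
						Name: "configmap-test-volume-map-427a1f17-8bc9-11ea-acf7-0242ac110017",
					},
					// The mapping: only this key is projected, renamed to a nested path.
					Items: []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2"}},
				},
			},
		}},
	}
	fmt.Printf("%+v\n", spec)
}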
May 1 16:32:03.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:32:03.277: INFO: namespace: e2e-tests-configmap-n9htw, resource: bindings, ignored listing per whitelist May 1 16:32:03.290: INFO: namespace e2e-tests-configmap-n9htw deletion completed in 6.079730583s • [SLOW TEST:12.978 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:32:03.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-4a15f6ed-8bc9-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume configMaps May 1 16:32:03.700: INFO: Waiting up to 5m0s for pod "pod-configmaps-4a1d8236-8bc9-11ea-acf7-0242ac110017" in namespace "e2e-tests-configmap-sqx9s" to be "success or failure" May 1 16:32:03.746: INFO: Pod "pod-configmaps-4a1d8236-8bc9-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 45.51198ms May 1 16:32:05.750: INFO: Pod "pod-configmaps-4a1d8236-8bc9-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049515841s May 1 16:32:08.155: INFO: Pod "pod-configmaps-4a1d8236-8bc9-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.454392079s May 1 16:32:10.158: INFO: Pod "pod-configmaps-4a1d8236-8bc9-11ea-acf7-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 6.457912787s May 1 16:32:12.161: INFO: Pod "pod-configmaps-4a1d8236-8bc9-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.461086607s STEP: Saw pod success May 1 16:32:12.161: INFO: Pod "pod-configmaps-4a1d8236-8bc9-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:32:12.163: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-4a1d8236-8bc9-11ea-acf7-0242ac110017 container configmap-volume-test: STEP: delete the pod May 1 16:32:12.221: INFO: Waiting for pod pod-configmaps-4a1d8236-8bc9-11ea-acf7-0242ac110017 to disappear May 1 16:32:12.298: INFO: Pod pod-configmaps-4a1d8236-8bc9-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:32:12.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-sqx9s" for this suite. 
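Every one of these volume cases finishes with the same pattern visible throughout the log: poll the test pod until its phase reaches Succeeded or Failed (the 'to be "success or failure"' condition), then fetch the container log for verification. A condensed sketch of that wait (modern client-go signatures; the e2e framework's own helper is more involved):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForSuccessOrFailure(cs kubernetes.Interface, ns, name string, timeout time.Duration) (corev1.PodPhase, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return "", err
		}
		// Same condition the log prints as `to be "success or failure"`.
		if pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed {
			return pod.Status.Phase, nil
		}
		time.Sleep(2 * time.Second)
	}
	return "", fmt.Errorf("pod %s/%s did not finish within %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	phase, err := waitForSuccessOrFailure(cs,
		"e2e-tests-configmap-sqx9s",
		"pod-configmaps-4a1d8236-8bc9-11ea-acf7-0242ac110017",
		5*time.Minute)
	fmt.Println(phase, err)
}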
May 1 16:32:20.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:32:20.506: INFO: namespace: e2e-tests-configmap-sqx9s, resource: bindings, ignored listing per whitelist May 1 16:32:20.639: INFO: namespace e2e-tests-configmap-sqx9s deletion completed in 8.338195516s • [SLOW TEST:17.349 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:32:20.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-cdzl STEP: Creating a pod to test atomic-volume-subpath May 1 16:32:21.023: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-cdzl" in namespace "e2e-tests-subpath-6zkb6" to be "success or failure" May 1 16:32:21.136: INFO: Pod "pod-subpath-test-secret-cdzl": Phase="Pending", Reason="", readiness=false. Elapsed: 113.615591ms May 1 16:32:23.275: INFO: Pod "pod-subpath-test-secret-cdzl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.252035108s May 1 16:32:25.278: INFO: Pod "pod-subpath-test-secret-cdzl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.255398636s May 1 16:32:27.423: INFO: Pod "pod-subpath-test-secret-cdzl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.400791415s May 1 16:32:29.427: INFO: Pod "pod-subpath-test-secret-cdzl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.404636627s May 1 16:32:31.431: INFO: Pod "pod-subpath-test-secret-cdzl": Phase="Running", Reason="", readiness=true. Elapsed: 10.408682992s May 1 16:32:33.434: INFO: Pod "pod-subpath-test-secret-cdzl": Phase="Running", Reason="", readiness=false. Elapsed: 12.411628471s May 1 16:32:35.598: INFO: Pod "pod-subpath-test-secret-cdzl": Phase="Running", Reason="", readiness=false. Elapsed: 14.575758814s May 1 16:32:37.602: INFO: Pod "pod-subpath-test-secret-cdzl": Phase="Running", Reason="", readiness=false. Elapsed: 16.579171225s May 1 16:32:39.606: INFO: Pod "pod-subpath-test-secret-cdzl": Phase="Running", Reason="", readiness=false. Elapsed: 18.582894224s May 1 16:32:41.608: INFO: Pod "pod-subpath-test-secret-cdzl": Phase="Running", Reason="", readiness=false. Elapsed: 20.585732836s May 1 16:32:43.612: INFO: Pod "pod-subpath-test-secret-cdzl": Phase="Running", Reason="", readiness=false. Elapsed: 22.589746636s May 1 16:32:45.694: INFO: Pod "pod-subpath-test-secret-cdzl": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.671160653s May 1 16:32:47.698: INFO: Pod "pod-subpath-test-secret-cdzl": Phase="Running", Reason="", readiness=false. Elapsed: 26.67516091s May 1 16:32:49.701: INFO: Pod "pod-subpath-test-secret-cdzl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.678453646s STEP: Saw pod success May 1 16:32:49.701: INFO: Pod "pod-subpath-test-secret-cdzl" satisfied condition "success or failure" May 1 16:32:49.703: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-secret-cdzl container test-container-subpath-secret-cdzl: STEP: delete the pod May 1 16:32:49.883: INFO: Waiting for pod pod-subpath-test-secret-cdzl to disappear May 1 16:32:50.033: INFO: Pod pod-subpath-test-secret-cdzl no longer exists STEP: Deleting pod pod-subpath-test-secret-cdzl May 1 16:32:50.033: INFO: Deleting pod "pod-subpath-test-secret-cdzl" in namespace "e2e-tests-subpath-6zkb6" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:32:50.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-6zkb6" for this suite. May 1 16:32:56.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:32:56.114: INFO: namespace: e2e-tests-subpath-6zkb6, resource: bindings, ignored listing per whitelist May 1 16:32:56.148: INFO: namespace e2e-tests-subpath-6zkb6 deletion completed in 6.109243105s • [SLOW TEST:35.509 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:32:56.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-697d9e55-8bc9-11ea-acf7-0242ac110017 STEP: Creating secret with name s-test-opt-upd-697d9ec4-8bc9-11ea-acf7-0242ac110017 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-697d9e55-8bc9-11ea-acf7-0242ac110017 STEP: Updating secret s-test-opt-upd-697d9ec4-8bc9-11ea-acf7-0242ac110017 STEP: Creating secret with name s-test-opt-create-697d9eeb-8bc9-11ea-acf7-0242ac110017 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:34:23.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-75d74" for this suite. 
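The "optional updates" secret case mounts volume sources marked optional, so the pod can start even though one referenced secret (the s-test-opt-create one above) does not exist yet; the test then deletes, updates, and creates secrets and waits for the kubelet to refresh the mounted files, which is why this case runs for well over a minute. A sketch of an optional secret volume source using the core/v1 types (the volume name is illustrative; the secret name is from the log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true

	// Marking the source optional lets the pod run before the secret exists,
	// which is what allows the "opt-create" secret above to appear later and
	// still show up in the mounted volume.
	vol := corev1.Volume{
		Name: "creates-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "s-test-opt-create-697d9eeb-8bc9-11ea-acf7-0242ac110017",
				Optional:   &optional,
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}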
May 1 16:34:45.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:34:45.335: INFO: namespace: e2e-tests-secrets-75d74, resource: bindings, ignored listing per whitelist May 1 16:34:45.378: INFO: namespace e2e-tests-secrets-75d74 deletion completed in 22.095062394s • [SLOW TEST:109.230 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:34:45.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs May 1 16:34:45.530: INFO: Waiting up to 5m0s for pod "pod-aa97257a-8bc9-11ea-acf7-0242ac110017" in namespace "e2e-tests-emptydir-vc7hd" to be "success or failure" May 1 16:34:45.558: INFO: Pod "pod-aa97257a-8bc9-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 28.401433ms May 1 16:34:47.563: INFO: Pod "pod-aa97257a-8bc9-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032775455s May 1 16:34:49.567: INFO: Pod "pod-aa97257a-8bc9-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037083523s STEP: Saw pod success May 1 16:34:49.567: INFO: Pod "pod-aa97257a-8bc9-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:34:49.570: INFO: Trying to get logs from node hunter-worker2 pod pod-aa97257a-8bc9-11ea-acf7-0242ac110017 container test-container: STEP: delete the pod May 1 16:34:49.590: INFO: Waiting for pod pod-aa97257a-8bc9-11ea-acf7-0242ac110017 to disappear May 1 16:34:49.594: INFO: Pod pod-aa97257a-8bc9-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:34:49.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-vc7hd" for this suite. 
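The tmpfs emptyDir case sets the volume medium to Memory, which backs the mount with tmpfs rather than node disk; the test container then stats the mount point and prints its mode for the framework to compare. The volume definition amounts to this (core/v1 types; the volume name is illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{
				Medium: corev1.StorageMediumMemory, // tmpfs-backed instead of node disk
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}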
May 1 16:34:55.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:34:55.659: INFO: namespace: e2e-tests-emptydir-vc7hd, resource: bindings, ignored listing per whitelist May 1 16:34:55.683: INFO: namespace e2e-tests-emptydir-vc7hd deletion completed in 6.084556938s • [SLOW TEST:10.305 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:34:55.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 1 16:34:55.831: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-2dn7v,SelfLink:/api/v1/namespaces/e2e-tests-watch-2dn7v/configmaps/e2e-watch-test-watch-closed,UID:b0ba15f2-8bc9-11ea-99e8-0242ac110002,ResourceVersion:8206560,Generation:0,CreationTimestamp:2020-05-01 16:34:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 1 16:34:55.831: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-2dn7v,SelfLink:/api/v1/namespaces/e2e-tests-watch-2dn7v/configmaps/e2e-watch-test-watch-closed,UID:b0ba15f2-8bc9-11ea-99e8-0242ac110002,ResourceVersion:8206561,Generation:0,CreationTimestamp:2020-05-01 16:34:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 1 16:34:55.872: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-2dn7v,SelfLink:/api/v1/namespaces/e2e-tests-watch-2dn7v/configmaps/e2e-watch-test-watch-closed,UID:b0ba15f2-8bc9-11ea-99e8-0242ac110002,ResourceVersion:8206562,Generation:0,CreationTimestamp:2020-05-01 16:34:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 1 16:34:55.872: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-2dn7v,SelfLink:/api/v1/namespaces/e2e-tests-watch-2dn7v/configmaps/e2e-watch-test-watch-closed,UID:b0ba15f2-8bc9-11ea-99e8-0242ac110002,ResourceVersion:8206563,Generation:0,CreationTimestamp:2020-05-01 16:34:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:34:55.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-2dn7v" for this suite. May 1 16:35:01.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:35:02.236: INFO: namespace: e2e-tests-watch-2dn7v, resource: bindings, ignored listing per whitelist May 1 16:35:02.251: INFO: namespace e2e-tests-watch-2dn7v deletion completed in 6.35218755s • [SLOW TEST:6.568 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:35:02.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-rn7l2 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-rn7l2 STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-rn7l2 May 1 16:35:02.382: INFO: Found 0 stateful pods, waiting for 1 May 1 16:35:12.388: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 1 16:35:12.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rn7l2 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 1 16:35:12.661: INFO: stderr: "I0501 16:35:12.536396 2540 log.go:172] (0xc0006e2370) (0xc000123400) Create stream\nI0501 16:35:12.536457 2540 log.go:172] (0xc0006e2370) (0xc000123400) Stream added, broadcasting: 1\nI0501 16:35:12.538949 2540 log.go:172] (0xc0006e2370) Reply frame received for 1\nI0501 16:35:12.538990 2540 log.go:172] (0xc0006e2370) (0xc0000ce000) Create stream\nI0501 16:35:12.539008 2540 log.go:172] (0xc0006e2370) (0xc0000ce000) Stream added, broadcasting: 3\nI0501 16:35:12.539917 2540 log.go:172] (0xc0006e2370) Reply frame received for 3\nI0501 16:35:12.539956 2540 log.go:172] (0xc0006e2370) (0xc0001234a0) Create stream\nI0501 16:35:12.539977 2540 log.go:172] (0xc0006e2370) (0xc0001234a0) Stream added, broadcasting: 5\nI0501 16:35:12.540872 2540 log.go:172] (0xc0006e2370) Reply frame received for 5\nI0501 16:35:12.654099 2540 log.go:172] (0xc0006e2370) Data frame received for 3\nI0501 16:35:12.654134 2540 log.go:172] (0xc0000ce000) (3) Data frame handling\nI0501 16:35:12.654146 2540 log.go:172] (0xc0000ce000) (3) Data frame sent\nI0501 16:35:12.654275 2540 log.go:172] (0xc0006e2370) Data frame received for 5\nI0501 16:35:12.654406 2540 log.go:172] (0xc0001234a0) (5) Data frame handling\nI0501 16:35:12.654455 2540 log.go:172] (0xc0006e2370) Data frame received for 3\nI0501 16:35:12.654476 2540 log.go:172] (0xc0000ce000) (3) Data frame handling\nI0501 16:35:12.656696 2540 log.go:172] (0xc0006e2370) Data frame received for 1\nI0501 16:35:12.656719 2540 log.go:172] (0xc000123400) (1) Data frame handling\nI0501 16:35:12.656730 2540 log.go:172] (0xc000123400) (1) Data frame sent\nI0501 16:35:12.656743 2540 log.go:172] (0xc0006e2370) (0xc000123400) Stream removed, broadcasting: 1\nI0501 16:35:12.656952 2540 log.go:172] (0xc0006e2370) (0xc000123400) Stream removed, broadcasting: 1\nI0501 16:35:12.656987 2540 log.go:172] (0xc0006e2370) (0xc0000ce000) Stream removed, broadcasting: 3\nI0501 16:35:12.657005 2540 log.go:172] (0xc0006e2370) (0xc0001234a0) Stream removed, broadcasting: 5\nI0501 16:35:12.657035 2540 log.go:172] (0xc0006e2370) Go away received\n" May 1 16:35:12.661: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 1 16:35:12.661: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 1 16:35:12.665: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 1 16:35:22.670: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 1 16:35:22.670: INFO: Waiting for statefulset status.replicas updated to 0 May 1 16:35:22.864: INFO: POD NODE PHASE GRACE CONDITIONS May 1 16:35:22.864: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 
UTC 2020-05-01 16:35:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:02 +0000 UTC }] May 1 16:35:22.864: INFO: May 1 16:35:22.864: INFO: StatefulSet ss has not reached scale 3, at 1 May 1 16:35:23.918: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.816084362s May 1 16:35:24.929: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.762000266s May 1 16:35:26.122: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.750667192s May 1 16:35:27.715: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.558031031s May 1 16:35:28.865: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.964809286s May 1 16:35:29.870: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.814793288s May 1 16:35:30.880: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.8102534s May 1 16:35:32.092: INFO: Verifying statefulset ss doesn't scale past 3 for another 799.713234ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-rn7l2 May 1 16:35:33.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rn7l2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 1 16:35:33.291: INFO: stderr: "I0501 16:35:33.219267 2563 log.go:172] (0xc000138840) (0xc000748640) Create stream\nI0501 16:35:33.219337 2563 log.go:172] (0xc000138840) (0xc000748640) Stream added, broadcasting: 1\nI0501 16:35:33.221775 2563 log.go:172] (0xc000138840) Reply frame received for 1\nI0501 16:35:33.221821 2563 log.go:172] (0xc000138840) (0xc0007b2d20) Create stream\nI0501 16:35:33.221832 2563 log.go:172] (0xc000138840) (0xc0007b2d20) Stream added, broadcasting: 3\nI0501 16:35:33.222583 2563 log.go:172] (0xc000138840) Reply frame received for 3\nI0501 16:35:33.222642 2563 log.go:172] (0xc000138840) (0xc000412000) Create stream\nI0501 16:35:33.222663 2563 log.go:172] (0xc000138840) (0xc000412000) Stream added, broadcasting: 5\nI0501 16:35:33.223516 2563 log.go:172] (0xc000138840) Reply frame received for 5\nI0501 16:35:33.285783 2563 log.go:172] (0xc000138840) Data frame received for 5\nI0501 16:35:33.285847 2563 log.go:172] (0xc000412000) (5) Data frame handling\nI0501 16:35:33.285887 2563 log.go:172] (0xc000138840) Data frame received for 3\nI0501 16:35:33.285923 2563 log.go:172] (0xc0007b2d20) (3) Data frame handling\nI0501 16:35:33.285955 2563 log.go:172] (0xc0007b2d20) (3) Data frame sent\nI0501 16:35:33.285978 2563 log.go:172] (0xc000138840) Data frame received for 3\nI0501 16:35:33.285995 2563 log.go:172] (0xc0007b2d20) (3) Data frame handling\nI0501 16:35:33.287474 2563 log.go:172] (0xc000138840) Data frame received for 1\nI0501 16:35:33.287505 2563 log.go:172] (0xc000748640) (1) Data frame handling\nI0501 16:35:33.287523 2563 log.go:172] (0xc000748640) (1) Data frame sent\nI0501 16:35:33.287549 2563 log.go:172] (0xc000138840) (0xc000748640) Stream removed, broadcasting: 1\nI0501 16:35:33.287581 2563 log.go:172] (0xc000138840) Go away received\nI0501 16:35:33.287857 2563 log.go:172] (0xc000138840) (0xc000748640) Stream removed, broadcasting: 1\nI0501 16:35:33.287883 2563 log.go:172] 
(0xc000138840) (0xc0007b2d20) Stream removed, broadcasting: 3\nI0501 16:35:33.287898 2563 log.go:172] (0xc000138840) (0xc000412000) Stream removed, broadcasting: 5\n" May 1 16:35:33.291: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 1 16:35:33.291: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 1 16:35:33.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rn7l2 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 1 16:35:33.490: INFO: stderr: "I0501 16:35:33.414117 2585 log.go:172] (0xc00014c580) (0xc0005914a0) Create stream\nI0501 16:35:33.414176 2585 log.go:172] (0xc00014c580) (0xc0005914a0) Stream added, broadcasting: 1\nI0501 16:35:33.416258 2585 log.go:172] (0xc00014c580) Reply frame received for 1\nI0501 16:35:33.416301 2585 log.go:172] (0xc00014c580) (0xc000591540) Create stream\nI0501 16:35:33.416318 2585 log.go:172] (0xc00014c580) (0xc000591540) Stream added, broadcasting: 3\nI0501 16:35:33.417353 2585 log.go:172] (0xc00014c580) Reply frame received for 3\nI0501 16:35:33.417391 2585 log.go:172] (0xc00014c580) (0xc0007cca00) Create stream\nI0501 16:35:33.417404 2585 log.go:172] (0xc00014c580) (0xc0007cca00) Stream added, broadcasting: 5\nI0501 16:35:33.418175 2585 log.go:172] (0xc00014c580) Reply frame received for 5\nI0501 16:35:33.485337 2585 log.go:172] (0xc00014c580) Data frame received for 3\nI0501 16:35:33.485540 2585 log.go:172] (0xc00014c580) Data frame received for 5\nI0501 16:35:33.485558 2585 log.go:172] (0xc0007cca00) (5) Data frame handling\nI0501 16:35:33.485565 2585 log.go:172] (0xc0007cca00) (5) Data frame sent\nI0501 16:35:33.485570 2585 log.go:172] (0xc00014c580) Data frame received for 5\nI0501 16:35:33.485574 2585 log.go:172] (0xc0007cca00) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0501 16:35:33.485592 2585 log.go:172] (0xc000591540) (3) Data frame handling\nI0501 16:35:33.485599 2585 log.go:172] (0xc000591540) (3) Data frame sent\nI0501 16:35:33.485605 2585 log.go:172] (0xc00014c580) Data frame received for 3\nI0501 16:35:33.485609 2585 log.go:172] (0xc000591540) (3) Data frame handling\nI0501 16:35:33.486910 2585 log.go:172] (0xc00014c580) Data frame received for 1\nI0501 16:35:33.486926 2585 log.go:172] (0xc0005914a0) (1) Data frame handling\nI0501 16:35:33.486938 2585 log.go:172] (0xc0005914a0) (1) Data frame sent\nI0501 16:35:33.486950 2585 log.go:172] (0xc00014c580) (0xc0005914a0) Stream removed, broadcasting: 1\nI0501 16:35:33.486966 2585 log.go:172] (0xc00014c580) Go away received\nI0501 16:35:33.487136 2585 log.go:172] (0xc00014c580) (0xc0005914a0) Stream removed, broadcasting: 1\nI0501 16:35:33.487153 2585 log.go:172] (0xc00014c580) (0xc000591540) Stream removed, broadcasting: 3\nI0501 16:35:33.487168 2585 log.go:172] (0xc00014c580) (0xc0007cca00) Stream removed, broadcasting: 5\n" May 1 16:35:33.490: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 1 16:35:33.490: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 1 16:35:33.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rn7l2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 1 16:35:33.709: INFO: stderr: "I0501 16:35:33.622600 2607 
log.go:172] (0xc0008160b0) (0xc0006c2000) Create stream\nI0501 16:35:33.622671 2607 log.go:172] (0xc0008160b0) (0xc0006c2000) Stream added, broadcasting: 1\nI0501 16:35:33.630957 2607 log.go:172] (0xc0008160b0) Reply frame received for 1\nI0501 16:35:33.631028 2607 log.go:172] (0xc0008160b0) (0xc0000ecbe0) Create stream\nI0501 16:35:33.631044 2607 log.go:172] (0xc0008160b0) (0xc0000ecbe0) Stream added, broadcasting: 3\nI0501 16:35:33.633671 2607 log.go:172] (0xc0008160b0) Reply frame received for 3\nI0501 16:35:33.633776 2607 log.go:172] (0xc0008160b0) (0xc000820000) Create stream\nI0501 16:35:33.633832 2607 log.go:172] (0xc0008160b0) (0xc000820000) Stream added, broadcasting: 5\nI0501 16:35:33.635521 2607 log.go:172] (0xc0008160b0) Reply frame received for 5\nI0501 16:35:33.703061 2607 log.go:172] (0xc0008160b0) Data frame received for 5\nI0501 16:35:33.703095 2607 log.go:172] (0xc000820000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0501 16:35:33.703124 2607 log.go:172] (0xc0008160b0) Data frame received for 3\nI0501 16:35:33.703160 2607 log.go:172] (0xc0000ecbe0) (3) Data frame handling\nI0501 16:35:33.703179 2607 log.go:172] (0xc0000ecbe0) (3) Data frame sent\nI0501 16:35:33.703203 2607 log.go:172] (0xc0008160b0) Data frame received for 3\nI0501 16:35:33.703216 2607 log.go:172] (0xc0000ecbe0) (3) Data frame handling\nI0501 16:35:33.703248 2607 log.go:172] (0xc000820000) (5) Data frame sent\nI0501 16:35:33.703259 2607 log.go:172] (0xc0008160b0) Data frame received for 5\nI0501 16:35:33.703273 2607 log.go:172] (0xc000820000) (5) Data frame handling\nI0501 16:35:33.704843 2607 log.go:172] (0xc0008160b0) Data frame received for 1\nI0501 16:35:33.704866 2607 log.go:172] (0xc0006c2000) (1) Data frame handling\nI0501 16:35:33.704896 2607 log.go:172] (0xc0006c2000) (1) Data frame sent\nI0501 16:35:33.704974 2607 log.go:172] (0xc0008160b0) (0xc0006c2000) Stream removed, broadcasting: 1\nI0501 16:35:33.705014 2607 log.go:172] (0xc0008160b0) Go away received\nI0501 16:35:33.705347 2607 log.go:172] (0xc0008160b0) (0xc0006c2000) Stream removed, broadcasting: 1\nI0501 16:35:33.705362 2607 log.go:172] (0xc0008160b0) (0xc0000ecbe0) Stream removed, broadcasting: 3\nI0501 16:35:33.705368 2607 log.go:172] (0xc0008160b0) (0xc000820000) Stream removed, broadcasting: 5\n" May 1 16:35:33.709: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 1 16:35:33.709: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 1 16:35:33.714: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 1 16:35:43.718: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 1 16:35:43.718: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 1 16:35:43.718: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 1 16:35:43.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rn7l2 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 1 16:35:43.897: INFO: stderr: "I0501 16:35:43.827950 2630 log.go:172] (0xc00076c160) (0xc0006e2000) Create stream\nI0501 16:35:43.827990 2630 log.go:172] (0xc00076c160) (0xc0006e2000) Stream added, broadcasting: 1\nI0501 16:35:43.830254 2630 log.go:172] 
(0xc00076c160) Reply frame received for 1\nI0501 16:35:43.830300 2630 log.go:172] (0xc00076c160) (0xc000304d20) Create stream\nI0501 16:35:43.830315 2630 log.go:172] (0xc00076c160) (0xc000304d20) Stream added, broadcasting: 3\nI0501 16:35:43.831494 2630 log.go:172] (0xc00076c160) Reply frame received for 3\nI0501 16:35:43.831515 2630 log.go:172] (0xc00076c160) (0xc000304e60) Create stream\nI0501 16:35:43.831525 2630 log.go:172] (0xc00076c160) (0xc000304e60) Stream added, broadcasting: 5\nI0501 16:35:43.832524 2630 log.go:172] (0xc00076c160) Reply frame received for 5\nI0501 16:35:43.890492 2630 log.go:172] (0xc00076c160) Data frame received for 5\nI0501 16:35:43.890524 2630 log.go:172] (0xc000304e60) (5) Data frame handling\nI0501 16:35:43.890557 2630 log.go:172] (0xc00076c160) Data frame received for 3\nI0501 16:35:43.890566 2630 log.go:172] (0xc000304d20) (3) Data frame handling\nI0501 16:35:43.890588 2630 log.go:172] (0xc000304d20) (3) Data frame sent\nI0501 16:35:43.890601 2630 log.go:172] (0xc00076c160) Data frame received for 3\nI0501 16:35:43.890612 2630 log.go:172] (0xc000304d20) (3) Data frame handling\nI0501 16:35:43.892098 2630 log.go:172] (0xc00076c160) Data frame received for 1\nI0501 16:35:43.892171 2630 log.go:172] (0xc0006e2000) (1) Data frame handling\nI0501 16:35:43.892196 2630 log.go:172] (0xc0006e2000) (1) Data frame sent\nI0501 16:35:43.892218 2630 log.go:172] (0xc00076c160) (0xc0006e2000) Stream removed, broadcasting: 1\nI0501 16:35:43.892306 2630 log.go:172] (0xc00076c160) Go away received\nI0501 16:35:43.892637 2630 log.go:172] (0xc00076c160) (0xc0006e2000) Stream removed, broadcasting: 1\nI0501 16:35:43.892673 2630 log.go:172] (0xc00076c160) (0xc000304d20) Stream removed, broadcasting: 3\nI0501 16:35:43.892693 2630 log.go:172] (0xc00076c160) (0xc000304e60) Stream removed, broadcasting: 5\n" May 1 16:35:43.897: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 1 16:35:43.897: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 1 16:35:43.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rn7l2 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 1 16:35:44.456: INFO: stderr: "I0501 16:35:44.328840 2651 log.go:172] (0xc0008622c0) (0xc00074c640) Create stream\nI0501 16:35:44.328949 2651 log.go:172] (0xc0008622c0) (0xc00074c640) Stream added, broadcasting: 1\nI0501 16:35:44.331942 2651 log.go:172] (0xc0008622c0) Reply frame received for 1\nI0501 16:35:44.331991 2651 log.go:172] (0xc0008622c0) (0xc0005debe0) Create stream\nI0501 16:35:44.332002 2651 log.go:172] (0xc0008622c0) (0xc0005debe0) Stream added, broadcasting: 3\nI0501 16:35:44.332937 2651 log.go:172] (0xc0008622c0) Reply frame received for 3\nI0501 16:35:44.333004 2651 log.go:172] (0xc0008622c0) (0xc0006ce000) Create stream\nI0501 16:35:44.333024 2651 log.go:172] (0xc0008622c0) (0xc0006ce000) Stream added, broadcasting: 5\nI0501 16:35:44.335458 2651 log.go:172] (0xc0008622c0) Reply frame received for 5\nI0501 16:35:44.450712 2651 log.go:172] (0xc0008622c0) Data frame received for 5\nI0501 16:35:44.450744 2651 log.go:172] (0xc0006ce000) (5) Data frame handling\nI0501 16:35:44.450769 2651 log.go:172] (0xc0008622c0) Data frame received for 3\nI0501 16:35:44.450788 2651 log.go:172] (0xc0005debe0) (3) Data frame handling\nI0501 16:35:44.450801 2651 log.go:172] (0xc0005debe0) (3) Data frame sent\nI0501 16:35:44.450809 
2651 log.go:172] (0xc0008622c0) Data frame received for 3\nI0501 16:35:44.450815 2651 log.go:172] (0xc0005debe0) (3) Data frame handling\nI0501 16:35:44.452582 2651 log.go:172] (0xc0008622c0) Data frame received for 1\nI0501 16:35:44.452617 2651 log.go:172] (0xc00074c640) (1) Data frame handling\nI0501 16:35:44.452642 2651 log.go:172] (0xc00074c640) (1) Data frame sent\nI0501 16:35:44.452663 2651 log.go:172] (0xc0008622c0) (0xc00074c640) Stream removed, broadcasting: 1\nI0501 16:35:44.452689 2651 log.go:172] (0xc0008622c0) Go away received\nI0501 16:35:44.452904 2651 log.go:172] (0xc0008622c0) (0xc00074c640) Stream removed, broadcasting: 1\nI0501 16:35:44.452927 2651 log.go:172] (0xc0008622c0) (0xc0005debe0) Stream removed, broadcasting: 3\nI0501 16:35:44.452944 2651 log.go:172] (0xc0008622c0) (0xc0006ce000) Stream removed, broadcasting: 5\n" May 1 16:35:44.457: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 1 16:35:44.457: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 1 16:35:44.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rn7l2 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 1 16:35:44.964: INFO: stderr: "I0501 16:35:44.589049 2674 log.go:172] (0xc0006ca0b0) (0xc0002a7360) Create stream\nI0501 16:35:44.589332 2674 log.go:172] (0xc0006ca0b0) (0xc0002a7360) Stream added, broadcasting: 1\nI0501 16:35:44.591685 2674 log.go:172] (0xc0006ca0b0) Reply frame received for 1\nI0501 16:35:44.591726 2674 log.go:172] (0xc0006ca0b0) (0xc0002a7400) Create stream\nI0501 16:35:44.591737 2674 log.go:172] (0xc0006ca0b0) (0xc0002a7400) Stream added, broadcasting: 3\nI0501 16:35:44.592712 2674 log.go:172] (0xc0006ca0b0) Reply frame received for 3\nI0501 16:35:44.592748 2674 log.go:172] (0xc0006ca0b0) (0xc0002a74a0) Create stream\nI0501 16:35:44.592761 2674 log.go:172] (0xc0006ca0b0) (0xc0002a74a0) Stream added, broadcasting: 5\nI0501 16:35:44.593977 2674 log.go:172] (0xc0006ca0b0) Reply frame received for 5\nI0501 16:35:44.957644 2674 log.go:172] (0xc0006ca0b0) Data frame received for 3\nI0501 16:35:44.957680 2674 log.go:172] (0xc0002a7400) (3) Data frame handling\nI0501 16:35:44.957697 2674 log.go:172] (0xc0002a7400) (3) Data frame sent\nI0501 16:35:44.957708 2674 log.go:172] (0xc0006ca0b0) Data frame received for 3\nI0501 16:35:44.957714 2674 log.go:172] (0xc0002a7400) (3) Data frame handling\nI0501 16:35:44.957737 2674 log.go:172] (0xc0006ca0b0) Data frame received for 5\nI0501 16:35:44.957751 2674 log.go:172] (0xc0002a74a0) (5) Data frame handling\nI0501 16:35:44.959872 2674 log.go:172] (0xc0006ca0b0) Data frame received for 1\nI0501 16:35:44.959898 2674 log.go:172] (0xc0002a7360) (1) Data frame handling\nI0501 16:35:44.959915 2674 log.go:172] (0xc0002a7360) (1) Data frame sent\nI0501 16:35:44.959929 2674 log.go:172] (0xc0006ca0b0) (0xc0002a7360) Stream removed, broadcasting: 1\nI0501 16:35:44.959947 2674 log.go:172] (0xc0006ca0b0) Go away received\nI0501 16:35:44.960126 2674 log.go:172] (0xc0006ca0b0) (0xc0002a7360) Stream removed, broadcasting: 1\nI0501 16:35:44.960143 2674 log.go:172] (0xc0006ca0b0) (0xc0002a7400) Stream removed, broadcasting: 3\nI0501 16:35:44.960149 2674 log.go:172] (0xc0006ca0b0) (0xc0002a74a0) Stream removed, broadcasting: 5\n" May 1 16:35:44.964: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 1 16:35:44.964: INFO: stdout of mv -v 
/usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 1 16:35:44.964: INFO: Waiting for statefulset status.replicas updated to 0 May 1 16:35:45.074: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 May 1 16:35:55.082: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 1 16:35:55.082: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 1 16:35:55.082: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 1 16:35:55.114: INFO: POD NODE PHASE GRACE CONDITIONS May 1 16:35:55.114: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:02 +0000 UTC }] May 1 16:35:55.114: INFO: ss-1 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:22 +0000 UTC }] May 1 16:35:55.114: INFO: ss-2 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:22 +0000 UTC }] May 1 16:35:55.114: INFO: May 1 16:35:55.114: INFO: StatefulSet ss has not reached scale 0, at 3 May 1 16:35:56.212: INFO: POD NODE PHASE GRACE CONDITIONS May 1 16:35:56.212: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:02 +0000 UTC }] May 1 16:35:56.212: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:22 +0000 UTC }] May 1 16:35:56.212: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:45 +0000 UTC ContainersNotReady containers with 
unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:22 +0000 UTC }] May 1 16:35:56.212: INFO: May 1 16:35:56.212: INFO: StatefulSet ss has not reached scale 0, at 3 May 1 16:35:57.290: INFO: POD NODE PHASE GRACE CONDITIONS May 1 16:35:57.290: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:02 +0000 UTC }] May 1 16:35:57.290: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:22 +0000 UTC }] May 1 16:35:57.291: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:22 +0000 UTC }] May 1 16:35:57.291: INFO: May 1 16:35:57.291: INFO: StatefulSet ss has not reached scale 0, at 3 May 1 16:35:58.295: INFO: POD NODE PHASE GRACE CONDITIONS May 1 16:35:58.295: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:02 +0000 UTC }] May 1 16:35:58.295: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:22 +0000 UTC }] May 1 16:35:58.295: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:22 +0000 UTC }] May 1 
16:35:58.295: INFO: May 1 16:35:58.295: INFO: StatefulSet ss has not reached scale 0, at 3 May 1 16:35:59.300: INFO: POD NODE PHASE GRACE CONDITIONS May 1 16:35:59.300: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:02 +0000 UTC }] May 1 16:35:59.300: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:22 +0000 UTC }] May 1 16:35:59.300: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:22 +0000 UTC }] May 1 16:35:59.300: INFO: May 1 16:35:59.300: INFO: StatefulSet ss has not reached scale 0, at 3 May 1 16:36:00.306: INFO: POD NODE PHASE GRACE CONDITIONS May 1 16:36:00.306: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:02 +0000 UTC }] May 1 16:36:00.306: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:22 +0000 UTC }] May 1 16:36:00.306: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:22 +0000 UTC }] May 1 16:36:00.306: INFO: May 1 16:36:00.306: INFO: StatefulSet ss has not reached scale 0, at 3 May 1 16:36:01.661: INFO: POD NODE PHASE GRACE CONDITIONS May 1 16:36:01.662: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 
16:35:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:02 +0000 UTC }] May 1 16:36:01.662: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:22 +0000 UTC }] May 1 16:36:01.662: INFO: May 1 16:36:01.662: INFO: StatefulSet ss has not reached scale 0, at 2 May 1 16:36:02.665: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.42552179s May 1 16:36:03.671: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.419744545s May 1 16:36:04.774: INFO: Verifying statefulset ss doesn't scale past 0 for another 416.154602ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace e2e-tests-statefulset-rn7l2 May 1 16:36:05.778: INFO: Scaling statefulset ss to 0 May 1 16:36:05.787: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 1 16:36:05.789: INFO: Deleting all statefulset in ns e2e-tests-statefulset-rn7l2 May 1 16:36:05.791: INFO: Scaling statefulset ss to 0 May 1 16:36:05.799: INFO: Waiting for statefulset status.replicas updated to 0 May 1 16:36:05.801: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:36:06.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-rn7l2" for this suite.
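For orientation, the sequence the framework drove above against StatefulSet ss can be approximated by hand with kubectl. This is only a sketch: it assumes the nginx-based StatefulSet ss with parallel pod management already exists, and the namespace name below is a placeholder (the suite used a generated one).

NS=statefulset-demo   # placeholder namespace
# Break readiness of ss-0 the same way the test does, by hiding the nginx index page:
kubectl exec -n "$NS" ss-0 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
# Burst scaling must still create ss-1 and ss-2 even though ss-0 is unready:
kubectl scale statefulset ss -n "$NS" --replicas=3
# Restore readiness on every replica, then break it again everywhere and scale to zero;
# the scale-down must not be blocked by the unready pods either:
for p in ss-0 ss-1 ss-2; do
  kubectl exec -n "$NS" "$p" -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
done
for p in ss-0 ss-1 ss-2; do
  kubectl exec -n "$NS" "$p" -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
done
kubectl scale statefulset ss -n "$NS" --replicas=0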
May 1 16:36:12.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:36:12.253: INFO: namespace: e2e-tests-statefulset-rn7l2, resource: bindings, ignored listing per whitelist May 1 16:36:12.322: INFO: namespace e2e-tests-statefulset-rn7l2 deletion completed in 6.297542163s • [SLOW TEST:70.070 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:36:12.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 16:36:41.178: INFO: Container started at 2020-05-01 16:36:17 +0000 UTC, pod became ready at 2020-05-01 16:36:39 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:36:41.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-5m2sf" for this suite. 
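The readiness-probe check above (container started at 16:36:17, ready only at 16:36:39, no restarts) rests on the fact that a readiness probe gates the Ready condition but never restarts the container. A minimal hand-runnable sketch, with illustrative names and timings:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "touch /tmp/healthy && sleep 3600"]
    readinessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 20
      periodSeconds: 5
EOF
# READY flips to 1/1 only after roughly initialDelaySeconds; RESTARTS stays 0.
kubectl get pod readiness-demo -w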
May 1 16:37:05.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:37:05.276: INFO: namespace: e2e-tests-container-probe-5m2sf, resource: bindings, ignored listing per whitelist May 1 16:37:05.287: INFO: namespace e2e-tests-container-probe-5m2sf deletion completed in 24.104358331s • [SLOW TEST:52.965 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:37:05.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 1 16:37:05.484: INFO: Waiting up to 5m0s for pod "downward-api-fe060b18-8bc9-11ea-acf7-0242ac110017" in namespace "e2e-tests-downward-api-zzk6s" to be "success or failure" May 1 16:37:05.490: INFO: Pod "downward-api-fe060b18-8bc9-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 5.961348ms May 1 16:37:07.532: INFO: Pod "downward-api-fe060b18-8bc9-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048189484s May 1 16:37:09.665: INFO: Pod "downward-api-fe060b18-8bc9-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.180622791s May 1 16:37:12.114: INFO: Pod "downward-api-fe060b18-8bc9-11ea-acf7-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 6.629474286s May 1 16:37:14.118: INFO: Pod "downward-api-fe060b18-8bc9-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.633866834s STEP: Saw pod success May 1 16:37:14.118: INFO: Pod "downward-api-fe060b18-8bc9-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:37:14.121: INFO: Trying to get logs from node hunter-worker pod downward-api-fe060b18-8bc9-11ea-acf7-0242ac110017 container dapi-container: STEP: delete the pod May 1 16:37:14.188: INFO: Waiting for pod downward-api-fe060b18-8bc9-11ea-acf7-0242ac110017 to disappear May 1 16:37:14.248: INFO: Pod downward-api-fe060b18-8bc9-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:37:14.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-zzk6s" for this suite. 
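The downward-api pod above reads its own UID through the downward API. A sketch of the same pattern (pod, container, and variable names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-uid-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["/bin/sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid   # the pod's own UID, injected by the kubelet
EOF
kubectl logs downward-uid-demo   # prints POD_UID=<uid> once the container has run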
May 1 16:37:20.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:37:20.346: INFO: namespace: e2e-tests-downward-api-zzk6s, resource: bindings, ignored listing per whitelist May 1 16:37:20.362: INFO: namespace e2e-tests-downward-api-zzk6s deletion completed in 6.110126314s • [SLOW TEST:15.074 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:37:20.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all May 1 16:37:20.790: INFO: Waiting up to 5m0s for pod "client-containers-06fa82c1-8bca-11ea-acf7-0242ac110017" in namespace "e2e-tests-containers-jxfcr" to be "success or failure" May 1 16:37:20.866: INFO: Pod "client-containers-06fa82c1-8bca-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 76.373636ms May 1 16:37:22.897: INFO: Pod "client-containers-06fa82c1-8bca-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107582257s May 1 16:37:24.900: INFO: Pod "client-containers-06fa82c1-8bca-11ea-acf7-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.110390176s May 1 16:37:26.903: INFO: Pod "client-containers-06fa82c1-8bca-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.113619668s STEP: Saw pod success May 1 16:37:26.903: INFO: Pod "client-containers-06fa82c1-8bca-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:37:26.906: INFO: Trying to get logs from node hunter-worker2 pod client-containers-06fa82c1-8bca-11ea-acf7-0242ac110017 container test-container: STEP: delete the pod May 1 16:37:26.952: INFO: Waiting for pod client-containers-06fa82c1-8bca-11ea-acf7-0242ac110017 to disappear May 1 16:37:26.967: INFO: Pod client-containers-06fa82c1-8bca-11ea-acf7-0242ac110017 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:37:26.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-jxfcr" for this suite. 
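The "override all" pod above replaces both the image entrypoint and its default arguments. The same effect, sketched with an illustrative image and strings:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: override-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/echo"]            # overrides the image ENTRYPOINT
    args: ["overridden", "arguments"] # overrides the image CMD
EOF
kubectl logs override-demo   # expect: overridden arguments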
May 1 16:37:33.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:37:33.038: INFO: namespace: e2e-tests-containers-jxfcr, resource: bindings, ignored listing per whitelist May 1 16:37:33.088: INFO: namespace e2e-tests-containers-jxfcr deletion completed in 6.116401988s • [SLOW TEST:12.726 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:37:33.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition May 1 16:37:33.223: INFO: Waiting up to 5m0s for pod "var-expansion-0e8e86d0-8bca-11ea-acf7-0242ac110017" in namespace "e2e-tests-var-expansion-hn5nm" to be "success or failure" May 1 16:37:33.240: INFO: Pod "var-expansion-0e8e86d0-8bca-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 16.385576ms May 1 16:37:35.244: INFO: Pod "var-expansion-0e8e86d0-8bca-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020448587s May 1 16:37:37.248: INFO: Pod "var-expansion-0e8e86d0-8bca-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024526824s May 1 16:37:39.252: INFO: Pod "var-expansion-0e8e86d0-8bca-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028623928s STEP: Saw pod success May 1 16:37:39.252: INFO: Pod "var-expansion-0e8e86d0-8bca-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:37:39.254: INFO: Trying to get logs from node hunter-worker pod var-expansion-0e8e86d0-8bca-11ea-acf7-0242ac110017 container dapi-container: STEP: delete the pod May 1 16:37:39.300: INFO: Waiting for pod var-expansion-0e8e86d0-8bca-11ea-acf7-0242ac110017 to disappear May 1 16:37:39.324: INFO: Pod var-expansion-0e8e86d0-8bca-11ea-acf7-0242ac110017 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:37:39.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-hn5nm" for this suite. 
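The env-composition pod above relies on Kubernetes' own $(VAR) expansion, which is resolved when the container is set up rather than by the shell. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["/bin/sh", "-c", "env | grep COMPOSED"]
    env:
    - name: FIRST
      value: "foo"
    - name: COMPOSED
      value: "prefix-$(FIRST)-suffix"   # expanded by Kubernetes, not the shell
EOF
kubectl logs var-expansion-demo   # expect COMPOSED=prefix-foo-suffix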
May 1 16:37:45.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:37:45.459: INFO: namespace: e2e-tests-var-expansion-hn5nm, resource: bindings, ignored listing per whitelist May 1 16:37:45.472: INFO: namespace e2e-tests-var-expansion-hn5nm deletion completed in 6.145172373s • [SLOW TEST:12.384 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:37:45.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy May 1 16:37:46.268: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix880814381/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:37:46.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-dz7jz" for this suite. 
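The proxy test above starts kubectl proxy on a unix socket instead of a TCP port and fetches /api/ through it. By hand (the socket path is illustrative):

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
PROXY_PID=$!
sleep 1
# The host part of the URL is ignored when talking over a unix socket:
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
kill "$PROXY_PID"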
May 1 16:37:52.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:37:52.960: INFO: namespace: e2e-tests-kubectl-dz7jz, resource: bindings, ignored listing per whitelist May 1 16:37:52.988: INFO: namespace e2e-tests-kubectl-dz7jz deletion completed in 6.331413488s • [SLOW TEST:7.515 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:37:52.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 16:37:53.070: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 5.484547ms) May 1 16:37:53.072: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.525224ms) May 1 16:37:53.074: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.313613ms) May 1 16:37:53.078: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.105151ms) May 1 16:37:53.080: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.704689ms) May 1 16:37:53.083: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.089384ms) May 1 16:37:53.112: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 28.264453ms) May 1 16:37:53.116: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 4.051208ms) May 1 16:37:53.119: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.440573ms) May 1 16:37:53.123: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.998512ms) May 1 16:37:53.127: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.425127ms) May 1 16:37:53.130: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.538068ms) May 1 16:37:53.134: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.803171ms) May 1 16:37:53.138: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.438034ms) May 1 16:37:53.141: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.357802ms) May 1 16:37:53.144: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.375276ms) May 1 16:37:53.148: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.672408ms) May 1 16:37:53.152: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.607326ms) May 1 16:37:53.156: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.814555ms) May 1 16:37:53.159: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.645978ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:37:53.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-m96bm" for this suite. May 1 16:37:59.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:37:59.285: INFO: namespace: e2e-tests-proxy-m96bm, resource: bindings, ignored listing per whitelist May 1 16:37:59.300: INFO: namespace e2e-tests-proxy-m96bm deletion completed in 6.135977542s • [SLOW TEST:6.312 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:37:59.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted May 1 16:38:12.031: INFO: 5 pods remaining May 1 16:38:12.031: INFO: 5 pods has nil DeletionTimestamp May 1 16:38:12.031: INFO: STEP: Gathering metrics W0501 16:38:16.077294 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
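The garbage-collector behaviour being verified here turns on ownerReferences: a dependent is only collected once all of its owners are gone, so pods that also name simpletest-rc-to-stay as an owner must survive the deletion of simpletest-rc-to-be-deleted. A quick way to look at this by hand (the pod name is a placeholder, and the --cascade=foreground spelling assumes a recent kubectl):

# Inspect which controllers own a given pod:
kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences}'
# Request foreground cascading deletion of one owner; dependents that still have
# another live owner are left in place:
kubectl delete rc simpletest-rc-to-be-deleted --cascade=foreground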
May 1 16:38:16.077: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:38:16.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-4f7zf" for this suite. May 1 16:38:28.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:38:28.177: INFO: namespace: e2e-tests-gc-4f7zf, resource: bindings, ignored listing per whitelist May 1 16:38:28.197: INFO: namespace e2e-tests-gc-4f7zf deletion completed in 12.116474535s • [SLOW TEST:28.897 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:38:28.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-xzg5z/configmap-test-2fc039b5-8bca-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume configMaps May 1 16:38:28.965: INFO: Waiting up to 5m0s for pod "pod-configmaps-2fc4642c-8bca-11ea-acf7-0242ac110017" in namespace "e2e-tests-configmap-xzg5z" to be "success or failure" May 1 16:38:29.339: INFO: Pod "pod-configmaps-2fc4642c-8bca-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 373.95751ms May 1 16:38:31.343: INFO: Pod "pod-configmaps-2fc4642c-8bca-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.377933079s May 1 16:38:33.357: INFO: Pod "pod-configmaps-2fc4642c-8bca-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.391677314s STEP: Saw pod success May 1 16:38:33.357: INFO: Pod "pod-configmaps-2fc4642c-8bca-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:38:33.359: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-2fc4642c-8bca-11ea-acf7-0242ac110017 container env-test: STEP: delete the pod May 1 16:38:33.617: INFO: Waiting for pod pod-configmaps-2fc4642c-8bca-11ea-acf7-0242ac110017 to disappear May 1 16:38:33.927: INFO: Pod pod-configmaps-2fc4642c-8bca-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:38:33.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-xzg5z" for this suite. May 1 16:38:41.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:38:41.991: INFO: namespace: e2e-tests-configmap-xzg5z, resource: bindings, ignored listing per whitelist May 1 16:38:42.412: INFO: namespace e2e-tests-configmap-xzg5z deletion completed in 8.481166514s • [SLOW TEST:14.215 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:38:42.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 16:38:43.038: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 May 1 16:38:43.043: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-gdd6s/daemonsets","resourceVersion":"8207495"},"items":null} May 1 16:38:43.046: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-gdd6s/pods","resourceVersion":"8207495"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:38:43.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-gdd6s" for this suite. 
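Looking back at the ConfigMap environment-variable check a little further up (namespace e2e-tests-configmap-xzg5z), the pattern it verifies can be sketched by hand as follows; the ConfigMap name, key, and variable name are illustrative:

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["/bin/sh", "-c", "echo CONFIG_DATA_1=$CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: demo-config
          key: data-1
EOF
kubectl logs configmap-env-demo   # expect CONFIG_DATA_1=value-1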
May 1 16:38:49.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:38:49.423: INFO: namespace: e2e-tests-daemonsets-gdd6s, resource: bindings, ignored listing per whitelist May 1 16:38:49.452: INFO: namespace e2e-tests-daemonsets-gdd6s deletion completed in 6.395480221s S [SKIPPING] [7.040 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 16:38:43.038: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:38:49.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-3c72de22-8bca-11ea-acf7-0242ac110017 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-3c72de22-8bca-11ea-acf7-0242ac110017 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:38:58.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-vzx8q" for this suite. 
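The ConfigMap-volume test above updates a mounted ConfigMap in place and waits for the new value to appear inside the running container. A hand-runnable sketch (names and keys are illustrative; propagation is eventually consistent, so the change can take up to a kubelet sync period plus cache TTL to show up):

kubectl create configmap volume-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["/bin/sh", "-c", "while true; do cat /etc/config/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: volume-demo
EOF
# Update the ConfigMap and watch the mounted file change:
kubectl patch configmap volume-demo -p '{"data":{"data-1":"value-2"}}'
kubectl logs -f configmap-volume-demo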
May 1 16:39:20.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:39:20.431: INFO: namespace: e2e-tests-configmap-vzx8q, resource: bindings, ignored listing per whitelist May 1 16:39:20.478: INFO: namespace e2e-tests-configmap-vzx8q deletion completed in 22.108344942s • [SLOW TEST:31.026 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:39:20.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace May 1 16:39:26.870: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:39:52.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-nlwv5" for this suite. May 1 16:39:59.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:39:59.085: INFO: namespace: e2e-tests-namespaces-nlwv5, resource: bindings, ignored listing per whitelist May 1 16:39:59.283: INFO: namespace e2e-tests-namespaces-nlwv5 deletion completed in 6.290631646s STEP: Destroying namespace "e2e-tests-nsdeletetest-r5bp4" for this suite. May 1 16:39:59.285: INFO: Namespace e2e-tests-nsdeletetest-r5bp4 was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-8xhf9" for this suite. 
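The namespaces test above checks the basic guarantee that deleting a namespace removes every pod in it. By hand (all names illustrative):

kubectl create namespace nsdelete-demo
kubectl run nsdelete-pod --image=busybox --namespace=nsdelete-demo --restart=Never -- sleep 3600
kubectl delete namespace nsdelete-demo
# Once deletion finishes, neither the namespace nor its pods exist any more:
kubectl get pods --namespace=nsdelete-demo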
May 1 16:40:05.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:40:05.344: INFO: namespace: e2e-tests-nsdeletetest-8xhf9, resource: bindings, ignored listing per whitelist May 1 16:40:05.377: INFO: namespace e2e-tests-nsdeletetest-8xhf9 deletion completed in 6.091625662s • [SLOW TEST:44.899 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:40:05.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 1 16:40:06.335: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:40:06.337: INFO: Number of nodes with available pods: 0 May 1 16:40:06.337: INFO: Node hunter-worker is running more than one daemon pod May 1 16:40:07.341: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:40:07.344: INFO: Number of nodes with available pods: 0 May 1 16:40:07.344: INFO: Node hunter-worker is running more than one daemon pod May 1 16:40:08.342: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:40:08.345: INFO: Number of nodes with available pods: 0 May 1 16:40:08.345: INFO: Node hunter-worker is running more than one daemon pod May 1 16:40:09.461: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:40:09.464: INFO: Number of nodes with available pods: 0 May 1 16:40:09.464: INFO: Node hunter-worker is running more than one daemon pod May 1 16:40:10.414: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:40:10.417: INFO: Number of nodes with available pods: 1 May 1 16:40:10.417: INFO: Node hunter-worker2 is running more than one daemon pod May 1 16:40:11.341: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:40:11.343: INFO: Number of nodes with available pods: 1 May 1 16:40:11.343: INFO: Node hunter-worker2 is running more than one daemon pod May 1 16:40:12.342: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:40:12.344: INFO: Number of nodes with available pods: 2 May 1 16:40:12.344: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 1 16:40:12.413: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 16:40:12.473: INFO: Number of nodes with available pods: 2 May 1 16:40:12.473: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-6m2fg, will wait for the garbage collector to delete the pods May 1 16:40:13.780: INFO: Deleting DaemonSet.extensions daemon-set took: 18.268408ms May 1 16:40:13.980: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.25038ms May 1 16:40:21.283: INFO: Number of nodes with available pods: 0 May 1 16:40:21.283: INFO: Number of running nodes: 0, number of available pods: 0 May 1 16:40:21.285: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-6m2fg/daemonsets","resourceVersion":"8207820"},"items":null} May 1 16:40:21.287: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-6m2fg/pods","resourceVersion":"8207820"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:40:21.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-6m2fg" for this suite. 
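
The behaviour checked above is that the DaemonSet controller replaces a daemon pod whose phase becomes Failed. A rough command-line analogue is to delete one daemon pod and watch a replacement get scheduled on the same node; the namespace and label selector below are hypothetical, and deleting a pod is only a stand-in for the forced "Failed" phase the test applies through the API.

# List the daemon pods and the nodes they run on (hypothetical label selector).
kubectl get pods --namespace=demo-ns -l name=daemon-set -o wide

# Remove one of them ...
kubectl delete pod <daemon-pod-name> --namespace=demo-ns

# ... and watch the DaemonSet controller create a replacement for that node.
kubectl get pods --namespace=demo-ns -l name=daemon-set -o wide --watch
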
May 1 16:40:29.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:40:29.347: INFO: namespace: e2e-tests-daemonsets-6m2fg, resource: bindings, ignored listing per whitelist May 1 16:40:29.385: INFO: namespace e2e-tests-daemonsets-6m2fg deletion completed in 8.086512064s • [SLOW TEST:24.008 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:40:29.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 16:40:29.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' May 1 16:40:29.996: INFO: stderr: "" May 1 16:40:29.996: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T14:47:52Z\", GoVersion:\"go1.11.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" May 1 16:40:29.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-92nhk' May 1 16:40:40.650: INFO: stderr: "" May 1 16:40:40.650: INFO: stdout: "replicationcontroller/redis-master created\n" May 1 16:40:40.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-92nhk' May 1 16:40:41.113: INFO: stderr: "" May 1 16:40:41.113: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. May 1 16:40:42.118: INFO: Selector matched 1 pods for map[app:redis] May 1 16:40:42.118: INFO: Found 0 / 1 May 1 16:40:43.314: INFO: Selector matched 1 pods for map[app:redis] May 1 16:40:43.314: INFO: Found 0 / 1 May 1 16:40:44.270: INFO: Selector matched 1 pods for map[app:redis] May 1 16:40:44.270: INFO: Found 0 / 1 May 1 16:40:45.118: INFO: Selector matched 1 pods for map[app:redis] May 1 16:40:45.118: INFO: Found 0 / 1 May 1 16:40:46.144: INFO: Selector matched 1 pods for map[app:redis] May 1 16:40:46.144: INFO: Found 0 / 1 May 1 16:40:47.117: INFO: Selector matched 1 pods for map[app:redis] May 1 16:40:47.117: INFO: Found 1 / 1 May 1 16:40:47.117: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 1 16:40:47.119: INFO: Selector matched 1 pods for map[app:redis] May 1 16:40:47.119: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 1 16:40:47.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-6n54r --namespace=e2e-tests-kubectl-92nhk' May 1 16:40:47.318: INFO: stderr: "" May 1 16:40:47.318: INFO: stdout: "Name: redis-master-6n54r\nNamespace: e2e-tests-kubectl-92nhk\nPriority: 0\nPriorityClassName: \nNode: hunter-worker2/172.17.0.4\nStart Time: Fri, 01 May 2020 16:40:40 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.132\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://10b33610cac34966dfb33f137a69026a5165cc6c7a6f7dafa6931e9601fb3769\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 01 May 2020 16:40:45 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-kcjn4 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-kcjn4:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-kcjn4\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 7s default-scheduler Successfully assigned e2e-tests-kubectl-92nhk/redis-master-6n54r to hunter-worker2\n Normal Pulled 5s kubelet, hunter-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, hunter-worker2 Created container\n Normal Started 1s kubelet, hunter-worker2 Started container\n" May 1 16:40:47.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-92nhk' May 1 16:40:47.438: INFO: stderr: "" May 1 16:40:47.438: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-92nhk\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 7s replication-controller Created pod: redis-master-6n54r\n" May 1 16:40:47.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-92nhk' May 1 16:40:47.547: INFO: stderr: "" May 1 16:40:47.547: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-92nhk\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.107.92.71\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.132:6379\nSession Affinity: None\nEvents: \n" May 1 16:40:47.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' May 1 16:40:47.660: INFO: stderr: "" May 1 16:40:47.660: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n 
beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:22:50 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 01 May 2020 16:40:43 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 01 May 2020 16:40:43 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 01 May 2020 16:40:43 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 01 May 2020 16:40:43 +0000 Sun, 15 Mar 2020 18:23:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3c4716968dac483293a23c2100ad64a5\n System UUID: 683417f7-64ca-431d-b8ac-22e73b26255e\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 46d\n kube-system kindnet-l2xm6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 46d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 46d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 46d\n kube-system kube-proxy-mmppc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 46d\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 46d\n local-path-storage local-path-provisioner-77cfdd744c-q47vg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 46d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 1 16:40:47.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-92nhk' May 1 16:40:47.921: INFO: stderr: "" May 1 16:40:47.921: INFO: stdout: "Name: e2e-tests-kubectl-92nhk\nLabels: e2e-framework=kubectl\n e2e-run=9296d13e-8bbb-11ea-acf7-0242ac110017\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:40:47.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-92nhk" for this suite. 
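
Stripped of the harness, the spec above runs kubectl describe against one object of each kind and checks that the relevant fields (containers, conditions, events, selectors, node capacity, and so on) are printed. The commands below are the same ones visible in the log, with only the --kubeconfig flag dropped; the generated namespace and pod names change from run to run.

kubectl describe pod redis-master-6n54r --namespace=e2e-tests-kubectl-92nhk
kubectl describe rc redis-master --namespace=e2e-tests-kubectl-92nhk
kubectl describe service redis-master --namespace=e2e-tests-kubectl-92nhk
kubectl describe node hunter-control-plane
kubectl describe namespace e2e-tests-kubectl-92nhk
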
May 1 16:41:12.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:41:12.191: INFO: namespace: e2e-tests-kubectl-92nhk, resource: bindings, ignored listing per whitelist May 1 16:41:12.219: INFO: namespace e2e-tests-kubectl-92nhk deletion completed in 24.293821454s • [SLOW TEST:42.833 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:41:12.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc May 1 16:41:12.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-f29jw' May 1 16:41:13.030: INFO: stderr: "" May 1 16:41:13.030: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. May 1 16:41:14.035: INFO: Selector matched 1 pods for map[app:redis] May 1 16:41:14.035: INFO: Found 0 / 1 May 1 16:41:15.274: INFO: Selector matched 1 pods for map[app:redis] May 1 16:41:15.274: INFO: Found 0 / 1 May 1 16:41:16.034: INFO: Selector matched 1 pods for map[app:redis] May 1 16:41:16.035: INFO: Found 0 / 1 May 1 16:41:17.606: INFO: Selector matched 1 pods for map[app:redis] May 1 16:41:17.606: INFO: Found 0 / 1 May 1 16:41:18.426: INFO: Selector matched 1 pods for map[app:redis] May 1 16:41:18.426: INFO: Found 0 / 1 May 1 16:41:19.163: INFO: Selector matched 1 pods for map[app:redis] May 1 16:41:19.163: INFO: Found 0 / 1 May 1 16:41:20.036: INFO: Selector matched 1 pods for map[app:redis] May 1 16:41:20.036: INFO: Found 0 / 1 May 1 16:41:21.035: INFO: Selector matched 1 pods for map[app:redis] May 1 16:41:21.035: INFO: Found 1 / 1 May 1 16:41:21.035: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 1 16:41:21.039: INFO: Selector matched 1 pods for map[app:redis] May 1 16:41:21.039: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
STEP: checking for a matching strings May 1 16:41:21.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-fkdg8 redis-master --namespace=e2e-tests-kubectl-f29jw' May 1 16:41:21.151: INFO: stderr: "" May 1 16:41:21.151: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 01 May 16:41:19.644 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 May 16:41:19.644 # Server started, Redis version 3.2.12\n1:M 01 May 16:41:19.644 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 01 May 16:41:19.644 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines May 1 16:41:21.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-fkdg8 redis-master --namespace=e2e-tests-kubectl-f29jw --tail=1' May 1 16:41:21.260: INFO: stderr: "" May 1 16:41:21.260: INFO: stdout: "1:M 01 May 16:41:19.644 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes May 1 16:41:21.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-fkdg8 redis-master --namespace=e2e-tests-kubectl-f29jw --limit-bytes=1' May 1 16:41:21.362: INFO: stderr: "" May 1 16:41:21.362: INFO: stdout: " " STEP: exposing timestamps May 1 16:41:21.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-fkdg8 redis-master --namespace=e2e-tests-kubectl-f29jw --tail=1 --timestamps' May 1 16:41:21.466: INFO: stderr: "" May 1 16:41:21.466: INFO: stdout: "2020-05-01T16:41:19.644836539Z 1:M 01 May 16:41:19.644 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range May 1 16:41:23.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-fkdg8 redis-master --namespace=e2e-tests-kubectl-f29jw --since=1s' May 1 16:41:24.408: INFO: stderr: "" May 1 16:41:24.408: INFO: stdout: "" May 1 16:41:24.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-fkdg8 redis-master --namespace=e2e-tests-kubectl-f29jw --since=24h' May 1 16:41:25.483: INFO: stderr: "" May 1 16:41:25.483: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 01 May 16:41:19.644 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 May 16:41:19.644 # Server started, Redis version 3.2.12\n1:M 01 May 16:41:19.644 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 01 May 16:41:19.644 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources May 1 16:41:25.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-f29jw' May 1 16:41:25.612: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 1 16:41:25.612: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" May 1 16:41:25.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-f29jw' May 1 16:41:26.289: INFO: stderr: "No resources found.\n" May 1 16:41:26.289: INFO: stdout: "" May 1 16:41:26.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-f29jw -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 1 16:41:26.380: INFO: stderr: "" May 1 16:41:26.380: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:41:26.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-f29jw" for this suite. 
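
The filtering options exercised above are ordinary kubectl flags and can be replayed directly against the pod and container names shown in the log (the run itself uses the older "kubectl log" alias; "kubectl logs" is the usual spelling).

# Full container log
kubectl logs redis-master-fkdg8 redis-master --namespace=e2e-tests-kubectl-f29jw
# Only the last line
kubectl logs redis-master-fkdg8 redis-master --namespace=e2e-tests-kubectl-f29jw --tail=1
# Only the first byte
kubectl logs redis-master-fkdg8 redis-master --namespace=e2e-tests-kubectl-f29jw --limit-bytes=1
# Prefix each line with its timestamp
kubectl logs redis-master-fkdg8 redis-master --namespace=e2e-tests-kubectl-f29jw --tail=1 --timestamps
# Restrict to a time window
kubectl logs redis-master-fkdg8 redis-master --namespace=e2e-tests-kubectl-f29jw --since=1s
kubectl logs redis-master-fkdg8 redis-master --namespace=e2e-tests-kubectl-f29jw --since=24h
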
May 1 16:41:36.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:41:36.778: INFO: namespace: e2e-tests-kubectl-f29jw, resource: bindings, ignored listing per whitelist May 1 16:41:36.794: INFO: namespace e2e-tests-kubectl-f29jw deletion completed in 10.409959702s • [SLOW TEST:24.575 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:41:36.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 1 16:41:46.309: INFO: Successfully updated pod "labelsupdatea038d32e-8bca-11ea-acf7-0242ac110017" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:41:48.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-6hdrr" for this suite. 
May 1 16:42:14.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:42:14.415: INFO: namespace: e2e-tests-projected-6hdrr, resource: bindings, ignored listing per whitelist May 1 16:42:14.443: INFO: namespace e2e-tests-projected-6hdrr deletion completed in 26.085120146s • [SLOW TEST:37.649 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:42:14.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-b6b63009-8bca-11ea-acf7-0242ac110017 May 1 16:42:15.519: INFO: Pod name my-hostname-basic-b6b63009-8bca-11ea-acf7-0242ac110017: Found 0 pods out of 1 May 1 16:42:20.524: INFO: Pod name my-hostname-basic-b6b63009-8bca-11ea-acf7-0242ac110017: Found 1 pods out of 1 May 1 16:42:20.524: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-b6b63009-8bca-11ea-acf7-0242ac110017" are running May 1 16:42:22.532: INFO: Pod "my-hostname-basic-b6b63009-8bca-11ea-acf7-0242ac110017-bpn62" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 16:42:16 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 16:42:16 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-b6b63009-8bca-11ea-acf7-0242ac110017]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 16:42:16 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-b6b63009-8bca-11ea-acf7-0242ac110017]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 16:42:15 +0000 UTC Reason: Message:}]) May 1 16:42:22.532: INFO: Trying to dial the pod May 1 16:42:27.542: INFO: Controller my-hostname-basic-b6b63009-8bca-11ea-acf7-0242ac110017: Got expected result from replica 1 [my-hostname-basic-b6b63009-8bca-11ea-acf7-0242ac110017-bpn62]: "my-hostname-basic-b6b63009-8bca-11ea-acf7-0242ac110017-bpn62", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:42:27.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-vl4fz" for this suite. 
May 1 16:42:35.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:42:35.696: INFO: namespace: e2e-tests-replication-controller-vl4fz, resource: bindings, ignored listing per whitelist May 1 16:42:35.698: INFO: namespace e2e-tests-replication-controller-vl4fz deletion completed in 8.152273424s • [SLOW TEST:21.255 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:42:35.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 1 16:42:36.177: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:42:43.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-vxxcx" for this suite. 
May 1 16:42:55.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:42:55.531: INFO: namespace: e2e-tests-init-container-vxxcx, resource: bindings, ignored listing per whitelist May 1 16:42:55.575: INFO: namespace e2e-tests-init-container-vxxcx deletion completed in 12.180648605s • [SLOW TEST:19.877 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:42:55.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-cf1e87d1-8bca-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume secrets May 1 16:42:56.486: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cf3de4b5-8bca-11ea-acf7-0242ac110017" in namespace "e2e-tests-projected-cx89n" to be "success or failure" May 1 16:42:56.807: INFO: Pod "pod-projected-secrets-cf3de4b5-8bca-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 321.000834ms May 1 16:42:58.811: INFO: Pod "pod-projected-secrets-cf3de4b5-8bca-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.32513906s May 1 16:43:00.816: INFO: Pod "pod-projected-secrets-cf3de4b5-8bca-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330039993s May 1 16:43:02.870: INFO: Pod "pod-projected-secrets-cf3de4b5-8bca-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 6.384376512s May 1 16:43:04.888: INFO: Pod "pod-projected-secrets-cf3de4b5-8bca-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.402458639s STEP: Saw pod success May 1 16:43:04.888: INFO: Pod "pod-projected-secrets-cf3de4b5-8bca-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:43:04.912: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-cf3de4b5-8bca-11ea-acf7-0242ac110017 container secret-volume-test: STEP: delete the pod May 1 16:43:05.000: INFO: Waiting for pod pod-projected-secrets-cf3de4b5-8bca-11ea-acf7-0242ac110017 to disappear May 1 16:43:05.074: INFO: Pod pod-projected-secrets-cf3de4b5-8bca-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:43:05.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-cx89n" for this suite. 
May 1 16:43:11.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:43:11.314: INFO: namespace: e2e-tests-projected-cx89n, resource: bindings, ignored listing per whitelist May 1 16:43:11.323: INFO: namespace e2e-tests-projected-cx89n deletion completed in 6.247088848s • [SLOW TEST:15.748 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:43:11.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0501 16:43:41.962154 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 1 16:43:41.962: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:43:41.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-4pbp9" for this suite. 
May 1 16:43:52.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:43:52.010: INFO: namespace: e2e-tests-gc-4pbp9, resource: bindings, ignored listing per whitelist May 1 16:43:52.231: INFO: namespace e2e-tests-gc-4pbp9 deletion completed in 10.266027734s • [SLOW TEST:40.907 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:43:52.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 1 16:43:52.545: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f09a887f-8bca-11ea-acf7-0242ac110017" in namespace "e2e-tests-downward-api-4rckm" to be "success or failure" May 1 16:43:52.605: INFO: Pod "downwardapi-volume-f09a887f-8bca-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 59.874576ms May 1 16:43:54.727: INFO: Pod "downwardapi-volume-f09a887f-8bca-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182043883s May 1 16:43:56.731: INFO: Pod "downwardapi-volume-f09a887f-8bca-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.18591079s May 1 16:43:58.781: INFO: Pod "downwardapi-volume-f09a887f-8bca-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.236078942s STEP: Saw pod success May 1 16:43:58.781: INFO: Pod "downwardapi-volume-f09a887f-8bca-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:43:58.784: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-f09a887f-8bca-11ea-acf7-0242ac110017 container client-container: STEP: delete the pod May 1 16:43:58.854: INFO: Waiting for pod downwardapi-volume-f09a887f-8bca-11ea-acf7-0242ac110017 to disappear May 1 16:43:58.978: INFO: Pod downwardapi-volume-f09a887f-8bca-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:43:58.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-4rckm" for this suite. 
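
The downward API volume checked above writes the container's memory limit into a file inside the pod. A minimal stand-alone sketch of that wiring is shown below; the pod name, namespace and mount path are hypothetical rather than the generated ones from this run.

cat <<'EOF' | kubectl create -f - --namespace=demo-ns
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "mem_limit"
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF

# Once the pod has completed, its log contains the limit; with the default
# divisor of 1 the value is rendered in bytes (67108864 for 64Mi).
kubectl logs downwardapi-demo --namespace=demo-ns
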
May 1 16:44:05.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:44:05.278: INFO: namespace: e2e-tests-downward-api-4rckm, resource: bindings, ignored listing per whitelist May 1 16:44:05.323: INFO: namespace e2e-tests-downward-api-4rckm deletion completed in 6.315009633s • [SLOW TEST:13.091 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:44:05.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium May 1 16:44:05.515: INFO: Waiting up to 5m0s for pod "pod-f8573d12-8bca-11ea-acf7-0242ac110017" in namespace "e2e-tests-emptydir-ln968" to be "success or failure" May 1 16:44:05.565: INFO: Pod "pod-f8573d12-8bca-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 49.947808ms May 1 16:44:07.568: INFO: Pod "pod-f8573d12-8bca-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053103941s May 1 16:44:09.572: INFO: Pod "pod-f8573d12-8bca-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057191422s May 1 16:44:12.560: INFO: Pod "pod-f8573d12-8bca-11ea-acf7-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 7.045690409s May 1 16:44:14.563: INFO: Pod "pod-f8573d12-8bca-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.048833727s STEP: Saw pod success May 1 16:44:14.564: INFO: Pod "pod-f8573d12-8bca-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:44:14.566: INFO: Trying to get logs from node hunter-worker2 pod pod-f8573d12-8bca-11ea-acf7-0242ac110017 container test-container: STEP: delete the pod May 1 16:44:14.662: INFO: Waiting for pod pod-f8573d12-8bca-11ea-acf7-0242ac110017 to disappear May 1 16:44:14.674: INFO: Pod pod-f8573d12-8bca-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:44:14.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-ln968" for this suite. 
May 1 16:44:20.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:44:20.766: INFO: namespace: e2e-tests-emptydir-ln968, resource: bindings, ignored listing per whitelist May 1 16:44:20.906: INFO: namespace e2e-tests-emptydir-ln968 deletion completed in 6.177725006s • [SLOW TEST:15.583 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:44:20.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 1 16:44:25.732: INFO: Successfully updated pod "annotationupdate01b4a4c3-8bcb-11ea-acf7-0242ac110017" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:44:27.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-g8xmq" for this suite. 
May 1 16:44:50.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:44:50.097: INFO: namespace: e2e-tests-downward-api-g8xmq, resource: bindings, ignored listing per whitelist May 1 16:44:50.114: INFO: namespace e2e-tests-downward-api-g8xmq deletion completed in 22.236357566s • [SLOW TEST:29.209 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:44:50.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 1 16:44:50.255: INFO: Waiting up to 5m0s for pod "downwardapi-volume-13045538-8bcb-11ea-acf7-0242ac110017" in namespace "e2e-tests-downward-api-58tzm" to be "success or failure" May 1 16:44:50.258: INFO: Pod "downwardapi-volume-13045538-8bcb-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.949747ms May 1 16:44:52.262: INFO: Pod "downwardapi-volume-13045538-8bcb-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007226827s May 1 16:44:54.266: INFO: Pod "downwardapi-volume-13045538-8bcb-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011448029s May 1 16:44:56.271: INFO: Pod "downwardapi-volume-13045538-8bcb-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015858992s STEP: Saw pod success May 1 16:44:56.271: INFO: Pod "downwardapi-volume-13045538-8bcb-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:44:56.274: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-13045538-8bcb-11ea-acf7-0242ac110017 container client-container: STEP: delete the pod May 1 16:44:56.315: INFO: Waiting for pod downwardapi-volume-13045538-8bcb-11ea-acf7-0242ac110017 to disappear May 1 16:44:56.329: INFO: Pod downwardapi-volume-13045538-8bcb-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:44:56.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-58tzm" for this suite. 
May 1 16:45:02.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:45:02.387: INFO: namespace: e2e-tests-downward-api-58tzm, resource: bindings, ignored listing per whitelist May 1 16:45:02.429: INFO: namespace e2e-tests-downward-api-58tzm deletion completed in 6.096776117s • [SLOW TEST:12.314 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:45:02.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-nxw7r May 1 16:45:06.592: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-nxw7r STEP: checking the pod's current state and verifying that restartCount is present May 1 16:45:06.595: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:49:07.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-nxw7r" for this suite. 
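
The spec above creates a pod whose HTTP liveness probe keeps succeeding and then verifies that restartCount stays at 0 over several minutes. The probe shape is the interesting part; a stand-in sketch is below. The image and path are placeholders (the run probes /healthz on its own test image, which is not shown in the log), so nginx is probed on / instead.

cat <<'EOF' | kubectl create -f - --namespace=demo-ns
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: docker.io/library/nginx:1.14-alpine
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 5
EOF

# As long as the probe keeps succeeding, RESTARTS should remain 0.
kubectl get pod liveness-http-demo --namespace=demo-ns
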
May 1 16:49:16.450: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:49:16.838: INFO: namespace: e2e-tests-container-probe-nxw7r, resource: bindings, ignored listing per whitelist May 1 16:49:17.300: INFO: namespace e2e-tests-container-probe-nxw7r deletion completed in 9.68678842s • [SLOW TEST:254.871 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 16:49:17.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-b286702d-8bcb-11ea-acf7-0242ac110017 STEP: Creating a pod to test consume secrets May 1 16:49:18.426: INFO: Waiting up to 5m0s for pod "pod-secrets-b2dfdae3-8bcb-11ea-acf7-0242ac110017" in namespace "e2e-tests-secrets-54rbb" to be "success or failure" May 1 16:49:18.492: INFO: Pod "pod-secrets-b2dfdae3-8bcb-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 65.624017ms May 1 16:49:20.496: INFO: Pod "pod-secrets-b2dfdae3-8bcb-11ea-acf7-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070036127s May 1 16:49:22.931: INFO: Pod "pod-secrets-b2dfdae3-8bcb-11ea-acf7-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.505228649s STEP: Saw pod success May 1 16:49:22.931: INFO: Pod "pod-secrets-b2dfdae3-8bcb-11ea-acf7-0242ac110017" satisfied condition "success or failure" May 1 16:49:23.079: INFO: Trying to get logs from node hunter-worker pod pod-secrets-b2dfdae3-8bcb-11ea-acf7-0242ac110017 container secret-volume-test: STEP: delete the pod May 1 16:49:23.485: INFO: Waiting for pod pod-secrets-b2dfdae3-8bcb-11ea-acf7-0242ac110017 to disappear May 1 16:49:23.618: INFO: Pod pod-secrets-b2dfdae3-8bcb-11ea-acf7-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 16:49:23.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-54rbb" for this suite. 
May 1 16:49:33.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:49:33.721: INFO: namespace: e2e-tests-secrets-54rbb, resource: bindings, ignored listing per whitelist
May 1 16:49:33.754: INFO: namespace e2e-tests-secrets-54rbb deletion completed in 10.133210119s
STEP: Destroying namespace "e2e-tests-secret-namespace-6qdm4" for this suite.
May 1 16:49:41.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:49:41.818: INFO: namespace: e2e-tests-secret-namespace-6qdm4, resource: bindings, ignored listing per whitelist
May 1 16:49:41.861: INFO: namespace e2e-tests-secret-namespace-6qdm4 deletion completed in 8.107069534s
• [SLOW TEST:24.561 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 16:49:41.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
May 1 16:49:42.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-2ggbz'
May 1 16:49:42.593: INFO: stderr: ""
May 1 16:49:42.593: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
May 1 16:49:42.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-2ggbz'
May 1 16:49:45.818: INFO: stderr: ""
May 1 16:49:45.818: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 16:49:45.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2ggbz" for this suite.
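For reference (not part of the captured output): the kubectl invocation logged above uses the run-pod/v1 generator, which creates a single bare Pod rather than a Deployment or Job. The Go sketch below approximates the object that command produces; only the pod name, namespace, and image come from the log, and the "run" label and everything else are assumptions about the generator's defaults.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// runPodNever approximates what `kubectl run --restart=Never
// --generator=run-pod/v1` creates: one Pod whose restartPolicy is Never,
// so a finished or failed container is never restarted by the kubelet.
func runPodNever(namespace, name, image string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      name,
			Namespace: namespace,
			Labels:    map[string]string{"run": name}, // assumed generator-added label
		},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  name,
				Image: image,
			}},
			RestartPolicy: v1.RestartPolicyNever,
		},
	}
}

func main() {
	pod := runPodNever("e2e-tests-kubectl-2ggbz", "e2e-test-nginx-pod", "docker.io/library/nginx:1.14-alpine")
	fmt.Printf("%s/%s restartPolicy=%s\n", pod.Namespace, pod.Name, pod.Spec.RestartPolicy)
}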
May 1 16:49:54.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:49:54.279: INFO: namespace: e2e-tests-kubectl-2ggbz, resource: bindings, ignored listing per whitelist
May 1 16:49:54.342: INFO: namespace e2e-tests-kubectl-2ggbz deletion completed in 8.172099322s
• [SLOW TEST:12.481 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
May 1 16:49:54.342: INFO: Running AfterSuite actions on all nodes
May 1 16:49:54.343: INFO: Running AfterSuite actions on node 1
May 1 16:49:54.343: INFO: Skipping dumping logs from cluster
Ran 200 of 2164 Specs in 6961.284 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS