I0705 10:46:47.601892 6 e2e.go:224] Starting e2e run "d2f9b78f-beac-11ea-9e48-0242ac110017" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1593946007 - Will randomize all specs
Will run 201 of 2164 specs

Jul 5 10:46:47.782: INFO: >>> kubeConfig: /root/.kube/config
Jul 5 10:46:47.786: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jul 5 10:46:47.803: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jul 5 10:46:47.831: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jul 5 10:46:47.832: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jul 5 10:46:47.832: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jul 5 10:46:47.843: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jul 5 10:46:47.843: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jul 5 10:46:47.843: INFO: e2e test version: v1.13.12
Jul 5 10:46:47.844: INFO: kube-apiserver version: v1.13.12
SSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 10:46:47.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
Jul 5 10:46:47.944: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
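For context, the pod this spec creates generally resembles the following sketch: a downwardAPI volume whose resourceFieldRef points at limits.memory of a container that sets no memory limit, so the kubelet substitutes node allocatable memory as the default. The name, image, and command below are illustrative assumptions, not taken from this run.

```yaml
# Illustrative sketch only; names and image are assumptions, not from this log.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox            # assumed; the e2e suite uses its own test image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # No resources.limits.memory set: with the limit unset, the downward API
    # reports the node's allocatable memory instead.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "memory_limit"
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```

The pod runs to completion, which is why the log below shows Pending, then Running, then Succeeded before the framework deletes it.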
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 5 10:46:47.980: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d373f3be-beac-11ea-9e48-0242ac110017" in namespace "e2e-tests-downward-api-kpd7v" to be "success or failure"
Jul 5 10:46:47.982: INFO: Pod "downwardapi-volume-d373f3be-beac-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.645425ms
Jul 5 10:46:50.248: INFO: Pod "downwardapi-volume-d373f3be-beac-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.268016551s
Jul 5 10:46:52.252: INFO: Pod "downwardapi-volume-d373f3be-beac-11ea-9e48-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.2727724s
Jul 5 10:46:54.256: INFO: Pod "downwardapi-volume-d373f3be-beac-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.276536332s
STEP: Saw pod success
Jul 5 10:46:54.256: INFO: Pod "downwardapi-volume-d373f3be-beac-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul 5 10:46:54.259: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-d373f3be-beac-11ea-9e48-0242ac110017 container client-container: 
STEP: delete the pod
Jul 5 10:46:54.280: INFO: Waiting for pod downwardapi-volume-d373f3be-beac-11ea-9e48-0242ac110017 to disappear
Jul 5 10:46:54.298: INFO: Pod downwardapi-volume-d373f3be-beac-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 10:46:54.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-kpd7v" for this suite.
Jul 5 10:47:00.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 10:47:00.428: INFO: namespace: e2e-tests-downward-api-kpd7v, resource: bindings, ignored listing per whitelist
Jul 5 10:47:00.433: INFO: namespace e2e-tests-downward-api-kpd7v deletion completed in 6.13124703s
• [SLOW TEST:12.589 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] StatefulSet
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 10:47:00.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-9kkhv
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-9kkhv
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-9kkhv
Jul 5 10:47:00.663: INFO: Found 0 stateful pods, waiting for 1
Jul 5 10:47:10.668: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jul 5 10:47:10.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9kkhv ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 5 10:47:10.955: INFO: stderr: "I0705 10:47:10.802155 42 log.go:172] (0xc000138840) (0xc000786640) Create stream\nI0705 10:47:10.802235 42 log.go:172] (0xc000138840) (0xc000786640) Stream added, broadcasting: 1\nI0705 10:47:10.806214 42 log.go:172] (0xc000138840) Reply frame received for 1\nI0705 10:47:10.806262 42 log.go:172] (0xc000138840) (0xc00067cc80) Create stream\nI0705 10:47:10.806278 42
log.go:172] (0xc000138840) (0xc00067cc80) Stream added, broadcasting: 3\nI0705 10:47:10.807327 42 log.go:172] (0xc000138840) Reply frame received for 3\nI0705 10:47:10.807367 42 log.go:172] (0xc000138840) (0xc00067a000) Create stream\nI0705 10:47:10.807384 42 log.go:172] (0xc000138840) (0xc00067a000) Stream added, broadcasting: 5\nI0705 10:47:10.808371 42 log.go:172] (0xc000138840) Reply frame received for 5\nI0705 10:47:10.948720 42 log.go:172] (0xc000138840) Data frame received for 3\nI0705 10:47:10.948755 42 log.go:172] (0xc00067cc80) (3) Data frame handling\nI0705 10:47:10.948845 42 log.go:172] (0xc00067cc80) (3) Data frame sent\nI0705 10:47:10.948973 42 log.go:172] (0xc000138840) Data frame received for 3\nI0705 10:47:10.948986 42 log.go:172] (0xc00067cc80) (3) Data frame handling\nI0705 10:47:10.949025 42 log.go:172] (0xc000138840) Data frame received for 5\nI0705 10:47:10.949051 42 log.go:172] (0xc00067a000) (5) Data frame handling\nI0705 10:47:10.951150 42 log.go:172] (0xc000138840) Data frame received for 1\nI0705 10:47:10.951176 42 log.go:172] (0xc000786640) (1) Data frame handling\nI0705 10:47:10.951192 42 log.go:172] (0xc000786640) (1) Data frame sent\nI0705 10:47:10.951208 42 log.go:172] (0xc000138840) (0xc000786640) Stream removed, broadcasting: 1\nI0705 10:47:10.951257 42 log.go:172] (0xc000138840) Go away received\nI0705 10:47:10.951363 42 log.go:172] (0xc000138840) (0xc000786640) Stream removed, broadcasting: 1\nI0705 10:47:10.951380 42 log.go:172] (0xc000138840) (0xc00067cc80) Stream removed, broadcasting: 3\nI0705 10:47:10.951389 42 log.go:172] (0xc000138840) (0xc00067a000) Stream removed, broadcasting: 5\n" Jul 5 10:47:10.955: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 5 10:47:10.955: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 5 10:47:10.959: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running 
- Ready=true
Jul 5 10:47:20.963: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul 5 10:47:20.963: INFO: Waiting for statefulset status.replicas updated to 0
Jul 5 10:47:20.998: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999314s
Jul 5 10:47:22.003: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.972590697s
Jul 5 10:47:23.008: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.967927458s
Jul 5 10:47:24.013: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.962831648s
Jul 5 10:47:25.018: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.957641101s
Jul 5 10:47:26.023: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.952541926s
Jul 5 10:47:27.027: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.947732511s
Jul 5 10:47:28.033: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.943259697s
Jul 5 10:47:29.037: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.93790638s
Jul 5 10:47:30.042: INFO: Verifying statefulset ss doesn't scale past 1 for another 933.28236ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-9kkhv
Jul 5 10:47:31.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9kkhv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 5 10:47:31.272: INFO: stderr: "I0705 10:47:31.184418 65 log.go:172] (0xc0001386e0) (0xc00063f400) Create stream\nI0705 10:47:31.184494 65 log.go:172] (0xc0001386e0) (0xc00063f400) Stream added, broadcasting: 1\nI0705 10:47:31.187394 65 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0705 10:47:31.187442 65 log.go:172] (0xc0001386e0) (0xc000700000) Create stream\nI0705 10:47:31.187460 65 log.go:172] (0xc0001386e0) (0xc000700000) Stream added, broadcasting: 3\nI0705
10:47:31.188484 65 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0705 10:47:31.188527 65 log.go:172] (0xc0001386e0) (0xc00063f4a0) Create stream\nI0705 10:47:31.188539 65 log.go:172] (0xc0001386e0) (0xc00063f4a0) Stream added, broadcasting: 5\nI0705 10:47:31.190112 65 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0705 10:47:31.267073 65 log.go:172] (0xc0001386e0) Data frame received for 3\nI0705 10:47:31.267111 65 log.go:172] (0xc000700000) (3) Data frame handling\nI0705 10:47:31.267145 65 log.go:172] (0xc000700000) (3) Data frame sent\nI0705 10:47:31.267160 65 log.go:172] (0xc0001386e0) Data frame received for 3\nI0705 10:47:31.267176 65 log.go:172] (0xc000700000) (3) Data frame handling\nI0705 10:47:31.267456 65 log.go:172] (0xc0001386e0) Data frame received for 5\nI0705 10:47:31.267501 65 log.go:172] (0xc00063f4a0) (5) Data frame handling\nI0705 10:47:31.268849 65 log.go:172] (0xc0001386e0) Data frame received for 1\nI0705 10:47:31.268906 65 log.go:172] (0xc00063f400) (1) Data frame handling\nI0705 10:47:31.268933 65 log.go:172] (0xc00063f400) (1) Data frame sent\nI0705 10:47:31.268956 65 log.go:172] (0xc0001386e0) (0xc00063f400) Stream removed, broadcasting: 1\nI0705 10:47:31.268983 65 log.go:172] (0xc0001386e0) Go away received\nI0705 10:47:31.269470 65 log.go:172] (0xc0001386e0) (0xc00063f400) Stream removed, broadcasting: 1\nI0705 10:47:31.269512 65 log.go:172] (0xc0001386e0) (0xc000700000) Stream removed, broadcasting: 3\nI0705 10:47:31.269540 65 log.go:172] (0xc0001386e0) (0xc00063f4a0) Stream removed, broadcasting: 5\n" Jul 5 10:47:31.272: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 5 10:47:31.272: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 5 10:47:31.275: INFO: Found 1 stateful pods, waiting for 3 Jul 5 10:47:41.279: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jul 
5 10:47:41.279: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jul 5 10:47:41.279: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jul 5 10:47:41.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9kkhv ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 5 10:47:41.486: INFO: stderr: "I0705 10:47:41.417355 88 log.go:172] (0xc00080e2c0) (0xc0007145a0) Create stream\nI0705 10:47:41.417424 88 log.go:172] (0xc00080e2c0) (0xc0007145a0) Stream added, broadcasting: 1\nI0705 10:47:41.419440 88 log.go:172] (0xc00080e2c0) Reply frame received for 1\nI0705 10:47:41.419478 88 log.go:172] (0xc00080e2c0) (0xc000732000) Create stream\nI0705 10:47:41.419489 88 log.go:172] (0xc00080e2c0) (0xc000732000) Stream added, broadcasting: 3\nI0705 10:47:41.420370 88 log.go:172] (0xc00080e2c0) Reply frame received for 3\nI0705 10:47:41.420420 88 log.go:172] (0xc00080e2c0) (0xc000732140) Create stream\nI0705 10:47:41.420433 88 log.go:172] (0xc00080e2c0) (0xc000732140) Stream added, broadcasting: 5\nI0705 10:47:41.421333 88 log.go:172] (0xc00080e2c0) Reply frame received for 5\nI0705 10:47:41.480978 88 log.go:172] (0xc00080e2c0) Data frame received for 5\nI0705 10:47:41.481014 88 log.go:172] (0xc000732140) (5) Data frame handling\nI0705 10:47:41.481037 88 log.go:172] (0xc00080e2c0) Data frame received for 3\nI0705 10:47:41.481045 88 log.go:172] (0xc000732000) (3) Data frame handling\nI0705 10:47:41.481053 88 log.go:172] (0xc000732000) (3) Data frame sent\nI0705 10:47:41.481059 88 log.go:172] (0xc00080e2c0) Data frame received for 3\nI0705 10:47:41.481065 88 log.go:172] (0xc000732000) (3) Data frame handling\nI0705 10:47:41.482654 88 log.go:172] (0xc00080e2c0) Data frame received for 1\nI0705 10:47:41.482674 88 
log.go:172] (0xc0007145a0) (1) Data frame handling\nI0705 10:47:41.482684 88 log.go:172] (0xc0007145a0) (1) Data frame sent\nI0705 10:47:41.482696 88 log.go:172] (0xc00080e2c0) (0xc0007145a0) Stream removed, broadcasting: 1\nI0705 10:47:41.482770 88 log.go:172] (0xc00080e2c0) Go away received\nI0705 10:47:41.482854 88 log.go:172] (0xc00080e2c0) (0xc0007145a0) Stream removed, broadcasting: 1\nI0705 10:47:41.482870 88 log.go:172] (0xc00080e2c0) (0xc000732000) Stream removed, broadcasting: 3\nI0705 10:47:41.482879 88 log.go:172] (0xc00080e2c0) (0xc000732140) Stream removed, broadcasting: 5\n" Jul 5 10:47:41.486: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 5 10:47:41.486: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 5 10:47:41.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9kkhv ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 5 10:47:41.725: INFO: stderr: "I0705 10:47:41.614714 111 log.go:172] (0xc00083e2c0) (0xc00076c640) Create stream\nI0705 10:47:41.614781 111 log.go:172] (0xc00083e2c0) (0xc00076c640) Stream added, broadcasting: 1\nI0705 10:47:41.620722 111 log.go:172] (0xc00083e2c0) Reply frame received for 1\nI0705 10:47:41.620768 111 log.go:172] (0xc00083e2c0) (0xc0000dcc80) Create stream\nI0705 10:47:41.620781 111 log.go:172] (0xc00083e2c0) (0xc0000dcc80) Stream added, broadcasting: 3\nI0705 10:47:41.621906 111 log.go:172] (0xc00083e2c0) Reply frame received for 3\nI0705 10:47:41.621975 111 log.go:172] (0xc00083e2c0) (0xc00066e000) Create stream\nI0705 10:47:41.621994 111 log.go:172] (0xc00083e2c0) (0xc00066e000) Stream added, broadcasting: 5\nI0705 10:47:41.622788 111 log.go:172] (0xc00083e2c0) Reply frame received for 5\nI0705 10:47:41.719541 111 log.go:172] (0xc00083e2c0) Data frame received for 3\nI0705 10:47:41.719588 111 log.go:172] 
(0xc0000dcc80) (3) Data frame handling\nI0705 10:47:41.719623 111 log.go:172] (0xc0000dcc80) (3) Data frame sent\nI0705 10:47:41.719967 111 log.go:172] (0xc00083e2c0) Data frame received for 3\nI0705 10:47:41.720009 111 log.go:172] (0xc0000dcc80) (3) Data frame handling\nI0705 10:47:41.720124 111 log.go:172] (0xc00083e2c0) Data frame received for 5\nI0705 10:47:41.720165 111 log.go:172] (0xc00066e000) (5) Data frame handling\nI0705 10:47:41.722257 111 log.go:172] (0xc00083e2c0) Data frame received for 1\nI0705 10:47:41.722299 111 log.go:172] (0xc00076c640) (1) Data frame handling\nI0705 10:47:41.722339 111 log.go:172] (0xc00076c640) (1) Data frame sent\nI0705 10:47:41.722368 111 log.go:172] (0xc00083e2c0) (0xc00076c640) Stream removed, broadcasting: 1\nI0705 10:47:41.722397 111 log.go:172] (0xc00083e2c0) Go away received\nI0705 10:47:41.722604 111 log.go:172] (0xc00083e2c0) (0xc00076c640) Stream removed, broadcasting: 1\nI0705 10:47:41.722623 111 log.go:172] (0xc00083e2c0) (0xc0000dcc80) Stream removed, broadcasting: 3\nI0705 10:47:41.722634 111 log.go:172] (0xc00083e2c0) (0xc00066e000) Stream removed, broadcasting: 5\n" Jul 5 10:47:41.725: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 5 10:47:41.725: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 5 10:47:41.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9kkhv ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 5 10:47:42.188: INFO: stderr: "I0705 10:47:41.855675 133 log.go:172] (0xc000154840) (0xc000748640) Create stream\nI0705 10:47:41.855768 133 log.go:172] (0xc000154840) (0xc000748640) Stream added, broadcasting: 1\nI0705 10:47:41.859654 133 log.go:172] (0xc000154840) Reply frame received for 1\nI0705 10:47:41.859714 133 log.go:172] (0xc000154840) (0xc0005e2b40) Create stream\nI0705 10:47:41.859899 
133 log.go:172] (0xc000154840) (0xc0005e2b40) Stream added, broadcasting: 3\nI0705 10:47:41.861013 133 log.go:172] (0xc000154840) Reply frame received for 3\nI0705 10:47:41.861087 133 log.go:172] (0xc000154840) (0xc0002e4000) Create stream\nI0705 10:47:41.861105 133 log.go:172] (0xc000154840) (0xc0002e4000) Stream added, broadcasting: 5\nI0705 10:47:41.862411 133 log.go:172] (0xc000154840) Reply frame received for 5\nI0705 10:47:42.181011 133 log.go:172] (0xc000154840) Data frame received for 3\nI0705 10:47:42.181061 133 log.go:172] (0xc0005e2b40) (3) Data frame handling\nI0705 10:47:42.181485 133 log.go:172] (0xc000154840) Data frame received for 5\nI0705 10:47:42.181536 133 log.go:172] (0xc0002e4000) (5) Data frame handling\nI0705 10:47:42.181597 133 log.go:172] (0xc0005e2b40) (3) Data frame sent\nI0705 10:47:42.181656 133 log.go:172] (0xc000154840) Data frame received for 3\nI0705 10:47:42.181681 133 log.go:172] (0xc0005e2b40) (3) Data frame handling\nI0705 10:47:42.183623 133 log.go:172] (0xc000154840) Data frame received for 1\nI0705 10:47:42.183653 133 log.go:172] (0xc000748640) (1) Data frame handling\nI0705 10:47:42.183676 133 log.go:172] (0xc000748640) (1) Data frame sent\nI0705 10:47:42.183708 133 log.go:172] (0xc000154840) (0xc000748640) Stream removed, broadcasting: 1\nI0705 10:47:42.183732 133 log.go:172] (0xc000154840) Go away received\nI0705 10:47:42.184059 133 log.go:172] (0xc000154840) (0xc000748640) Stream removed, broadcasting: 1\nI0705 10:47:42.184091 133 log.go:172] (0xc000154840) (0xc0005e2b40) Stream removed, broadcasting: 3\nI0705 10:47:42.184121 133 log.go:172] (0xc000154840) (0xc0002e4000) Stream removed, broadcasting: 5\n" Jul 5 10:47:42.188: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 5 10:47:42.188: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 5 10:47:42.188: INFO: Waiting for statefulset status.replicas updated 
to 0
Jul 5 10:47:42.220: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Jul 5 10:47:52.230: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul 5 10:47:52.230: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jul 5 10:47:52.230: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jul 5 10:47:52.246: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999442s
Jul 5 10:47:53.293: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992985899s
Jul 5 10:47:54.299: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.94552564s
Jul 5 10:47:55.304: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.939993199s
Jul 5 10:47:56.353: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.934505969s
Jul 5 10:47:57.358: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.885227161s
Jul 5 10:47:58.364: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.880367753s
Jul 5 10:47:59.369: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.874854596s
Jul 5 10:48:00.374: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.869574712s
Jul 5 10:48:01.379: INFO: Verifying statefulset ss doesn't scale past 3 for another 864.677251ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-9kkhv
Jul 5 10:48:02.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9kkhv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 5 10:48:02.616: INFO: stderr: "I0705 10:48:02.514223 154 log.go:172] (0xc0006d6370) (0xc0005c94a0) Create stream\nI0705 10:48:02.514294 154 log.go:172] (0xc0006d6370) (0xc0005c94a0) Stream added, broadcasting: 1\nI0705 10:48:02.516960 154 log.go:172]
(0xc0006d6370) Reply frame received for 1\nI0705 10:48:02.517007 154 log.go:172] (0xc0006d6370) (0xc00073e000) Create stream\nI0705 10:48:02.517019 154 log.go:172] (0xc0006d6370) (0xc00073e000) Stream added, broadcasting: 3\nI0705 10:48:02.518244 154 log.go:172] (0xc0006d6370) Reply frame received for 3\nI0705 10:48:02.518289 154 log.go:172] (0xc0006d6370) (0xc0005c9540) Create stream\nI0705 10:48:02.518301 154 log.go:172] (0xc0006d6370) (0xc0005c9540) Stream added, broadcasting: 5\nI0705 10:48:02.519354 154 log.go:172] (0xc0006d6370) Reply frame received for 5\nI0705 10:48:02.611824 154 log.go:172] (0xc0006d6370) Data frame received for 5\nI0705 10:48:02.611879 154 log.go:172] (0xc0005c9540) (5) Data frame handling\nI0705 10:48:02.611911 154 log.go:172] (0xc0006d6370) Data frame received for 3\nI0705 10:48:02.611923 154 log.go:172] (0xc00073e000) (3) Data frame handling\nI0705 10:48:02.611943 154 log.go:172] (0xc00073e000) (3) Data frame sent\nI0705 10:48:02.611959 154 log.go:172] (0xc0006d6370) Data frame received for 3\nI0705 10:48:02.611980 154 log.go:172] (0xc00073e000) (3) Data frame handling\nI0705 10:48:02.613705 154 log.go:172] (0xc0006d6370) Data frame received for 1\nI0705 10:48:02.613733 154 log.go:172] (0xc0005c94a0) (1) Data frame handling\nI0705 10:48:02.613776 154 log.go:172] (0xc0005c94a0) (1) Data frame sent\nI0705 10:48:02.613805 154 log.go:172] (0xc0006d6370) (0xc0005c94a0) Stream removed, broadcasting: 1\nI0705 10:48:02.613836 154 log.go:172] (0xc0006d6370) Go away received\nI0705 10:48:02.613978 154 log.go:172] (0xc0006d6370) (0xc0005c94a0) Stream removed, broadcasting: 1\nI0705 10:48:02.613998 154 log.go:172] (0xc0006d6370) (0xc00073e000) Stream removed, broadcasting: 3\nI0705 10:48:02.614008 154 log.go:172] (0xc0006d6370) (0xc0005c9540) Stream removed, broadcasting: 5\n" Jul 5 10:48:02.616: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 5 10:48:02.616: INFO: stdout of mv -v /tmp/index.html 
/usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 5 10:48:02.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9kkhv ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 5 10:48:02.817: INFO: stderr: "I0705 10:48:02.749950 175 log.go:172] (0xc000138630) (0xc00070a640) Create stream\nI0705 10:48:02.750023 175 log.go:172] (0xc000138630) (0xc00070a640) Stream added, broadcasting: 1\nI0705 10:48:02.752498 175 log.go:172] (0xc000138630) Reply frame received for 1\nI0705 10:48:02.752542 175 log.go:172] (0xc000138630) (0xc000344c80) Create stream\nI0705 10:48:02.752556 175 log.go:172] (0xc000138630) (0xc000344c80) Stream added, broadcasting: 3\nI0705 10:48:02.753708 175 log.go:172] (0xc000138630) Reply frame received for 3\nI0705 10:48:02.753760 175 log.go:172] (0xc000138630) (0xc00070a6e0) Create stream\nI0705 10:48:02.753778 175 log.go:172] (0xc000138630) (0xc00070a6e0) Stream added, broadcasting: 5\nI0705 10:48:02.754762 175 log.go:172] (0xc000138630) Reply frame received for 5\nI0705 10:48:02.812299 175 log.go:172] (0xc000138630) Data frame received for 3\nI0705 10:48:02.812357 175 log.go:172] (0xc000344c80) (3) Data frame handling\nI0705 10:48:02.812374 175 log.go:172] (0xc000344c80) (3) Data frame sent\nI0705 10:48:02.812387 175 log.go:172] (0xc000138630) Data frame received for 3\nI0705 10:48:02.812398 175 log.go:172] (0xc000344c80) (3) Data frame handling\nI0705 10:48:02.812449 175 log.go:172] (0xc000138630) Data frame received for 5\nI0705 10:48:02.812479 175 log.go:172] (0xc00070a6e0) (5) Data frame handling\nI0705 10:48:02.813925 175 log.go:172] (0xc000138630) Data frame received for 1\nI0705 10:48:02.813946 175 log.go:172] (0xc00070a640) (1) Data frame handling\nI0705 10:48:02.813958 175 log.go:172] (0xc00070a640) (1) Data frame sent\nI0705 10:48:02.813990 175 log.go:172] (0xc000138630) (0xc00070a640) Stream removed, 
broadcasting: 1\nI0705 10:48:02.814011 175 log.go:172] (0xc000138630) Go away received\nI0705 10:48:02.814283 175 log.go:172] (0xc000138630) (0xc00070a640) Stream removed, broadcasting: 1\nI0705 10:48:02.814309 175 log.go:172] (0xc000138630) (0xc000344c80) Stream removed, broadcasting: 3\nI0705 10:48:02.814332 175 log.go:172] (0xc000138630) (0xc00070a6e0) Stream removed, broadcasting: 5\n" Jul 5 10:48:02.817: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 5 10:48:02.817: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 5 10:48:02.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9kkhv ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 5 10:48:03.006: INFO: stderr: "I0705 10:48:02.940071 197 log.go:172] (0xc00083e160) (0xc00067c640) Create stream\nI0705 10:48:02.940135 197 log.go:172] (0xc00083e160) (0xc00067c640) Stream added, broadcasting: 1\nI0705 10:48:02.942819 197 log.go:172] (0xc00083e160) Reply frame received for 1\nI0705 10:48:02.942874 197 log.go:172] (0xc00083e160) (0xc0002f2f00) Create stream\nI0705 10:48:02.942891 197 log.go:172] (0xc00083e160) (0xc0002f2f00) Stream added, broadcasting: 3\nI0705 10:48:02.943847 197 log.go:172] (0xc00083e160) Reply frame received for 3\nI0705 10:48:02.943895 197 log.go:172] (0xc00083e160) (0xc0003e8000) Create stream\nI0705 10:48:02.943913 197 log.go:172] (0xc00083e160) (0xc0003e8000) Stream added, broadcasting: 5\nI0705 10:48:02.944828 197 log.go:172] (0xc00083e160) Reply frame received for 5\nI0705 10:48:03.000739 197 log.go:172] (0xc00083e160) Data frame received for 3\nI0705 10:48:03.000778 197 log.go:172] (0xc0002f2f00) (3) Data frame handling\nI0705 10:48:03.000797 197 log.go:172] (0xc0002f2f00) (3) Data frame sent\nI0705 10:48:03.000808 197 log.go:172] (0xc00083e160) Data frame received for 3\nI0705 
10:48:03.000817 197 log.go:172] (0xc0002f2f00) (3) Data frame handling\nI0705 10:48:03.000872 197 log.go:172] (0xc00083e160) Data frame received for 5\nI0705 10:48:03.000939 197 log.go:172] (0xc0003e8000) (5) Data frame handling\nI0705 10:48:03.002050 197 log.go:172] (0xc00083e160) Data frame received for 1\nI0705 10:48:03.002069 197 log.go:172] (0xc00067c640) (1) Data frame handling\nI0705 10:48:03.002088 197 log.go:172] (0xc00067c640) (1) Data frame sent\nI0705 10:48:03.002107 197 log.go:172] (0xc00083e160) (0xc00067c640) Stream removed, broadcasting: 1\nI0705 10:48:03.002213 197 log.go:172] (0xc00083e160) Go away received\nI0705 10:48:03.002306 197 log.go:172] (0xc00083e160) (0xc00067c640) Stream removed, broadcasting: 1\nI0705 10:48:03.002326 197 log.go:172] (0xc00083e160) (0xc0002f2f00) Stream removed, broadcasting: 3\nI0705 10:48:03.002341 197 log.go:172] (0xc00083e160) (0xc0003e8000) Stream removed, broadcasting: 5\n" Jul 5 10:48:03.006: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 5 10:48:03.006: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 5 10:48:03.006: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jul 5 10:48:23.020: INFO: Deleting all statefulset in ns e2e-tests-statefulset-9kkhv Jul 5 10:48:23.023: INFO: Scaling statefulset ss to 0 Jul 5 10:48:23.032: INFO: Waiting for statefulset status.replicas updated to 0 Jul 5 10:48:23.035: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 5 10:48:23.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-statefulset-9kkhv" for this suite.
Jul 5 10:48:29.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 10:48:29.151: INFO: namespace: e2e-tests-statefulset-9kkhv, resource: bindings, ignored listing per whitelist
Jul 5 10:48:29.153: INFO: namespace e2e-tests-statefulset-9kkhv deletion completed in 6.099860606s
• [SLOW TEST:88.720 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Service endpoints latency
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 10:48:29.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-kzng2
I0705 10:48:29.342684 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-kzng2, replica count: 1
I0705 10:48:30.393396 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0
waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0705 10:48:31.393637 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0705 10:48:32.393904 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0705 10:48:33.394236 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 5 10:48:33.531: INFO: Created: latency-svc-vgj2d Jul 5 10:48:33.571: INFO: Got endpoints: latency-svc-vgj2d [76.570128ms] Jul 5 10:48:33.639: INFO: Created: latency-svc-xmgct Jul 5 10:48:33.654: INFO: Got endpoints: latency-svc-xmgct [83.540407ms] Jul 5 10:48:33.675: INFO: Created: latency-svc-nw2t2 Jul 5 10:48:33.690: INFO: Got endpoints: latency-svc-nw2t2 [119.506035ms] Jul 5 10:48:33.790: INFO: Created: latency-svc-wbb9m Jul 5 10:48:33.794: INFO: Got endpoints: latency-svc-wbb9m [223.000574ms] Jul 5 10:48:33.831: INFO: Created: latency-svc-6fdb9 Jul 5 10:48:33.840: INFO: Got endpoints: latency-svc-6fdb9 [269.196321ms] Jul 5 10:48:33.861: INFO: Created: latency-svc-w2vh5 Jul 5 10:48:33.879: INFO: Got endpoints: latency-svc-w2vh5 [308.004484ms] Jul 5 10:48:33.958: INFO: Created: latency-svc-t724k Jul 5 10:48:33.985: INFO: Got endpoints: latency-svc-t724k [413.941024ms] Jul 5 10:48:34.033: INFO: Created: latency-svc-l2594 Jul 5 10:48:34.051: INFO: Got endpoints: latency-svc-l2594 [479.951755ms] Jul 5 10:48:34.113: INFO: Created: latency-svc-7lz7g Jul 5 10:48:34.122: INFO: Got endpoints: latency-svc-7lz7g [551.480059ms] Jul 5 10:48:34.150: INFO: Created: latency-svc-8xp4s Jul 5 10:48:34.177: INFO: Got endpoints: latency-svc-8xp4s [606.469589ms] Jul 5 10:48:34.212: INFO: Created: latency-svc-8jhxg Jul 5 10:48:34.256: INFO: Got endpoints: latency-svc-8jhxg [685.320163ms] Jul 5 
10:48:34.284: INFO: Created: latency-svc-9fd67 Jul 5 10:48:34.303: INFO: Got endpoints: latency-svc-9fd67 [732.351076ms] Jul 5 10:48:34.327: INFO: Created: latency-svc-tdczr Jul 5 10:48:34.345: INFO: Got endpoints: latency-svc-tdczr [774.558791ms] Jul 5 10:48:34.400: INFO: Created: latency-svc-vnwsk Jul 5 10:48:34.404: INFO: Got endpoints: latency-svc-vnwsk [832.98988ms] Jul 5 10:48:34.431: INFO: Created: latency-svc-bhcwj Jul 5 10:48:34.448: INFO: Got endpoints: latency-svc-bhcwj [876.893108ms] Jul 5 10:48:34.483: INFO: Created: latency-svc-nfptp Jul 5 10:48:34.496: INFO: Got endpoints: latency-svc-nfptp [925.118778ms] Jul 5 10:48:34.550: INFO: Created: latency-svc-prgvq Jul 5 10:48:34.556: INFO: Got endpoints: latency-svc-prgvq [901.630148ms] Jul 5 10:48:34.585: INFO: Created: latency-svc-wrbkj Jul 5 10:48:34.599: INFO: Got endpoints: latency-svc-wrbkj [908.425145ms] Jul 5 10:48:34.629: INFO: Created: latency-svc-72w2r Jul 5 10:48:34.646: INFO: Got endpoints: latency-svc-72w2r [852.758963ms] Jul 5 10:48:34.717: INFO: Created: latency-svc-8x8lf Jul 5 10:48:34.721: INFO: Got endpoints: latency-svc-8x8lf [881.422762ms] Jul 5 10:48:34.752: INFO: Created: latency-svc-7tjkl Jul 5 10:48:34.767: INFO: Got endpoints: latency-svc-7tjkl [887.999937ms] Jul 5 10:48:34.796: INFO: Created: latency-svc-xl5d4 Jul 5 10:48:34.903: INFO: Got endpoints: latency-svc-xl5d4 [918.359421ms] Jul 5 10:48:34.933: INFO: Created: latency-svc-stl5g Jul 5 10:48:34.965: INFO: Got endpoints: latency-svc-stl5g [914.248011ms] Jul 5 10:48:34.995: INFO: Created: latency-svc-4nxsx Jul 5 10:48:35.053: INFO: Got endpoints: latency-svc-4nxsx [930.159907ms] Jul 5 10:48:35.080: INFO: Created: latency-svc-zqjgt Jul 5 10:48:35.112: INFO: Got endpoints: latency-svc-zqjgt [934.632492ms] Jul 5 10:48:35.203: INFO: Created: latency-svc-2cfvv Jul 5 10:48:35.207: INFO: Got endpoints: latency-svc-2cfvv [951.220342ms] Jul 5 10:48:35.241: INFO: Created: latency-svc-tmgv4 Jul 5 10:48:35.258: INFO: Got endpoints: 
latency-svc-tmgv4 [954.64276ms] Jul 5 10:48:35.295: INFO: Created: latency-svc-tx8dp Jul 5 10:48:35.330: INFO: Got endpoints: latency-svc-tx8dp [984.424338ms] Jul 5 10:48:35.352: INFO: Created: latency-svc-jpfct Jul 5 10:48:35.384: INFO: Got endpoints: latency-svc-jpfct [980.501348ms] Jul 5 10:48:35.472: INFO: Created: latency-svc-b8jvf Jul 5 10:48:35.476: INFO: Got endpoints: latency-svc-b8jvf [1.028623707s] Jul 5 10:48:35.504: INFO: Created: latency-svc-wckp8 Jul 5 10:48:35.523: INFO: Got endpoints: latency-svc-wckp8 [1.026847702s] Jul 5 10:48:35.553: INFO: Created: latency-svc-5rkml Jul 5 10:48:35.571: INFO: Got endpoints: latency-svc-5rkml [1.014886271s] Jul 5 10:48:35.619: INFO: Created: latency-svc-58ln2 Jul 5 10:48:35.631: INFO: Got endpoints: latency-svc-58ln2 [1.032221404s] Jul 5 10:48:35.682: INFO: Created: latency-svc-t9hdm Jul 5 10:48:35.703: INFO: Got endpoints: latency-svc-t9hdm [1.056482509s] Jul 5 10:48:35.759: INFO: Created: latency-svc-z2qlg Jul 5 10:48:35.762: INFO: Got endpoints: latency-svc-z2qlg [1.041051237s] Jul 5 10:48:35.804: INFO: Created: latency-svc-vfzz7 Jul 5 10:48:35.830: INFO: Got endpoints: latency-svc-vfzz7 [1.062571588s] Jul 5 10:48:35.921: INFO: Created: latency-svc-2h6hn Jul 5 10:48:35.925: INFO: Got endpoints: latency-svc-2h6hn [1.021633898s] Jul 5 10:48:35.970: INFO: Created: latency-svc-ldnq7 Jul 5 10:48:35.986: INFO: Got endpoints: latency-svc-ldnq7 [1.021177163s] Jul 5 10:48:36.071: INFO: Created: latency-svc-gtg7g Jul 5 10:48:36.073: INFO: Got endpoints: latency-svc-gtg7g [1.020621846s] Jul 5 10:48:36.104: INFO: Created: latency-svc-fkcv2 Jul 5 10:48:36.118: INFO: Got endpoints: latency-svc-fkcv2 [1.006238858s] Jul 5 10:48:36.148: INFO: Created: latency-svc-qcrgp Jul 5 10:48:36.155: INFO: Got endpoints: latency-svc-qcrgp [947.842778ms] Jul 5 10:48:36.257: INFO: Created: latency-svc-fggrh Jul 5 10:48:36.259: INFO: Got endpoints: latency-svc-fggrh [1.001242968s] Jul 5 10:48:37.672: INFO: Created: latency-svc-b4nml Jul 5 
10:48:37.688: INFO: Got endpoints: latency-svc-b4nml [2.358089075s] Jul 5 10:48:37.766: INFO: Created: latency-svc-48nfn Jul 5 10:48:37.769: INFO: Got endpoints: latency-svc-48nfn [2.384631366s] Jul 5 10:48:37.800: INFO: Created: latency-svc-wc4bn Jul 5 10:48:37.814: INFO: Got endpoints: latency-svc-wc4bn [2.337922257s] Jul 5 10:48:37.842: INFO: Created: latency-svc-gw58r Jul 5 10:48:37.851: INFO: Got endpoints: latency-svc-gw58r [2.32770765s] Jul 5 10:48:37.909: INFO: Created: latency-svc-7plm6 Jul 5 10:48:37.912: INFO: Got endpoints: latency-svc-7plm6 [2.340730171s] Jul 5 10:48:37.959: INFO: Created: latency-svc-75q7w Jul 5 10:48:37.990: INFO: Got endpoints: latency-svc-75q7w [2.358559366s] Jul 5 10:48:38.090: INFO: Created: latency-svc-8pwpl Jul 5 10:48:38.091: INFO: Got endpoints: latency-svc-8pwpl [2.388439438s] Jul 5 10:48:38.136: INFO: Created: latency-svc-rzcgx Jul 5 10:48:38.146: INFO: Got endpoints: latency-svc-rzcgx [2.383356088s] Jul 5 10:48:38.188: INFO: Created: latency-svc-f9hxb Jul 5 10:48:38.316: INFO: Got endpoints: latency-svc-f9hxb [2.486667711s] Jul 5 10:48:38.334: INFO: Created: latency-svc-87hkx Jul 5 10:48:38.357: INFO: Got endpoints: latency-svc-87hkx [2.43156146s] Jul 5 10:48:38.392: INFO: Created: latency-svc-7plzq Jul 5 10:48:38.410: INFO: Got endpoints: latency-svc-7plzq [2.423839199s] Jul 5 10:48:38.472: INFO: Created: latency-svc-kvvr6 Jul 5 10:48:38.476: INFO: Got endpoints: latency-svc-kvvr6 [2.402324786s] Jul 5 10:48:38.507: INFO: Created: latency-svc-8hhff Jul 5 10:48:38.525: INFO: Got endpoints: latency-svc-8hhff [2.406606634s] Jul 5 10:48:38.550: INFO: Created: latency-svc-cc25z Jul 5 10:48:38.652: INFO: Created: latency-svc-45c7j Jul 5 10:48:38.655: INFO: Got endpoints: latency-svc-45c7j [2.395716226s] Jul 5 10:48:38.655: INFO: Got endpoints: latency-svc-cc25z [2.499667657s] Jul 5 10:48:38.691: INFO: Created: latency-svc-k4svh Jul 5 10:48:38.706: INFO: Got endpoints: latency-svc-k4svh [1.017451891s] Jul 5 10:48:38.733: INFO: 
Created: latency-svc-jj5rn Jul 5 10:48:38.748: INFO: Got endpoints: latency-svc-jj5rn [978.699695ms] Jul 5 10:48:38.809: INFO: Created: latency-svc-67jpz Jul 5 10:48:38.833: INFO: Got endpoints: latency-svc-67jpz [1.01858144s] Jul 5 10:48:38.867: INFO: Created: latency-svc-k7c2d Jul 5 10:48:38.897: INFO: Got endpoints: latency-svc-k7c2d [1.046132148s] Jul 5 10:48:38.975: INFO: Created: latency-svc-cpxd6 Jul 5 10:48:38.990: INFO: Got endpoints: latency-svc-cpxd6 [1.078618439s] Jul 5 10:48:39.021: INFO: Created: latency-svc-lcqqb Jul 5 10:48:39.051: INFO: Got endpoints: latency-svc-lcqqb [1.061019553s] Jul 5 10:48:39.126: INFO: Created: latency-svc-jd2ps Jul 5 10:48:39.138: INFO: Got endpoints: latency-svc-jd2ps [1.046596607s] Jul 5 10:48:39.179: INFO: Created: latency-svc-q27gk Jul 5 10:48:39.287: INFO: Got endpoints: latency-svc-q27gk [1.140712289s] Jul 5 10:48:39.290: INFO: Created: latency-svc-qpmrm Jul 5 10:48:39.300: INFO: Got endpoints: latency-svc-qpmrm [984.088716ms] Jul 5 10:48:39.327: INFO: Created: latency-svc-9chcn Jul 5 10:48:39.343: INFO: Got endpoints: latency-svc-9chcn [986.409041ms] Jul 5 10:48:39.377: INFO: Created: latency-svc-b26c7 Jul 5 10:48:39.502: INFO: Got endpoints: latency-svc-b26c7 [1.091955342s] Jul 5 10:48:39.510: INFO: Created: latency-svc-g7gwg Jul 5 10:48:39.518: INFO: Got endpoints: latency-svc-g7gwg [1.042129029s] Jul 5 10:48:39.592: INFO: Created: latency-svc-tdsv5 Jul 5 10:48:39.705: INFO: Got endpoints: latency-svc-tdsv5 [1.180014375s] Jul 5 10:48:40.716: INFO: Created: latency-svc-q47zc Jul 5 10:48:40.783: INFO: Got endpoints: latency-svc-q47zc [2.127554577s] Jul 5 10:48:40.807: INFO: Created: latency-svc-96z5n Jul 5 10:48:40.839: INFO: Got endpoints: latency-svc-96z5n [2.183856726s] Jul 5 10:48:40.869: INFO: Created: latency-svc-jm4kk Jul 5 10:48:40.939: INFO: Got endpoints: latency-svc-jm4kk [2.233454768s] Jul 5 10:48:40.953: INFO: Created: latency-svc-p9nv9 Jul 5 10:48:40.975: INFO: Got endpoints: latency-svc-p9nv9 
[2.226827113s] Jul 5 10:48:41.005: INFO: Created: latency-svc-95xdb Jul 5 10:48:41.035: INFO: Got endpoints: latency-svc-95xdb [2.20154379s] Jul 5 10:48:41.102: INFO: Created: latency-svc-8plzz Jul 5 10:48:41.113: INFO: Got endpoints: latency-svc-8plzz [2.215631312s] Jul 5 10:48:41.139: INFO: Created: latency-svc-tr9x6 Jul 5 10:48:41.155: INFO: Got endpoints: latency-svc-tr9x6 [2.164225492s] Jul 5 10:48:41.190: INFO: Created: latency-svc-wj7d7 Jul 5 10:48:41.274: INFO: Got endpoints: latency-svc-wj7d7 [2.223708047s] Jul 5 10:48:41.277: INFO: Created: latency-svc-kq5nt Jul 5 10:48:41.287: INFO: Got endpoints: latency-svc-kq5nt [2.149158079s] Jul 5 10:48:41.317: INFO: Created: latency-svc-qgqwf Jul 5 10:48:41.336: INFO: Got endpoints: latency-svc-qgqwf [2.049038593s] Jul 5 10:48:41.355: INFO: Created: latency-svc-s76t5 Jul 5 10:48:41.372: INFO: Got endpoints: latency-svc-s76t5 [2.071536774s] Jul 5 10:48:41.448: INFO: Created: latency-svc-9h5q6 Jul 5 10:48:41.456: INFO: Got endpoints: latency-svc-9h5q6 [2.112360105s] Jul 5 10:48:41.515: INFO: Created: latency-svc-2q9h2 Jul 5 10:48:41.522: INFO: Got endpoints: latency-svc-2q9h2 [2.019539556s] Jul 5 10:48:41.592: INFO: Created: latency-svc-gw4gd Jul 5 10:48:41.594: INFO: Got endpoints: latency-svc-gw4gd [2.076254551s] Jul 5 10:48:41.625: INFO: Created: latency-svc-96pw5 Jul 5 10:48:41.630: INFO: Got endpoints: latency-svc-96pw5 [1.925070803s] Jul 5 10:48:41.662: INFO: Created: latency-svc-zbnn5 Jul 5 10:48:41.667: INFO: Got endpoints: latency-svc-zbnn5 [884.209056ms] Jul 5 10:48:41.747: INFO: Created: latency-svc-7rv2f Jul 5 10:48:41.750: INFO: Got endpoints: latency-svc-7rv2f [911.250993ms] Jul 5 10:48:41.784: INFO: Created: latency-svc-dphd6 Jul 5 10:48:41.799: INFO: Got endpoints: latency-svc-dphd6 [859.846842ms] Jul 5 10:48:41.821: INFO: Created: latency-svc-pnw7d Jul 5 10:48:41.835: INFO: Got endpoints: latency-svc-pnw7d [860.614728ms] Jul 5 10:48:41.903: INFO: Created: latency-svc-mbx4x Jul 5 10:48:41.908: INFO: 
Got endpoints: latency-svc-mbx4x [873.25793ms] Jul 5 10:48:41.943: INFO: Created: latency-svc-gt5dj Jul 5 10:48:41.956: INFO: Got endpoints: latency-svc-gt5dj [843.166326ms] Jul 5 10:48:41.985: INFO: Created: latency-svc-wdwbc Jul 5 10:48:41.998: INFO: Got endpoints: latency-svc-wdwbc [843.445981ms] Jul 5 10:48:42.072: INFO: Created: latency-svc-smphw Jul 5 10:48:42.095: INFO: Got endpoints: latency-svc-smphw [819.966505ms] Jul 5 10:48:42.132: INFO: Created: latency-svc-5mjrs Jul 5 10:48:42.161: INFO: Got endpoints: latency-svc-5mjrs [873.634177ms] Jul 5 10:48:42.233: INFO: Created: latency-svc-c2x2j Jul 5 10:48:42.235: INFO: Got endpoints: latency-svc-c2x2j [899.057574ms] Jul 5 10:48:42.289: INFO: Created: latency-svc-mkh8w Jul 5 10:48:42.311: INFO: Got endpoints: latency-svc-mkh8w [939.24237ms] Jul 5 10:48:42.388: INFO: Created: latency-svc-zhwv4 Jul 5 10:48:42.391: INFO: Got endpoints: latency-svc-zhwv4 [935.529526ms] Jul 5 10:48:42.420: INFO: Created: latency-svc-rmccs Jul 5 10:48:42.438: INFO: Got endpoints: latency-svc-rmccs [915.800518ms] Jul 5 10:48:42.465: INFO: Created: latency-svc-wsvh9 Jul 5 10:48:42.474: INFO: Got endpoints: latency-svc-wsvh9 [879.362234ms] Jul 5 10:48:42.539: INFO: Created: latency-svc-l4k5k Jul 5 10:48:42.542: INFO: Got endpoints: latency-svc-l4k5k [911.748899ms] Jul 5 10:48:42.618: INFO: Created: latency-svc-bbh9j Jul 5 10:48:42.636: INFO: Got endpoints: latency-svc-bbh9j [968.534398ms] Jul 5 10:48:42.742: INFO: Created: latency-svc-nmtqm Jul 5 10:48:42.748: INFO: Got endpoints: latency-svc-nmtqm [997.428519ms] Jul 5 10:48:42.795: INFO: Created: latency-svc-9vwbt Jul 5 10:48:42.804: INFO: Got endpoints: latency-svc-9vwbt [1.005228308s] Jul 5 10:48:42.831: INFO: Created: latency-svc-ts5hh Jul 5 10:48:42.841: INFO: Got endpoints: latency-svc-ts5hh [1.005525711s] Jul 5 10:48:42.903: INFO: Created: latency-svc-8bd5q Jul 5 10:48:42.906: INFO: Got endpoints: latency-svc-8bd5q [998.225293ms] Jul 5 10:48:42.978: INFO: Created: 
latency-svc-pr8d7 Jul 5 10:48:42.998: INFO: Got endpoints: latency-svc-pr8d7 [1.041510605s] Jul 5 10:48:43.053: INFO: Created: latency-svc-j5bp7 Jul 5 10:48:43.056: INFO: Got endpoints: latency-svc-j5bp7 [1.057746983s] Jul 5 10:48:43.107: INFO: Created: latency-svc-t496p Jul 5 10:48:43.117: INFO: Got endpoints: latency-svc-t496p [1.022626657s] Jul 5 10:48:43.202: INFO: Created: latency-svc-d4tj7 Jul 5 10:48:43.207: INFO: Got endpoints: latency-svc-d4tj7 [1.045882138s] Jul 5 10:48:43.236: INFO: Created: latency-svc-xq66z Jul 5 10:48:43.268: INFO: Got endpoints: latency-svc-xq66z [1.032849786s] Jul 5 10:48:43.293: INFO: Created: latency-svc-ktb88 Jul 5 10:48:43.358: INFO: Got endpoints: latency-svc-ktb88 [1.046481373s] Jul 5 10:48:43.370: INFO: Created: latency-svc-vm5b7 Jul 5 10:48:43.382: INFO: Got endpoints: latency-svc-vm5b7 [990.657514ms] Jul 5 10:48:43.415: INFO: Created: latency-svc-rd9mg Jul 5 10:48:43.430: INFO: Got endpoints: latency-svc-rd9mg [992.448842ms] Jul 5 10:48:43.458: INFO: Created: latency-svc-c6lht Jul 5 10:48:43.544: INFO: Got endpoints: latency-svc-c6lht [1.069752984s] Jul 5 10:48:43.580: INFO: Created: latency-svc-kmwxc Jul 5 10:48:43.599: INFO: Got endpoints: latency-svc-kmwxc [1.056966513s] Jul 5 10:48:43.638: INFO: Created: latency-svc-869ks Jul 5 10:48:43.717: INFO: Got endpoints: latency-svc-869ks [1.081442571s] Jul 5 10:48:43.719: INFO: Created: latency-svc-9vjmc Jul 5 10:48:43.737: INFO: Got endpoints: latency-svc-9vjmc [989.536296ms] Jul 5 10:48:43.766: INFO: Created: latency-svc-9ht7x Jul 5 10:48:43.779: INFO: Got endpoints: latency-svc-9ht7x [974.870573ms] Jul 5 10:48:43.885: INFO: Created: latency-svc-8wn52 Jul 5 10:48:43.887: INFO: Got endpoints: latency-svc-8wn52 [1.046479962s] Jul 5 10:48:43.919: INFO: Created: latency-svc-kprm6 Jul 5 10:48:43.936: INFO: Got endpoints: latency-svc-kprm6 [1.029719808s] Jul 5 10:48:43.974: INFO: Created: latency-svc-jk4t6 Jul 5 10:48:44.077: INFO: Got endpoints: latency-svc-jk4t6 [1.079250775s] 
Jul 5 10:48:44.079: INFO: Created: latency-svc-6q84n Jul 5 10:48:44.092: INFO: Got endpoints: latency-svc-6q84n [1.035860185s] Jul 5 10:48:44.226: INFO: Created: latency-svc-2htc4 Jul 5 10:48:44.229: INFO: Got endpoints: latency-svc-2htc4 [1.111796086s] Jul 5 10:48:44.288: INFO: Created: latency-svc-8xpmd Jul 5 10:48:44.311: INFO: Got endpoints: latency-svc-8xpmd [1.103443568s] Jul 5 10:48:44.377: INFO: Created: latency-svc-kxg8m Jul 5 10:48:44.379: INFO: Got endpoints: latency-svc-kxg8m [1.111310573s] Jul 5 10:48:44.430: INFO: Created: latency-svc-mmgb7 Jul 5 10:48:44.453: INFO: Got endpoints: latency-svc-mmgb7 [1.094868441s] Jul 5 10:48:44.533: INFO: Created: latency-svc-686x2 Jul 5 10:48:44.535: INFO: Got endpoints: latency-svc-686x2 [1.153319865s] Jul 5 10:48:44.588: INFO: Created: latency-svc-7zf4f Jul 5 10:48:44.603: INFO: Got endpoints: latency-svc-7zf4f [1.172462637s] Jul 5 10:48:44.699: INFO: Created: latency-svc-nhfhj Jul 5 10:48:44.703: INFO: Got endpoints: latency-svc-nhfhj [1.158867464s] Jul 5 10:48:44.759: INFO: Created: latency-svc-qh6zb Jul 5 10:48:44.778: INFO: Got endpoints: latency-svc-qh6zb [1.178605379s] Jul 5 10:48:44.868: INFO: Created: latency-svc-7qhxk Jul 5 10:48:44.893: INFO: Got endpoints: latency-svc-7qhxk [1.176126928s] Jul 5 10:48:44.894: INFO: Created: latency-svc-jfjzk Jul 5 10:48:44.910: INFO: Got endpoints: latency-svc-jfjzk [1.172467708s] Jul 5 10:48:44.936: INFO: Created: latency-svc-8smmg Jul 5 10:48:44.946: INFO: Got endpoints: latency-svc-8smmg [1.166931399s] Jul 5 10:48:45.029: INFO: Created: latency-svc-4xkr2 Jul 5 10:48:45.032: INFO: Got endpoints: latency-svc-4xkr2 [1.144166818s] Jul 5 10:48:45.089: INFO: Created: latency-svc-d4hzd Jul 5 10:48:45.102: INFO: Got endpoints: latency-svc-d4hzd [1.165991631s] Jul 5 10:48:45.125: INFO: Created: latency-svc-gs9nx Jul 5 10:48:45.209: INFO: Got endpoints: latency-svc-gs9nx [1.131491132s] Jul 5 10:48:45.235: INFO: Created: latency-svc-pzcql Jul 5 10:48:45.262: INFO: Got endpoints: 
latency-svc-pzcql [1.170244045s] Jul 5 10:48:45.293: INFO: Created: latency-svc-rpnfr Jul 5 10:48:45.376: INFO: Got endpoints: latency-svc-rpnfr [1.14698749s] Jul 5 10:48:45.387: INFO: Created: latency-svc-dh5dm Jul 5 10:48:45.409: INFO: Got endpoints: latency-svc-dh5dm [1.098836271s] Jul 5 10:48:45.445: INFO: Created: latency-svc-lcbtq Jul 5 10:48:45.457: INFO: Got endpoints: latency-svc-lcbtq [1.078263123s] Jul 5 10:48:45.514: INFO: Created: latency-svc-lwwcr Jul 5 10:48:45.517: INFO: Got endpoints: latency-svc-lwwcr [1.064470819s] Jul 5 10:48:45.563: INFO: Created: latency-svc-zq8d9 Jul 5 10:48:45.590: INFO: Got endpoints: latency-svc-zq8d9 [1.054619775s] Jul 5 10:48:45.701: INFO: Created: latency-svc-qcnmp Jul 5 10:48:45.704: INFO: Got endpoints: latency-svc-qcnmp [1.100751223s] Jul 5 10:48:45.754: INFO: Created: latency-svc-nwdfc Jul 5 10:48:45.776: INFO: Got endpoints: latency-svc-nwdfc [1.073592345s] Jul 5 10:48:45.843: INFO: Created: latency-svc-bsrz5 Jul 5 10:48:45.845: INFO: Got endpoints: latency-svc-bsrz5 [1.067749276s] Jul 5 10:48:45.889: INFO: Created: latency-svc-4bfnt Jul 5 10:48:45.902: INFO: Got endpoints: latency-svc-4bfnt [1.008922036s] Jul 5 10:48:45.932: INFO: Created: latency-svc-nwd7r Jul 5 10:48:46.022: INFO: Got endpoints: latency-svc-nwd7r [1.112455897s] Jul 5 10:48:46.026: INFO: Created: latency-svc-9t5pr Jul 5 10:48:46.035: INFO: Got endpoints: latency-svc-9t5pr [1.088546738s] Jul 5 10:48:46.061: INFO: Created: latency-svc-9tslg Jul 5 10:48:46.077: INFO: Got endpoints: latency-svc-9tslg [1.045263988s] Jul 5 10:48:46.115: INFO: Created: latency-svc-r42mq Jul 5 10:48:46.160: INFO: Got endpoints: latency-svc-r42mq [1.058024077s] Jul 5 10:48:46.183: INFO: Created: latency-svc-jffpv Jul 5 10:48:46.198: INFO: Got endpoints: latency-svc-jffpv [989.174055ms] Jul 5 10:48:46.242: INFO: Created: latency-svc-g24ml Jul 5 10:48:46.258: INFO: Got endpoints: latency-svc-g24ml [995.636887ms] Jul 5 10:48:46.352: INFO: Created: latency-svc-7s52h Jul 5 
10:48:46.360: INFO: Got endpoints: latency-svc-7s52h [983.754818ms] Jul 5 10:48:46.391: INFO: Created: latency-svc-p459k Jul 5 10:48:46.408: INFO: Got endpoints: latency-svc-p459k [998.836056ms] Jul 5 10:48:46.429: INFO: Created: latency-svc-g7nr8 Jul 5 10:48:46.444: INFO: Got endpoints: latency-svc-g7nr8 [986.77683ms] Jul 5 10:48:46.502: INFO: Created: latency-svc-4tpbt Jul 5 10:48:46.510: INFO: Got endpoints: latency-svc-4tpbt [992.924446ms] Jul 5 10:48:46.536: INFO: Created: latency-svc-qp6j2 Jul 5 10:48:46.547: INFO: Got endpoints: latency-svc-qp6j2 [956.508185ms] Jul 5 10:48:46.577: INFO: Created: latency-svc-sfwsv Jul 5 10:48:46.595: INFO: Got endpoints: latency-svc-sfwsv [891.234255ms] Jul 5 10:48:46.658: INFO: Created: latency-svc-cq4qc Jul 5 10:48:46.680: INFO: Got endpoints: latency-svc-cq4qc [904.119272ms] Jul 5 10:48:46.717: INFO: Created: latency-svc-579pz Jul 5 10:48:46.728: INFO: Got endpoints: latency-svc-579pz [882.300692ms] Jul 5 10:48:46.753: INFO: Created: latency-svc-cch9x Jul 5 10:48:46.807: INFO: Got endpoints: latency-svc-cch9x [904.647549ms] Jul 5 10:48:46.822: INFO: Created: latency-svc-czv52 Jul 5 10:48:46.852: INFO: Got endpoints: latency-svc-czv52 [829.194676ms] Jul 5 10:48:46.888: INFO: Created: latency-svc-ts9x2 Jul 5 10:48:46.969: INFO: Got endpoints: latency-svc-ts9x2 [933.982185ms] Jul 5 10:48:46.970: INFO: Created: latency-svc-x7tdq Jul 5 10:48:46.980: INFO: Got endpoints: latency-svc-x7tdq [903.106709ms] Jul 5 10:48:47.011: INFO: Created: latency-svc-nmsvc Jul 5 10:48:47.035: INFO: Got endpoints: latency-svc-nmsvc [874.251162ms] Jul 5 10:48:47.072: INFO: Created: latency-svc-tssqp Jul 5 10:48:47.143: INFO: Got endpoints: latency-svc-tssqp [944.87024ms] Jul 5 10:48:47.146: INFO: Created: latency-svc-qtj2v Jul 5 10:48:47.155: INFO: Got endpoints: latency-svc-qtj2v [897.156833ms] Jul 5 10:48:47.183: INFO: Created: latency-svc-k4272 Jul 5 10:48:47.198: INFO: Got endpoints: latency-svc-k4272 [837.854621ms] Jul 5 10:48:47.238: INFO: 
Created: latency-svc-nq2rb Jul 5 10:48:47.294: INFO: Got endpoints: latency-svc-nq2rb [885.277059ms] Jul 5 10:48:47.323: INFO: Created: latency-svc-hgmfl Jul 5 10:48:47.336: INFO: Got endpoints: latency-svc-hgmfl [891.576543ms] Jul 5 10:48:47.362: INFO: Created: latency-svc-62l4m Jul 5 10:48:47.378: INFO: Got endpoints: latency-svc-62l4m [867.76554ms] Jul 5 10:48:47.443: INFO: Created: latency-svc-ptv88 Jul 5 10:48:47.444: INFO: Got endpoints: latency-svc-ptv88 [897.80534ms] Jul 5 10:48:47.476: INFO: Created: latency-svc-p75tv Jul 5 10:48:47.508: INFO: Got endpoints: latency-svc-p75tv [913.038077ms] Jul 5 10:48:47.539: INFO: Created: latency-svc-zb84x Jul 5 10:48:47.603: INFO: Got endpoints: latency-svc-zb84x [923.001328ms] Jul 5 10:48:47.607: INFO: Created: latency-svc-jmthm Jul 5 10:48:47.619: INFO: Got endpoints: latency-svc-jmthm [891.109299ms] Jul 5 10:48:47.674: INFO: Created: latency-svc-qmq57 Jul 5 10:48:47.783: INFO: Got endpoints: latency-svc-qmq57 [976.023397ms] Jul 5 10:48:47.786: INFO: Created: latency-svc-z2dlz Jul 5 10:48:47.793: INFO: Got endpoints: latency-svc-z2dlz [941.190008ms] Jul 5 10:48:47.826: INFO: Created: latency-svc-pbdkd Jul 5 10:48:47.847: INFO: Got endpoints: latency-svc-pbdkd [878.339594ms] Jul 5 10:48:47.957: INFO: Created: latency-svc-7hxlc Jul 5 10:48:47.960: INFO: Got endpoints: latency-svc-7hxlc [980.315996ms] Jul 5 10:48:47.994: INFO: Created: latency-svc-q9r6r Jul 5 10:48:48.010: INFO: Got endpoints: latency-svc-q9r6r [975.212373ms] Jul 5 10:48:48.034: INFO: Created: latency-svc-bm5l9 Jul 5 10:48:48.052: INFO: Got endpoints: latency-svc-bm5l9 [909.483989ms] Jul 5 10:48:48.113: INFO: Created: latency-svc-zfn7n Jul 5 10:48:48.147: INFO: Got endpoints: latency-svc-zfn7n [992.23577ms] Jul 5 10:48:48.210: INFO: Created: latency-svc-8kml6 Jul 5 10:48:48.286: INFO: Got endpoints: latency-svc-8kml6 [1.088262488s] Jul 5 10:48:48.288: INFO: Created: latency-svc-zccgb Jul 5 10:48:48.305: INFO: Got endpoints: latency-svc-zccgb 
[1.011317161s] Jul 5 10:48:48.351: INFO: Created: latency-svc-t42kz Jul 5 10:48:48.378: INFO: Got endpoints: latency-svc-t42kz [1.04186519s] Jul 5 10:48:48.448: INFO: Created: latency-svc-bq4zz Jul 5 10:48:48.455: INFO: Got endpoints: latency-svc-bq4zz [1.07647544s] Jul 5 10:48:48.490: INFO: Created: latency-svc-42wvr Jul 5 10:48:48.504: INFO: Got endpoints: latency-svc-42wvr [1.058969749s] Jul 5 10:48:48.531: INFO: Created: latency-svc-qwhtt Jul 5 10:48:48.586: INFO: Got endpoints: latency-svc-qwhtt [1.077591681s] Jul 5 10:48:48.606: INFO: Created: latency-svc-zdxkg Jul 5 10:48:48.624: INFO: Got endpoints: latency-svc-zdxkg [1.020770198s] Jul 5 10:48:48.660: INFO: Created: latency-svc-9wgm4 Jul 5 10:48:48.747: INFO: Created: latency-svc-x7m6r Jul 5 10:48:48.777: INFO: Got endpoints: latency-svc-9wgm4 [1.158501527s] Jul 5 10:48:48.778: INFO: Got endpoints: latency-svc-x7m6r [994.484543ms] Jul 5 10:48:48.846: INFO: Created: latency-svc-pkxvn Jul 5 10:48:48.921: INFO: Got endpoints: latency-svc-pkxvn [1.128122444s] Jul 5 10:48:48.923: INFO: Created: latency-svc-qmzq9 Jul 5 10:48:48.930: INFO: Got endpoints: latency-svc-qmzq9 [1.083140399s] Jul 5 10:48:48.957: INFO: Created: latency-svc-mnxg6 Jul 5 10:48:48.970: INFO: Got endpoints: latency-svc-mnxg6 [1.009950261s] Jul 5 10:48:48.996: INFO: Created: latency-svc-5tzgj Jul 5 10:48:49.015: INFO: Got endpoints: latency-svc-5tzgj [1.005518003s] Jul 5 10:48:49.083: INFO: Created: latency-svc-6vnk6 Jul 5 10:48:49.085: INFO: Got endpoints: latency-svc-6vnk6 [1.032893342s] Jul 5 10:48:49.239: INFO: Created: latency-svc-45r5f Jul 5 10:48:49.255: INFO: Got endpoints: latency-svc-45r5f [1.108018971s] Jul 5 10:48:49.278: INFO: Created: latency-svc-pc5jc Jul 5 10:48:49.292: INFO: Got endpoints: latency-svc-pc5jc [1.005423014s] Jul 5 10:48:49.320: INFO: Created: latency-svc-gz9sc Jul 5 10:48:49.394: INFO: Got endpoints: latency-svc-gz9sc [1.088702924s] Jul 5 10:48:49.407: INFO: Created: latency-svc-8q4bn Jul 5 10:48:49.424: INFO: 
Got endpoints: latency-svc-8q4bn [1.045833436s] Jul 5 10:48:49.450: INFO: Created: latency-svc-847dr Jul 5 10:48:49.460: INFO: Got endpoints: latency-svc-847dr [1.005156765s] Jul 5 10:48:49.460: INFO: Latencies: [83.540407ms 119.506035ms 223.000574ms 269.196321ms 308.004484ms 413.941024ms 479.951755ms 551.480059ms 606.469589ms 685.320163ms 732.351076ms 774.558791ms 819.966505ms 829.194676ms 832.98988ms 837.854621ms 843.166326ms 843.445981ms 852.758963ms 859.846842ms 860.614728ms 867.76554ms 873.25793ms 873.634177ms 874.251162ms 876.893108ms 878.339594ms 879.362234ms 881.422762ms 882.300692ms 884.209056ms 885.277059ms 887.999937ms 891.109299ms 891.234255ms 891.576543ms 897.156833ms 897.80534ms 899.057574ms 901.630148ms 903.106709ms 904.119272ms 904.647549ms 908.425145ms 909.483989ms 911.250993ms 911.748899ms 913.038077ms 914.248011ms 915.800518ms 918.359421ms 923.001328ms 925.118778ms 930.159907ms 933.982185ms 934.632492ms 935.529526ms 939.24237ms 941.190008ms 944.87024ms 947.842778ms 951.220342ms 954.64276ms 956.508185ms 968.534398ms 974.870573ms 975.212373ms 976.023397ms 978.699695ms 980.315996ms 980.501348ms 983.754818ms 984.088716ms 984.424338ms 986.409041ms 986.77683ms 989.174055ms 989.536296ms 990.657514ms 992.23577ms 992.448842ms 992.924446ms 994.484543ms 995.636887ms 997.428519ms 998.225293ms 998.836056ms 1.001242968s 1.005156765s 1.005228308s 1.005423014s 1.005518003s 1.005525711s 1.006238858s 1.008922036s 1.009950261s 1.011317161s 1.014886271s 1.017451891s 1.01858144s 1.020621846s 1.020770198s 1.021177163s 1.021633898s 1.022626657s 1.026847702s 1.028623707s 1.029719808s 1.032221404s 1.032849786s 1.032893342s 1.035860185s 1.041051237s 1.041510605s 1.04186519s 1.042129029s 1.045263988s 1.045833436s 1.045882138s 1.046132148s 1.046479962s 1.046481373s 1.046596607s 1.054619775s 1.056482509s 1.056966513s 1.057746983s 1.058024077s 1.058969749s 1.061019553s 1.062571588s 1.064470819s 1.067749276s 1.069752984s 1.073592345s 1.07647544s 1.077591681s 1.078263123s 
1.078618439s 1.079250775s 1.081442571s 1.083140399s 1.088262488s 1.088546738s 1.088702924s 1.091955342s 1.094868441s 1.098836271s 1.100751223s 1.103443568s 1.108018971s 1.111310573s 1.111796086s 1.112455897s 1.128122444s 1.131491132s 1.140712289s 1.144166818s 1.14698749s 1.153319865s 1.158501527s 1.158867464s 1.165991631s 1.166931399s 1.170244045s 1.172462637s 1.172467708s 1.176126928s 1.178605379s 1.180014375s 1.925070803s 2.019539556s 2.049038593s 2.071536774s 2.076254551s 2.112360105s 2.127554577s 2.149158079s 2.164225492s 2.183856726s 2.20154379s 2.215631312s 2.223708047s 2.226827113s 2.233454768s 2.32770765s 2.337922257s 2.340730171s 2.358089075s 2.358559366s 2.383356088s 2.384631366s 2.388439438s 2.395716226s 2.402324786s 2.406606634s 2.423839199s 2.43156146s 2.486667711s 2.499667657s] Jul 5 10:48:49.460: INFO: 50 %ile: 1.020621846s Jul 5 10:48:49.460: INFO: 90 %ile: 2.20154379s Jul 5 10:48:49.460: INFO: 99 %ile: 2.486667711s Jul 5 10:48:49.460: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 5 10:48:49.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-kzng2" for this suite. 
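The latency test above sorts its 200 endpoint-propagation samples and reports the 50th, 90th, and 99th percentiles. As a minimal sketch (simplified from, and not identical to, the e2e framework's Go implementation), nearest-rank percentile selection over a sorted sample list looks like this; the sample values below are a hypothetical subset, not the full run:

```python
def percentile(sorted_samples, p):
    # Nearest-rank selection: take the element p% of the way
    # through the sorted list, clamped to the last index.
    idx = int(len(sorted_samples) * p / 100)
    return sorted_samples[min(idx, len(sorted_samples) - 1)]

# A few latencies (in seconds) echoing the shape of the run above.
samples = sorted([0.083, 0.119, 0.223, 1.020, 2.201, 2.486])
print(percentile(samples, 50))  # median-ish sample
print(percentile(samples, 90))  # tail sample
```

The conformance check then asserts that these percentiles stay below fixed thresholds ("should not be very high"); the exact threshold values live in the test source, not in this log.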
Jul 5 10:49:19.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 10:49:19.557: INFO: namespace: e2e-tests-svc-latency-kzng2, resource: bindings, ignored listing per whitelist Jul 5 10:49:19.571: INFO: namespace e2e-tests-svc-latency-kzng2 deletion completed in 30.104644512s • [SLOW TEST:50.418 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 10:49:19.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-s7gv6 STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 5 10:49:19.654: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jul 5 10:49:45.846: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.48:8080/dial?request=hostName&protocol=http&host=10.244.1.47&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-s7gv6 
PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 5 10:49:45.846: INFO: >>> kubeConfig: /root/.kube/config
I0705 10:49:45.885795 6 log.go:172] (0xc0009b73f0) (0xc001071360) Create stream
I0705 10:49:45.885840 6 log.go:172] (0xc0009b73f0) (0xc001071360) Stream added, broadcasting: 1
I0705 10:49:45.887950 6 log.go:172] (0xc0009b73f0) Reply frame received for 1
I0705 10:49:45.888011 6 log.go:172] (0xc0009b73f0) (0xc001071400) Create stream
I0705 10:49:45.888027 6 log.go:172] (0xc0009b73f0) (0xc001071400) Stream added, broadcasting: 3
I0705 10:49:45.889337 6 log.go:172] (0xc0009b73f0) Reply frame received for 3
I0705 10:49:45.889388 6 log.go:172] (0xc0009b73f0) (0xc0010714a0) Create stream
I0705 10:49:45.889421 6 log.go:172] (0xc0009b73f0) (0xc0010714a0) Stream added, broadcasting: 5
I0705 10:49:45.890702 6 log.go:172] (0xc0009b73f0) Reply frame received for 5
I0705 10:49:45.965318 6 log.go:172] (0xc0009b73f0) Data frame received for 3
I0705 10:49:45.965341 6 log.go:172] (0xc001071400) (3) Data frame handling
I0705 10:49:45.965357 6 log.go:172] (0xc001071400) (3) Data frame sent
I0705 10:49:45.966108 6 log.go:172] (0xc0009b73f0) Data frame received for 3
I0705 10:49:45.966153 6 log.go:172] (0xc001071400) (3) Data frame handling
I0705 10:49:45.966198 6 log.go:172] (0xc0009b73f0) Data frame received for 5
I0705 10:49:45.966227 6 log.go:172] (0xc0010714a0) (5) Data frame handling
I0705 10:49:45.967563 6 log.go:172] (0xc0009b73f0) Data frame received for 1
I0705 10:49:45.967602 6 log.go:172] (0xc001071360) (1) Data frame handling
I0705 10:49:45.967621 6 log.go:172] (0xc001071360) (1) Data frame sent
I0705 10:49:45.967634 6 log.go:172] (0xc0009b73f0) (0xc001071360) Stream removed, broadcasting: 1
I0705 10:49:45.967657 6 log.go:172] (0xc0009b73f0) Go away received
I0705 10:49:45.967821 6 log.go:172] (0xc0009b73f0) (0xc001071360) Stream removed, broadcasting: 1
I0705 10:49:45.967837 6 log.go:172] (0xc0009b73f0) (0xc001071400) Stream removed, broadcasting: 3
I0705 10:49:45.967844 6 log.go:172] (0xc0009b73f0) (0xc0010714a0) Stream removed, broadcasting: 5
Jul 5 10:49:45.967: INFO: Waiting for endpoints: map[]
Jul 5 10:49:45.971: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.48:8080/dial?request=hostName&protocol=http&host=10.244.2.109&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-s7gv6 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 5 10:49:45.971: INFO: >>> kubeConfig: /root/.kube/config
I0705 10:49:46.001266 6 log.go:172] (0xc0000eabb0) (0xc001b40fa0) Create stream
I0705 10:49:46.001296 6 log.go:172] (0xc0000eabb0) (0xc001b40fa0) Stream added, broadcasting: 1
I0705 10:49:46.004012 6 log.go:172] (0xc0000eabb0) Reply frame received for 1
I0705 10:49:46.004066 6 log.go:172] (0xc0000eabb0) (0xc001b261e0) Create stream
I0705 10:49:46.004104 6 log.go:172] (0xc0000eabb0) (0xc001b261e0) Stream added, broadcasting: 3
I0705 10:49:46.005023 6 log.go:172] (0xc0000eabb0) Reply frame received for 3
I0705 10:49:46.005066 6 log.go:172] (0xc0000eabb0) (0xc001ac7180) Create stream
I0705 10:49:46.005078 6 log.go:172] (0xc0000eabb0) (0xc001ac7180) Stream added, broadcasting: 5
I0705 10:49:46.006039 6 log.go:172] (0xc0000eabb0) Reply frame received for 5
I0705 10:49:46.070130 6 log.go:172] (0xc0000eabb0) Data frame received for 3
I0705 10:49:46.070176 6 log.go:172] (0xc001b261e0) (3) Data frame handling
I0705 10:49:46.070208 6 log.go:172] (0xc001b261e0) (3) Data frame sent
I0705 10:49:46.070231 6 log.go:172] (0xc0000eabb0) Data frame received for 3
I0705 10:49:46.070251 6 log.go:172] (0xc001b261e0) (3) Data frame handling
I0705 10:49:46.070346 6 log.go:172] (0xc0000eabb0) Data frame received for 5
I0705 10:49:46.070373 6 log.go:172] (0xc001ac7180) (5) Data frame handling
I0705 10:49:46.072305 6 log.go:172] (0xc0000eabb0) Data frame received for 1
I0705 10:49:46.072331 6 log.go:172] (0xc001b40fa0) (1) Data frame handling
I0705 10:49:46.072340 6 log.go:172] (0xc001b40fa0) (1) Data frame sent
I0705 10:49:46.072350 6 log.go:172] (0xc0000eabb0) (0xc001b40fa0) Stream removed, broadcasting: 1
I0705 10:49:46.072377 6 log.go:172] (0xc0000eabb0) Go away received
I0705 10:49:46.072469 6 log.go:172] (0xc0000eabb0) (0xc001b40fa0) Stream removed, broadcasting: 1
I0705 10:49:46.072491 6 log.go:172] (0xc0000eabb0) (0xc001b261e0) Stream removed, broadcasting: 3
I0705 10:49:46.072505 6 log.go:172] (0xc0000eabb0) (0xc001ac7180) Stream removed, broadcasting: 5
Jul 5 10:49:46.072: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 10:49:46.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-s7gv6" for this suite.
Jul 5 10:50:08.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 10:50:08.113: INFO: namespace: e2e-tests-pod-network-test-s7gv6, resource: bindings, ignored listing per whitelist
Jul 5 10:50:08.170: INFO: namespace e2e-tests-pod-network-test-s7gv6 deletion completed in 22.092632665s
• [SLOW TEST:48.598 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 10:50:08.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-4ae376c6-bead-11ea-9e48-0242ac110017
STEP: Creating a pod to test consume configMaps
Jul 5 10:50:08.346: INFO: Waiting up to 5m0s for pod "pod-configmaps-4ae54b25-bead-11ea-9e48-0242ac110017" in namespace "e2e-tests-configmap-n9cf6" to be "success or failure"
Jul 5 10:50:08.349: INFO: Pod "pod-configmaps-4ae54b25-bead-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.749565ms
Jul 5 10:50:10.353: INFO: Pod "pod-configmaps-4ae54b25-bead-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006853572s
Jul 5 10:50:12.357: INFO: Pod "pod-configmaps-4ae54b25-bead-11ea-9e48-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.011157518s
Jul 5 10:50:14.362: INFO: Pod "pod-configmaps-4ae54b25-bead-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015897198s
STEP: Saw pod success
Jul 5 10:50:14.362: INFO: Pod "pod-configmaps-4ae54b25-bead-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul 5 10:50:14.365: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-4ae54b25-bead-11ea-9e48-0242ac110017 container configmap-volume-test:
STEP: delete the pod
Jul 5 10:50:14.388: INFO: Waiting for pod pod-configmaps-4ae54b25-bead-11ea-9e48-0242ac110017 to disappear
Jul 5 10:50:14.391: INFO: Pod pod-configmaps-4ae54b25-bead-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 10:50:14.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-n9cf6" for this suite.
Jul 5 10:50:20.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 10:50:20.434: INFO: namespace: e2e-tests-configmap-n9cf6, resource: bindings, ignored listing per whitelist
Jul 5 10:50:20.467: INFO: namespace e2e-tests-configmap-n9cf6 deletion completed in 6.071880821s
• [SLOW TEST:12.297 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 10:50:20.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 5 10:50:20.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-cvtbl'
Jul 5 10:50:24.653: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 5 10:50:24.653: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jul 5 10:50:24.656: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jul 5 10:50:24.666: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jul 5 10:50:24.674: INFO: scanned /root for discovery docs:
Jul 5 10:50:24.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-cvtbl'
Jul 5 10:50:41.698: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jul 5 10:50:41.698: INFO: stdout: "Created e2e-test-nginx-rc-dfc57474df5726fa029113564875511a\nScaling up e2e-test-nginx-rc-dfc57474df5726fa029113564875511a from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-dfc57474df5726fa029113564875511a up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-dfc57474df5726fa029113564875511a to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Jul 5 10:50:41.698: INFO: stdout: "Created e2e-test-nginx-rc-dfc57474df5726fa029113564875511a\nScaling up e2e-test-nginx-rc-dfc57474df5726fa029113564875511a from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-dfc57474df5726fa029113564875511a up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-dfc57474df5726fa029113564875511a to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jul 5 10:50:41.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-cvtbl'
Jul 5 10:50:41.849: INFO: stderr: ""
Jul 5 10:50:41.849: INFO: stdout: "e2e-test-nginx-rc-dfc57474df5726fa029113564875511a-ltm5s e2e-test-nginx-rc-g7hpr "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jul 5 10:50:46.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-cvtbl'
Jul 5 10:50:46.956: INFO: stderr: ""
Jul 5 10:50:46.956: INFO: stdout: "e2e-test-nginx-rc-dfc57474df5726fa029113564875511a-ltm5s "
Jul 5 10:50:46.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-dfc57474df5726fa029113564875511a-ltm5s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cvtbl'
Jul 5 10:50:47.049: INFO: stderr: ""
Jul 5 10:50:47.049: INFO: stdout: "true"
Jul 5 10:50:47.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-dfc57474df5726fa029113564875511a-ltm5s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cvtbl'
Jul 5 10:50:47.135: INFO: stderr: ""
Jul 5 10:50:47.135: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jul 5 10:50:47.135: INFO: e2e-test-nginx-rc-dfc57474df5726fa029113564875511a-ltm5s is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Jul 5 10:50:47.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-cvtbl'
Jul 5 10:50:47.251: INFO: stderr: ""
Jul 5 10:50:47.251: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 10:50:47.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-cvtbl" for this suite.
Jul 5 10:51:09.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 10:51:09.338: INFO: namespace: e2e-tests-kubectl-cvtbl, resource: bindings, ignored listing per whitelist
Jul 5 10:51:09.377: INFO: namespace e2e-tests-kubectl-cvtbl deletion completed in 22.121996023s
• [SLOW TEST:48.909 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 10:51:09.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jul 5 10:51:19.630: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 5 10:51:19.639: INFO: Pod pod-with-prestop-http-hook still exists
Jul 5 10:51:21.639: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 5 10:51:21.643: INFO: Pod pod-with-prestop-http-hook still exists
Jul 5 10:51:23.639: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 5 10:51:23.643: INFO: Pod pod-with-prestop-http-hook still exists
Jul 5 10:51:25.639: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 5 10:51:25.644: INFO: Pod pod-with-prestop-http-hook still exists
Jul 5 10:51:27.639: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 5 10:51:27.644: INFO: Pod pod-with-prestop-http-hook still exists
Jul 5 10:51:29.639: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 5 10:51:29.643: INFO: Pod pod-with-prestop-http-hook still exists
Jul 5 10:51:31.639: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 5 10:51:31.643: INFO: Pod pod-with-prestop-http-hook still exists
Jul 5 10:51:33.639: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 5 10:51:33.656: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 10:51:33.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-xfj9s" for this suite.
Jul 5 10:51:55.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 10:51:55.715: INFO: namespace: e2e-tests-container-lifecycle-hook-xfj9s, resource: bindings, ignored listing per whitelist
Jul 5 10:51:55.768: INFO: namespace e2e-tests-container-lifecycle-hook-xfj9s deletion completed in 22.101309952s
• [SLOW TEST:46.391 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 10:51:55.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Jul 5 10:52:00.113: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-8b14c9e9-bead-11ea-9e48-0242ac110017", GenerateName:"", Namespace:"e2e-tests-pods-fcx8l", SelfLink:"/api/v1/namespaces/e2e-tests-pods-fcx8l/pods/pod-submit-remove-8b14c9e9-bead-11ea-9e48-0242ac110017", UID:"8b193b35-bead-11ea-a300-0242ac110004", ResourceVersion:"219717", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729543116, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"time":"23661672", "name":"foo"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-ph8m8", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001bde200), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ph8m8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0010d46b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0021563c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0010d4730)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0010d4750)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0010d4758), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0010d475c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729543116, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729543119, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729543119, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729543116, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.2.112", StartTime:(*v1.Time)(0xc000cf9a20), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000cf9a60), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://b87acbf11217a9973627c0350347094d5eb8914c87e358759ffa31f5efe6565c"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jul 5 10:52:05.124: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 10:52:05.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-fcx8l" for this suite.
Jul 5 10:52:11.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 10:52:11.221: INFO: namespace: e2e-tests-pods-fcx8l, resource: bindings, ignored listing per whitelist
Jul 5 10:52:11.284: INFO: namespace e2e-tests-pods-fcx8l deletion completed in 6.153030404s
• [SLOW TEST:15.516 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 10:52:11.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jul 5 10:52:11.796: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 5 10:52:11.811: INFO: Waiting for terminating namespaces to be deleted...
Jul 5 10:52:11.814: INFO: Logging pods the kubelet thinks is on node hunter-worker before test
Jul 5 10:52:11.818: INFO: kindnet-mcn92 from kube-system started at 2020-07-04 07:47:46 +0000 UTC (1 container statuses recorded)
Jul 5 10:52:11.818: INFO: Container kindnet-cni ready: true, restart count 0
Jul 5 10:52:11.818: INFO: kube-proxy-cqbm8 from kube-system started at 2020-07-04 07:47:44 +0000 UTC (1 container statuses recorded)
Jul 5 10:52:11.818: INFO: Container kube-proxy ready: true, restart count 0
Jul 5 10:52:11.818: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test
Jul 5 10:52:11.824: INFO: local-path-provisioner-674595c7-cvgpb from local-path-storage started at 2020-07-04 07:48:14 +0000 UTC (1 container statuses recorded)
Jul 5 10:52:11.824: INFO: Container local-path-provisioner ready: true, restart count 0
Jul 5 10:52:11.824: INFO: coredns-54ff9cd656-mgg2q from kube-system started at 2020-07-04 07:48:14 +0000 UTC (1 container statuses recorded)
Jul 5 10:52:11.824: INFO: Container coredns ready: true, restart count 0
Jul 5 10:52:11.824: INFO: coredns-54ff9cd656-l7q92 from kube-system started at 2020-07-04 07:48:14 +0000 UTC (1 container statuses recorded)
Jul 5 10:52:11.824: INFO: Container coredns ready: true, restart count 0
Jul 5 10:52:11.824: INFO: kube-proxy-52vr2 from kube-system started at 2020-07-04 07:47:44 +0000 UTC (1 container statuses recorded)
Jul 5 10:52:11.824: INFO: Container kube-proxy ready: true, restart count 0
Jul 5 10:52:11.824: INFO: kindnet-rll2b from kube-system started at 2020-07-04 07:47:46 +0000 UTC (1 container statuses recorded)
Jul 5 10:52:11.824: INFO: Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.161ed5be6457aa76], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 10:52:12.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-4ddsp" for this suite.
Jul 5 10:52:18.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 10:52:18.989: INFO: namespace: e2e-tests-sched-pred-4ddsp, resource: bindings, ignored listing per whitelist
Jul 5 10:52:19.014: INFO: namespace e2e-tests-sched-pred-4ddsp deletion completed in 6.14859818s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:7.730 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 10:52:19.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 5 10:52:19.114: INFO: Waiting up to 5m0s for pod "downwardapi-volume-98d74c80-bead-11ea-9e48-0242ac110017" in namespace "e2e-tests-downward-api-xtq25" to be "success or failure"
Jul 5 10:52:19.127: INFO: Pod "downwardapi-volume-98d74c80-bead-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 12.599439ms
Jul 5 10:52:21.253: INFO: Pod "downwardapi-volume-98d74c80-bead-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13845446s
Jul 5 10:52:23.256: INFO: Pod "downwardapi-volume-98d74c80-bead-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.141900483s
STEP: Saw pod success
Jul 5 10:52:23.256: INFO: Pod "downwardapi-volume-98d74c80-bead-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul 5 10:52:23.259: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-98d74c80-bead-11ea-9e48-0242ac110017 container client-container:
STEP: delete the pod
Jul 5 10:52:23.411: INFO: Waiting for pod downwardapi-volume-98d74c80-bead-11ea-9e48-0242ac110017 to disappear
Jul 5 10:52:23.448: INFO: Pod downwardapi-volume-98d74c80-bead-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 10:52:23.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-xtq25" for this suite.
Jul 5 10:52:29.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 10:52:29.521: INFO: namespace: e2e-tests-downward-api-xtq25, resource: bindings, ignored listing per whitelist
Jul 5 10:52:29.559: INFO: namespace e2e-tests-downward-api-xtq25 deletion completed in 6.106029654s
• [SLOW TEST:10.545 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 10:52:29.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 5 10:52:29.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Jul 5 10:52:29.748: INFO: stderr: ""
Jul 5 10:52:29.748: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-07-05T09:49:20Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Jul 5 10:52:29.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-9h7t6'
Jul 5 10:52:30.044: INFO: stderr: ""
Jul 5 10:52:30.044: INFO: stdout: "replicationcontroller/redis-master created\n"
Jul 5 10:52:30.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-9h7t6'
Jul 5 10:52:30.483: INFO: stderr: ""
Jul 5 10:52:30.483: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Jul 5 10:52:31.490: INFO: Selector matched 1 pods for map[app:redis]
Jul 5 10:52:31.490: INFO: Found 0 / 1
Jul 5 10:52:32.983: INFO: Selector matched 1 pods for map[app:redis]
Jul 5 10:52:32.983: INFO: Found 0 / 1
Jul 5 10:52:33.552: INFO: Selector matched 1 pods for map[app:redis]
Jul 5 10:52:33.552: INFO: Found 0 / 1
Jul 5 10:52:34.516: INFO: Selector matched 1 pods for map[app:redis]
Jul 5 10:52:34.516: INFO: Found 0 / 1
Jul 5 10:52:35.534: INFO: Selector matched 1 pods for map[app:redis]
Jul 5 10:52:35.534: INFO: Found 0 / 1
Jul 5 10:52:36.504: INFO: Selector matched 1 pods for map[app:redis]
Jul 5 10:52:36.504: INFO: Found 1 / 1
Jul 5 10:52:36.504: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jul 5 10:52:36.507: INFO: Selector matched 1 pods for map[app:redis]
Jul 5 10:52:36.507: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jul 5 10:52:36.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-4q9t8 --namespace=e2e-tests-kubectl-9h7t6'
Jul 5 10:52:36.619: INFO: stderr: ""
Jul 5 10:52:36.619: INFO: stdout: "Name: redis-master-4q9t8\nNamespace: e2e-tests-kubectl-9h7t6\nPriority: 0\nPriorityClassName: \nNode: hunter-worker2/172.17.0.2\nStart Time: Sun, 05 Jul 2020 10:52:30 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.52\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://1ffcb9be780a5590676b1da7874168fc93c0e09059de3dd92a8ba63f5914c222\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sun, 05 Jul 2020 10:52:34 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-h7znm (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-h7znm:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-h7znm\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 6s default-scheduler Successfully assigned e2e-tests-kubectl-9h7t6/redis-master-4q9t8 to hunter-worker2\n Normal Pulled 4s kubelet, hunter-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, hunter-worker2 Created container\n Normal Started 2s kubelet, hunter-worker2 Started container\n"
Jul 5 10:52:36.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-9h7t6'
Jul 5 10:52:36.738: INFO: stderr: ""
Jul 5 10:52:36.738: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-9h7t6\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 6s replication-controller Created pod: redis-master-4q9t8\n"
Jul 5 10:52:36.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-9h7t6'
Jul 5 10:52:36.847: INFO: stderr: ""
Jul 5 10:52:36.847: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-9h7t6\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.100.112.172\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.52:6379\nSession Affinity: None\nEvents: \n"
Jul 5 10:52:36.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane'
Jul 5 10:52:36.986: INFO: stderr: ""
Jul 5 10:52:36.986: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 04 Jul 2020 07:47:23 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sun, 05 Jul 2020 10:52:35 +0000 Sat, 04 Jul 2020 07:47:23 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sun, 05 Jul 2020 10:52:35 +0000 Sat, 04 Jul 2020 07:47:23 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sun, 05 Jul 2020 10:52:35 +0000 Sat, 04 Jul 2020 07:47:23 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sun, 05 Jul 2020 10:52:35 +0000 Sat, 04 Jul 2020 07:48:14 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.4\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 268105a9121e48d584b7113fd8a9e3a1\n System UUID: 0e585f84-1906-441c-90cd-c4ab5eda753d\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (6 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 27h\n kube-system kindnet-9q4t6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 27h\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 27h\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 27h\n kube-system kube-proxy-dmvsw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 27h\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 27h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n"
Jul 5 10:52:36.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-9h7t6'
Jul 5 10:52:37.089: INFO: stderr: ""
Jul 5 10:52:37.089: INFO: stdout: "Name: e2e-tests-kubectl-9h7t6\nLabels: e2e-framework=kubectl\n e2e-run=d2f9b78f-beac-11ea-9e48-0242ac110017\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 10:52:37.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9h7t6" for this suite.
Jul 5 10:53:05.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 10:53:05.322: INFO: namespace: e2e-tests-kubectl-9h7t6, resource: bindings, ignored listing per whitelist
Jul 5 10:53:05.326: INFO: namespace e2e-tests-kubectl-9h7t6 deletion completed in 28.234365616s
• [SLOW TEST:35.767 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose
  should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 10:53:05.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jul 5 10:53:05.441: INFO: namespace e2e-tests-kubectl-7ns7m
Jul 5 10:53:05.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7ns7m'
Jul 5 10:53:05.707: INFO: stderr: ""
Jul 5 10:53:05.707: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jul 5 10:53:06.726: INFO: Selector matched 1 pods for map[app:redis]
Jul 5 10:53:06.726: INFO: Found 0 / 1
Jul 5 10:53:07.712: INFO: Selector matched 1 pods for map[app:redis]
Jul 5 10:53:07.712: INFO: Found 0 / 1
Jul 5 10:53:08.712: INFO: Selector matched 1 pods for map[app:redis]
Jul 5 10:53:08.712: INFO: Found 0 / 1
Jul 5 10:53:09.750: INFO: Selector matched 1 pods for map[app:redis]
Jul 5 10:53:09.750: INFO: Found 1 / 1
Jul 5 10:53:09.750: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jul 5 10:53:09.753: INFO: Selector matched 1 pods for map[app:redis]
Jul 5 10:53:09.753: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jul 5 10:53:09.753: INFO: wait on redis-master startup in e2e-tests-kubectl-7ns7m
Jul 5 10:53:09.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-lzfpl redis-master --namespace=e2e-tests-kubectl-7ns7m'
Jul 5 10:53:09.896: INFO: stderr: ""
Jul 5 10:53:09.896: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 05 Jul 10:53:08.426 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 05 Jul 10:53:08.426 # Server started, Redis version 3.2.12\n1:M 05 Jul 10:53:08.426 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 05 Jul 10:53:08.426 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jul 5 10:53:09.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-7ns7m'
Jul 5 10:53:10.040: INFO: stderr: ""
Jul 5 10:53:10.040: INFO: stdout: "service/rm2 exposed\n"
Jul 5 10:53:10.057: INFO: Service rm2 in namespace e2e-tests-kubectl-7ns7m found.
STEP: exposing service
Jul 5 10:53:12.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-7ns7m'
Jul 5 10:53:12.207: INFO: stderr: ""
Jul 5 10:53:12.207: INFO: stdout: "service/rm3 exposed\n"
Jul 5 10:53:12.216: INFO: Service rm3 in namespace e2e-tests-kubectl-7ns7m found.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 10:53:14.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7ns7m" for this suite.
Jul 5 10:53:38.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 10:53:38.326: INFO: namespace: e2e-tests-kubectl-7ns7m, resource: bindings, ignored listing per whitelist
Jul 5 10:53:38.353: INFO: namespace e2e-tests-kubectl-7ns7m deletion completed in 24.125170482s
• [SLOW TEST:33.027 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] DNS
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 10:53:38.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-wcjlz.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-wcjlz.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-wcjlz.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-wcjlz.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-wcjlz.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-wcjlz.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 5 10:53:46.623: INFO: DNS probes using e2e-tests-dns-wcjlz/dns-test-c8257d8a-bead-11ea-9e48-0242ac110017 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 10:53:46.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-wcjlz" for this suite.
Jul 5 10:53:53.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 10:53:53.102: INFO: namespace: e2e-tests-dns-wcjlz, resource: bindings, ignored listing per whitelist
Jul 5 10:53:53.160: INFO: namespace e2e-tests-dns-wcjlz deletion completed in 6.173536485s
• [SLOW TEST:14.807 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 10:53:53.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-wc9q7
Jul 5 10:53:57.284: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-wc9q7
STEP: checking the pod's current state and verifying that restartCount is present
Jul 5 10:53:57.287: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 10:57:58.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-wc9q7" for this suite.
Jul 5 10:58:04.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 10:58:04.442: INFO: namespace: e2e-tests-container-probe-wc9q7, resource: bindings, ignored listing per whitelist
Jul 5 10:58:04.485: INFO: namespace e2e-tests-container-probe-wc9q7 deletion completed in 6.160919684s
• [SLOW TEST:251.324 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
  should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 10:58:04.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jul 5 10:58:04.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xv6tz'
Jul 5 10:58:04.910: INFO: stderr: ""
Jul 5 10:58:04.910: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 5 10:58:04.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xv6tz'
Jul 5 10:58:05.037: INFO: stderr: ""
Jul 5 10:58:05.037: INFO: stdout: "update-demo-nautilus-8zhw8 update-demo-nautilus-ckdml "
Jul 5 10:58:05.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8zhw8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xv6tz'
Jul 5 10:58:05.124: INFO: stderr: ""
Jul 5 10:58:05.124: INFO: stdout: ""
Jul 5 10:58:05.124: INFO: update-demo-nautilus-8zhw8 is created but not running
Jul 5 10:58:10.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xv6tz'
Jul 5 10:58:10.232: INFO: stderr: ""
Jul 5 10:58:10.232: INFO: stdout: "update-demo-nautilus-8zhw8 update-demo-nautilus-ckdml "
Jul 5 10:58:10.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8zhw8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xv6tz'
Jul 5 10:58:10.333: INFO: stderr: ""
Jul 5 10:58:10.333: INFO: stdout: "true"
Jul 5 10:58:10.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8zhw8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xv6tz'
Jul 5 10:58:10.442: INFO: stderr: ""
Jul 5 10:58:10.442: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 5 10:58:10.442: INFO: validating pod update-demo-nautilus-8zhw8
Jul 5 10:58:10.447: INFO: got data: { "image": "nautilus.jpg" }
Jul 5 10:58:10.447: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 5 10:58:10.447: INFO: update-demo-nautilus-8zhw8 is verified up and running
Jul 5 10:58:10.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ckdml -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xv6tz'
Jul 5 10:58:10.541: INFO: stderr: ""
Jul 5 10:58:10.541: INFO: stdout: "true"
Jul 5 10:58:10.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ckdml -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xv6tz'
Jul 5 10:58:10.645: INFO: stderr: ""
Jul 5 10:58:10.646: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 5 10:58:10.646: INFO: validating pod update-demo-nautilus-ckdml
Jul 5 10:58:10.650: INFO: got data: { "image": "nautilus.jpg" }
Jul 5 10:58:10.650: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 5 10:58:10.650: INFO: update-demo-nautilus-ckdml is verified up and running
STEP: using delete to clean up resources
Jul 5 10:58:10.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-xv6tz'
Jul 5 10:58:10.767: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 5 10:58:10.767: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jul 5 10:58:10.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-xv6tz'
Jul 5 10:58:10.875: INFO: stderr: "No resources found.\n"
Jul 5 10:58:10.875: INFO: stdout: ""
Jul 5 10:58:10.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-xv6tz -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 5 10:58:11.053: INFO: stderr: ""
Jul 5 10:58:11.054: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 10:58:11.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xv6tz" for this suite.
Jul 5 10:58:33.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 10:58:33.415: INFO: namespace: e2e-tests-kubectl-xv6tz, resource: bindings, ignored listing per whitelist Jul 5 10:58:33.452: INFO: namespace e2e-tests-kubectl-xv6tz deletion completed in 22.236774849s • [SLOW TEST:28.967 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 10:58:33.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-2m5j2/configmap-test-7807001e-beae-11ea-9e48-0242ac110017 STEP: Creating a pod to test consume configMaps Jul 5 10:58:33.589: INFO: Waiting up to 5m0s for pod "pod-configmaps-780ada4b-beae-11ea-9e48-0242ac110017" in namespace "e2e-tests-configmap-2m5j2" to be "success or failure" Jul 5 10:58:33.607: INFO: Pod "pod-configmaps-780ada4b-beae-11ea-9e48-0242ac110017": Phase="Pending", Reason="", 
readiness=false. Elapsed: 18.222739ms Jul 5 10:58:35.611: INFO: Pod "pod-configmaps-780ada4b-beae-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021836781s Jul 5 10:58:37.615: INFO: Pod "pod-configmaps-780ada4b-beae-11ea-9e48-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.025924232s Jul 5 10:58:39.619: INFO: Pod "pod-configmaps-780ada4b-beae-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029907677s STEP: Saw pod success Jul 5 10:58:39.619: INFO: Pod "pod-configmaps-780ada4b-beae-11ea-9e48-0242ac110017" satisfied condition "success or failure" Jul 5 10:58:39.622: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-780ada4b-beae-11ea-9e48-0242ac110017 container env-test: STEP: delete the pod Jul 5 10:58:39.664: INFO: Waiting for pod pod-configmaps-780ada4b-beae-11ea-9e48-0242ac110017 to disappear Jul 5 10:58:39.704: INFO: Pod pod-configmaps-780ada4b-beae-11ea-9e48-0242ac110017 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 5 10:58:39.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-2m5j2" for this suite. 
Jul 5 10:58:45.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 10:58:45.734: INFO: namespace: e2e-tests-configmap-2m5j2, resource: bindings, ignored listing per whitelist
Jul 5 10:58:45.791: INFO: namespace e2e-tests-configmap-2m5j2 deletion completed in 6.082650691s
• [SLOW TEST:12.339 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 10:58:45.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jul 5 10:58:52.978: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 10:58:54.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-8g4s6" for this suite.
Jul 5 10:59:17.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 10:59:17.360: INFO: namespace: e2e-tests-replicaset-8g4s6, resource: bindings, ignored listing per whitelist
Jul 5 10:59:17.420: INFO: namespace e2e-tests-replicaset-8g4s6 deletion completed in 22.86684495s
• [SLOW TEST:31.629 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 10:59:17.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-vgdh8/configmap-test-9239f973-beae-11ea-9e48-0242ac110017
STEP: Creating a pod to test consume configMaps
Jul 5 10:59:17.581: INFO: Waiting up to 5m0s for pod "pod-configmaps-92441a2c-beae-11ea-9e48-0242ac110017" in namespace "e2e-tests-configmap-vgdh8" to be "success or failure"
Jul 5 10:59:17.598: INFO: Pod "pod-configmaps-92441a2c-beae-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 17.073486ms
Jul 5 10:59:19.602: INFO: Pod "pod-configmaps-92441a2c-beae-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021215077s
Jul 5 10:59:21.607: INFO: Pod "pod-configmaps-92441a2c-beae-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025811541s
STEP: Saw pod success
Jul 5 10:59:21.607: INFO: Pod "pod-configmaps-92441a2c-beae-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul 5 10:59:21.610: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-92441a2c-beae-11ea-9e48-0242ac110017 container env-test:
STEP: delete the pod
Jul 5 10:59:21.631: INFO: Waiting for pod pod-configmaps-92441a2c-beae-11ea-9e48-0242ac110017 to disappear
Jul 5 10:59:21.669: INFO: Pod pod-configmaps-92441a2c-beae-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 10:59:21.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-vgdh8" for this suite.
Jul 5 10:59:27.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 10:59:27.717: INFO: namespace: e2e-tests-configmap-vgdh8, resource: bindings, ignored listing per whitelist
Jul 5 10:59:27.770: INFO: namespace e2e-tests-configmap-vgdh8 deletion completed in 6.097142496s
• [SLOW TEST:10.350 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 10:59:27.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-987144b9-beae-11ea-9e48-0242ac110017
Jul 5 10:59:28.000: INFO: Pod name my-hostname-basic-987144b9-beae-11ea-9e48-0242ac110017: Found 0 pods out of 1
Jul 5 10:59:33.008: INFO: Pod name my-hostname-basic-987144b9-beae-11ea-9e48-0242ac110017: Found 1 pods out of 1
Jul 5 10:59:33.008: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-987144b9-beae-11ea-9e48-0242ac110017" are running
Jul 5 10:59:33.015: INFO: Pod "my-hostname-basic-987144b9-beae-11ea-9e48-0242ac110017-cfwdq" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-05 10:59:28 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-05 10:59:30 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-05 10:59:30 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-05 10:59:28 +0000 UTC Reason: Message:}])
Jul 5 10:59:33.015: INFO: Trying to dial the pod
Jul 5 10:59:38.027: INFO: Controller my-hostname-basic-987144b9-beae-11ea-9e48-0242ac110017: Got expected result from replica 1 [my-hostname-basic-987144b9-beae-11ea-9e48-0242ac110017-cfwdq]: "my-hostname-basic-987144b9-beae-11ea-9e48-0242ac110017-cfwdq", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 10:59:38.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-s62v2" for this suite.
Jul 5 10:59:44.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 10:59:44.188: INFO: namespace: e2e-tests-replication-controller-s62v2, resource: bindings, ignored listing per whitelist
Jul 5 10:59:44.202: INFO: namespace e2e-tests-replication-controller-s62v2 deletion completed in 6.169988054s
• [SLOW TEST:16.432 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 10:59:44.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-a2388b6c-beae-11ea-9e48-0242ac110017
STEP: Creating a pod to test consume secrets
Jul 5 10:59:44.359: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a23904ad-beae-11ea-9e48-0242ac110017" in namespace "e2e-tests-projected-x7vbm" to be "success or failure"
Jul 5 10:59:44.382: INFO: Pod "pod-projected-secrets-a23904ad-beae-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 22.585707ms
Jul 5 10:59:46.670: INFO: Pod "pod-projected-secrets-a23904ad-beae-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.310651275s
Jul 5 10:59:48.674: INFO: Pod "pod-projected-secrets-a23904ad-beae-11ea-9e48-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.314731976s
Jul 5 10:59:50.679: INFO: Pod "pod-projected-secrets-a23904ad-beae-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.31936235s
STEP: Saw pod success
Jul 5 10:59:50.679: INFO: Pod "pod-projected-secrets-a23904ad-beae-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul 5 10:59:50.682: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-a23904ad-beae-11ea-9e48-0242ac110017 container projected-secret-volume-test:
STEP: delete the pod
Jul 5 10:59:50.857: INFO: Waiting for pod pod-projected-secrets-a23904ad-beae-11ea-9e48-0242ac110017 to disappear
Jul 5 10:59:50.948: INFO: Pod pod-projected-secrets-a23904ad-beae-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 10:59:50.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-x7vbm" for this suite.
Jul 5 10:59:57.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 10:59:57.109: INFO: namespace: e2e-tests-projected-x7vbm, resource: bindings, ignored listing per whitelist
Jul 5 10:59:57.156: INFO: namespace e2e-tests-projected-x7vbm deletion completed in 6.202826004s
• [SLOW TEST:12.954 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 10:59:57.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 5 10:59:57.268: INFO: Creating ReplicaSet my-hostname-basic-a9ecfdf3-beae-11ea-9e48-0242ac110017
Jul 5 10:59:57.289: INFO: Pod name my-hostname-basic-a9ecfdf3-beae-11ea-9e48-0242ac110017: Found 0 pods out of 1
Jul 5 11:00:02.294: INFO: Pod name my-hostname-basic-a9ecfdf3-beae-11ea-9e48-0242ac110017: Found 1 pods out of 1
Jul 5 11:00:02.294: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-a9ecfdf3-beae-11ea-9e48-0242ac110017" is running
Jul 5 11:00:02.296: INFO: Pod "my-hostname-basic-a9ecfdf3-beae-11ea-9e48-0242ac110017-77tp6" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-05 10:59:57 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-05 10:59:59 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-05 10:59:59 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-05 10:59:57 +0000 UTC Reason: Message:}])
Jul 5 11:00:02.296: INFO: Trying to dial the pod
Jul 5 11:00:07.308: INFO: Controller my-hostname-basic-a9ecfdf3-beae-11ea-9e48-0242ac110017: Got expected result from replica 1 [my-hostname-basic-a9ecfdf3-beae-11ea-9e48-0242ac110017-77tp6]: "my-hostname-basic-a9ecfdf3-beae-11ea-9e48-0242ac110017-77tp6", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:00:07.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-p5mn2" for this suite.
Jul 5 11:00:13.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 11:00:13.381: INFO: namespace: e2e-tests-replicaset-p5mn2, resource: bindings, ignored listing per whitelist
Jul 5 11:00:13.454: INFO: namespace e2e-tests-replicaset-p5mn2 deletion completed in 6.142029814s
• [SLOW TEST:16.298 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 11:00:13.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-8f94
STEP: Creating a pod to test atomic-volume-subpath
Jul 5 11:00:13.685: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-8f94" in namespace "e2e-tests-subpath-97wcf" to be "success or failure"
Jul 5 11:00:13.693: INFO: Pod "pod-subpath-test-configmap-8f94": Phase="Pending", Reason="", readiness=false. Elapsed: 8.445031ms
Jul 5 11:00:15.698: INFO: Pod "pod-subpath-test-configmap-8f94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012825911s
Jul 5 11:00:17.702: INFO: Pod "pod-subpath-test-configmap-8f94": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017271029s
Jul 5 11:00:19.706: INFO: Pod "pod-subpath-test-configmap-8f94": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021462247s
Jul 5 11:00:21.711: INFO: Pod "pod-subpath-test-configmap-8f94": Phase="Running", Reason="", readiness=false. Elapsed: 8.025768862s
Jul 5 11:00:23.715: INFO: Pod "pod-subpath-test-configmap-8f94": Phase="Running", Reason="", readiness=false. Elapsed: 10.030057912s
Jul 5 11:00:25.719: INFO: Pod "pod-subpath-test-configmap-8f94": Phase="Running", Reason="", readiness=false. Elapsed: 12.034482053s
Jul 5 11:00:27.723: INFO: Pod "pod-subpath-test-configmap-8f94": Phase="Running", Reason="", readiness=false. Elapsed: 14.038525731s
Jul 5 11:00:29.727: INFO: Pod "pod-subpath-test-configmap-8f94": Phase="Running", Reason="", readiness=false. Elapsed: 16.042433615s
Jul 5 11:00:31.732: INFO: Pod "pod-subpath-test-configmap-8f94": Phase="Running", Reason="", readiness=false. Elapsed: 18.046818127s
Jul 5 11:00:33.736: INFO: Pod "pod-subpath-test-configmap-8f94": Phase="Running", Reason="", readiness=false. Elapsed: 20.05123049s
Jul 5 11:00:35.740: INFO: Pod "pod-subpath-test-configmap-8f94": Phase="Running", Reason="", readiness=false. Elapsed: 22.055418539s
Jul 5 11:00:37.744: INFO: Pod "pod-subpath-test-configmap-8f94": Phase="Running", Reason="", readiness=false. Elapsed: 24.059675405s
Jul 5 11:00:39.749: INFO: Pod "pod-subpath-test-configmap-8f94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.064384347s
STEP: Saw pod success
Jul 5 11:00:39.749: INFO: Pod "pod-subpath-test-configmap-8f94" satisfied condition "success or failure"
Jul 5 11:00:39.752: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-8f94 container test-container-subpath-configmap-8f94:
STEP: delete the pod
Jul 5 11:00:39.774: INFO: Waiting for pod pod-subpath-test-configmap-8f94 to disappear
Jul 5 11:00:39.778: INFO: Pod pod-subpath-test-configmap-8f94 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-8f94
Jul 5 11:00:39.778: INFO: Deleting pod "pod-subpath-test-configmap-8f94" in namespace "e2e-tests-subpath-97wcf"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:00:39.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-97wcf" for this suite.
Jul 5 11:00:45.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 11:00:45.868: INFO: namespace: e2e-tests-subpath-97wcf, resource: bindings, ignored listing per whitelist
Jul 5 11:00:45.913: INFO: namespace e2e-tests-subpath-97wcf deletion completed in 6.130456777s
• [SLOW TEST:32.459 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 11:00:45.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Jul 5 11:00:45.983: INFO: Waiting up to 5m0s for pod "pod-c6f55218-beae-11ea-9e48-0242ac110017" in namespace "e2e-tests-emptydir-dlfqx" to be "success or failure"
Jul 5 11:00:46.002: INFO: Pod "pod-c6f55218-beae-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 18.477662ms
Jul 5 11:00:48.006: INFO: Pod "pod-c6f55218-beae-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022424267s
Jul 5 11:00:50.009: INFO: Pod "pod-c6f55218-beae-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026002559s
STEP: Saw pod success
Jul 5 11:00:50.010: INFO: Pod "pod-c6f55218-beae-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul 5 11:00:50.012: INFO: Trying to get logs from node hunter-worker pod pod-c6f55218-beae-11ea-9e48-0242ac110017 container test-container:
STEP: delete the pod
Jul 5 11:00:50.089: INFO: Waiting for pod pod-c6f55218-beae-11ea-9e48-0242ac110017 to disappear
Jul 5 11:00:50.092: INFO: Pod pod-c6f55218-beae-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:00:50.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-dlfqx" for this suite.
Jul 5 11:00:56.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 11:00:56.159: INFO: namespace: e2e-tests-emptydir-dlfqx, resource: bindings, ignored listing per whitelist
Jul 5 11:00:56.191: INFO: namespace e2e-tests-emptydir-dlfqx deletion completed in 6.095256192s
• [SLOW TEST:10.277 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
volume on default medium should have the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 11:00:56.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 5 11:00:56.328: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cd1dfb77-beae-11ea-9e48-0242ac110017" in namespace "e2e-tests-downward-api-fsdhs" to be "success or failure"
Jul 5 11:00:56.333: INFO: Pod "downwardapi-volume-cd1dfb77-beae-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.239206ms
Jul 5 11:00:58.336: INFO: Pod "downwardapi-volume-cd1dfb77-beae-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007954947s
Jul 5 11:01:00.341: INFO: Pod "downwardapi-volume-cd1dfb77-beae-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012679722s
STEP: Saw pod success
Jul 5 11:01:00.341: INFO: Pod "downwardapi-volume-cd1dfb77-beae-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul 5 11:01:00.344: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-cd1dfb77-beae-11ea-9e48-0242ac110017 container client-container:
STEP: delete the pod
Jul 5 11:01:00.699: INFO: Waiting for pod downwardapi-volume-cd1dfb77-beae-11ea-9e48-0242ac110017 to disappear
Jul 5 11:01:00.862: INFO: Pod downwardapi-volume-cd1dfb77-beae-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:01:00.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fsdhs" for this suite.
Jul 5 11:01:08.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 11:01:08.940: INFO: namespace: e2e-tests-downward-api-fsdhs, resource: bindings, ignored listing per whitelist
Jul 5 11:01:09.341: INFO: namespace e2e-tests-downward-api-fsdhs deletion completed in 8.47532198s
• [SLOW TEST:13.150 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 11:01:09.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jul 5 11:01:10.026: INFO: Waiting up to 5m0s for pod "downward-api-d5469ace-beae-11ea-9e48-0242ac110017" in namespace "e2e-tests-downward-api-tgzcb" to be "success or failure"
Jul 5 11:01:10.348: INFO: Pod "downward-api-d5469ace-beae-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 321.192785ms
Jul 5 11:01:12.504: INFO: Pod "downward-api-d5469ace-beae-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.477221582s
Jul 5 11:01:14.508: INFO: Pod "downward-api-d5469ace-beae-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.481408556s
Jul 5 11:01:16.511: INFO: Pod "downward-api-d5469ace-beae-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 6.484646584s
Jul 5 11:01:18.515: INFO: Pod "downward-api-d5469ace-beae-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.488258531s
STEP: Saw pod success
Jul 5 11:01:18.515: INFO: Pod "downward-api-d5469ace-beae-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul 5 11:01:18.517: INFO: Trying to get logs from node hunter-worker pod downward-api-d5469ace-beae-11ea-9e48-0242ac110017 container dapi-container:
STEP: delete the pod
Jul 5 11:01:18.594: INFO: Waiting for pod downward-api-d5469ace-beae-11ea-9e48-0242ac110017 to disappear
Jul 5 11:01:18.599: INFO: Pod downward-api-d5469ace-beae-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:01:18.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-tgzcb" for this suite.
Jul 5 11:01:24.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 11:01:24.704: INFO: namespace: e2e-tests-downward-api-tgzcb, resource: bindings, ignored listing per whitelist
Jul 5 11:01:24.736: INFO: namespace e2e-tests-downward-api-tgzcb deletion completed in 6.133473036s
• [SLOW TEST:15.394 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 11:01:24.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-de1cdd29-beae-11ea-9e48-0242ac110017
STEP: Creating a pod to test consume secrets
Jul 5 11:01:24.867: INFO: Waiting up to 5m0s for pod "pod-secrets-de22acf4-beae-11ea-9e48-0242ac110017" in namespace "e2e-tests-secrets-nj66z" to be "success or failure"
Jul 5 11:01:24.920: INFO: Pod "pod-secrets-de22acf4-beae-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 52.961091ms
Jul 5 11:01:26.924: INFO: Pod "pod-secrets-de22acf4-beae-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057285805s
Jul 5 11:01:28.928: INFO: Pod "pod-secrets-de22acf4-beae-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061302348s
STEP: Saw pod success
Jul 5 11:01:28.929: INFO: Pod "pod-secrets-de22acf4-beae-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul 5 11:01:28.931: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-de22acf4-beae-11ea-9e48-0242ac110017 container secret-volume-test:
STEP: delete the pod
Jul 5 11:01:28.965: INFO: Waiting for pod pod-secrets-de22acf4-beae-11ea-9e48-0242ac110017 to disappear
Jul 5 11:01:28.979: INFO: Pod pod-secrets-de22acf4-beae-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:01:28.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-nj66z" for this suite.
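The repeated "Waiting up to 5m0s for pod … to be 'success or failure'" sequences in this log come from a poll-until-condition loop: the framework re-checks the pod phase at a fixed interval and stops on a terminal phase or on timeout. A minimal illustrative sketch of that pattern follows; `wait_for_condition` and the simulated phase sequence are hypothetical, not the actual Kubernetes e2e framework code.

```python
import time

def wait_for_condition(timeout, interval, cond):
    """Poll cond() every `interval` seconds until it returns True or
    `timeout` seconds elapse. Returns True on success, False on timeout.
    (Illustrative sketch of the e2e wait loop, not the real framework.)"""
    deadline = time.monotonic() + timeout
    while True:
        if cond():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)

# Simulate a pod that reports Pending twice, then Succeeded,
# mirroring the Pending -> Pending -> Succeeded lines in the log.
phases = iter(["Pending", "Pending", "Succeeded"])
ok = wait_for_condition(1.0, 0.01, lambda: next(phases) in ("Succeeded", "Failed"))
```

In the real suite the condition is a client-go query for the pod's phase and the interval is a couple of seconds, which is why consecutive log lines are roughly 2s apart in elapsed time.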
Jul 5 11:01:35.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:01:35.125: INFO: namespace: e2e-tests-secrets-nj66z, resource: bindings, ignored listing per whitelist Jul 5 11:01:35.180: INFO: namespace e2e-tests-secrets-nj66z deletion completed in 6.19712225s • [SLOW TEST:10.444 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 11:01:35.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jul 5 11:01:35.344: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 5 11:01:35.347: INFO: Number of nodes with available pods: 0
Jul 5 11:01:35.347: INFO: Node hunter-worker is running more than one daemon pod
Jul 5 11:01:36.353: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 5 11:01:36.358: INFO: Number of nodes with available pods: 0
Jul 5 11:01:36.358: INFO: Node hunter-worker is running more than one daemon pod
Jul 5 11:01:37.552: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 5 11:01:37.637: INFO: Number of nodes with available pods: 0
Jul 5 11:01:37.637: INFO: Node hunter-worker is running more than one daemon pod
Jul 5 11:01:38.352: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 5 11:01:38.355: INFO: Number of nodes with available pods: 0
Jul 5 11:01:38.355: INFO: Node hunter-worker is running more than one daemon pod
Jul 5 11:01:39.352: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 5 11:01:39.356: INFO: Number of nodes with available pods: 1
Jul 5 11:01:39.356: INFO: Node hunter-worker is running more than one daemon pod
Jul 5 11:01:40.352: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 5 11:01:40.356: INFO: Number of nodes with available pods: 2
Jul 5 11:01:40.356: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jul 5 11:01:40.395: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 5 11:01:40.398: INFO: Number of nodes with available pods: 1
Jul 5 11:01:40.398: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 5 11:01:41.403: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 5 11:01:41.406: INFO: Number of nodes with available pods: 1
Jul 5 11:01:41.406: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 5 11:01:42.403: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 5 11:01:42.406: INFO: Number of nodes with available pods: 1
Jul 5 11:01:42.406: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 5 11:01:43.403: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 5 11:01:43.407: INFO: Number of nodes with available pods: 1
Jul 5 11:01:43.407: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 5 11:01:44.404: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 5 11:01:44.407: INFO: Number of nodes with available pods: 1
Jul 5 11:01:44.407: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 5 11:01:45.404: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 5 11:01:45.407: INFO: Number of nodes with available pods: 1
Jul 5 11:01:45.407: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 5 11:01:46.403: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 5 11:01:46.407: INFO: Number of nodes with available pods: 1
Jul 5 11:01:46.407: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 5 11:01:47.451: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 5 11:01:47.454: INFO: Number of nodes with available pods: 1
Jul 5 11:01:47.454: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 5 11:01:48.403: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 5 11:01:48.407: INFO: Number of nodes with available pods: 1
Jul 5 11:01:48.407: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 5 11:01:49.403: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 5 11:01:49.407: INFO: Number of nodes with available pods: 1
Jul 5 11:01:49.407: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 5 11:01:50.403: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 5 11:01:50.407: INFO: Number of nodes with available pods: 1
Jul 5 11:01:50.407: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 5 11:01:51.403: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 5 11:01:51.405: INFO: Number of nodes with available pods: 1
Jul 5 11:01:51.405: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 5 11:01:52.637: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 5 11:01:52.691: INFO: Number of nodes with available pods: 1
Jul 5 11:01:52.691: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 5 11:01:53.406: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 5 11:01:53.409: INFO: Number of nodes with available pods: 1
Jul 5 11:01:53.409: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 5 11:01:55.068: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 5 11:01:55.292: INFO: Number of nodes with available pods: 1
Jul 5 11:01:55.292: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 5 11:01:55.486: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 5 11:01:55.490: INFO: Number of nodes with available pods: 1
Jul 5 11:01:55.490: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 5 11:01:56.408: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 5 11:01:56.410: INFO: Number of nodes with available pods: 1
Jul 5 11:01:56.410: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 5 11:01:57.468: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 5 11:01:57.830: INFO: Number of nodes with available pods: 1
Jul 5 11:01:57.830: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 5 11:01:58.521: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 5 11:01:58.524: INFO: Number of nodes with available pods: 1
Jul 5 11:01:58.524: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 5 11:01:59.403: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 5 11:01:59.407: INFO: Number of nodes with available pods: 1
Jul 5 11:01:59.407: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 5 11:02:00.480: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 5 11:02:00.482: INFO: Number of nodes with available pods: 1
Jul 5 11:02:00.482: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 5 11:02:01.402: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 5 11:02:01.405: INFO: Number of nodes with available pods: 2
Jul 5 11:02:01.405: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-jfcr5, will wait for the garbage collector to delete the pods
Jul 5 11:02:01.465: INFO: Deleting DaemonSet.extensions daemon-set took: 5.770602ms
Jul 5 11:02:01.565: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.270025ms
Jul 5 11:02:13.869: INFO: Number of nodes with available pods: 0
Jul 5 11:02:13.869: INFO: Number of running nodes: 0, number of available pods: 0
Jul 5 11:02:13.874: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-jfcr5/daemonsets","resourceVersion":"221511"},"items":null}
Jul 5 11:02:13.877: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-jfcr5/pods","resourceVersion":"221511"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:02:13.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-jfcr5" for this suite.
Jul 5 11:02:19.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 11:02:19.929: INFO: namespace: e2e-tests-daemonsets-jfcr5, resource: bindings, ignored listing per whitelist
Jul 5 11:02:19.995: INFO: namespace e2e-tests-daemonsets-jfcr5 deletion completed in 6.106466054s
• [SLOW TEST:44.816 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 11:02:19.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 5 11:02:20.141: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ff0f3b1d-beae-11ea-9e48-0242ac110017" in namespace "e2e-tests-downward-api-tdqk6" to be "success or failure"
Jul 5 11:02:20.148: INFO: Pod "downwardapi-volume-ff0f3b1d-beae-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 6.83153ms
Jul 5 11:02:22.152: INFO: Pod "downwardapi-volume-ff0f3b1d-beae-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010974332s
Jul 5 11:02:24.234: INFO: Pod "downwardapi-volume-ff0f3b1d-beae-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093308462s
Jul 5 11:02:26.239: INFO: Pod "downwardapi-volume-ff0f3b1d-beae-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.097891151s
STEP: Saw pod success
Jul 5 11:02:26.239: INFO: Pod "downwardapi-volume-ff0f3b1d-beae-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul 5 11:02:26.242: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-ff0f3b1d-beae-11ea-9e48-0242ac110017 container client-container:
STEP: delete the pod
Jul 5 11:02:26.287: INFO: Waiting for pod downwardapi-volume-ff0f3b1d-beae-11ea-9e48-0242ac110017 to disappear
Jul 5 11:02:26.298: INFO: Pod downwardapi-volume-ff0f3b1d-beae-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:02:26.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-tdqk6" for this suite.
Jul 5 11:02:32.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 11:02:32.352: INFO: namespace: e2e-tests-downward-api-tdqk6, resource: bindings, ignored listing per whitelist
Jul 5 11:02:32.448: INFO: namespace e2e-tests-downward-api-tdqk6 deletion completed in 6.145970882s
• [SLOW TEST:12.453 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 11:02:32.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul 5 11:02:32.623: INFO: Waiting up to 5m0s for pod "pod-0676be4b-beaf-11ea-9e48-0242ac110017" in namespace "e2e-tests-emptydir-7p8lh" to be "success or failure"
Jul 5 11:02:32.636: INFO: Pod "pod-0676be4b-beaf-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 13.025191ms
Jul 5 11:02:34.640: INFO: Pod "pod-0676be4b-beaf-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016881024s
Jul 5 11:02:36.647: INFO: Pod "pod-0676be4b-beaf-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023723073s
STEP: Saw pod success
Jul 5 11:02:36.647: INFO: Pod "pod-0676be4b-beaf-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul 5 11:02:36.649: INFO: Trying to get logs from node hunter-worker2 pod pod-0676be4b-beaf-11ea-9e48-0242ac110017 container test-container:
STEP: delete the pod
Jul 5 11:02:36.669: INFO: Waiting for pod pod-0676be4b-beaf-11ea-9e48-0242ac110017 to disappear
Jul 5 11:02:36.691: INFO: Pod pod-0676be4b-beaf-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:02:36.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-7p8lh" for this suite.
Jul 5 11:02:42.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 11:02:42.911: INFO: namespace: e2e-tests-emptydir-7p8lh, resource: bindings, ignored listing per whitelist
Jul 5 11:02:42.984: INFO: namespace e2e-tests-emptydir-7p8lh deletion completed in 6.289245424s
• [SLOW TEST:10.536 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 11:02:42.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 5 11:02:43.108: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0cc51bc6-beaf-11ea-9e48-0242ac110017" in namespace "e2e-tests-downward-api-2sjmc" to be "success or failure"
Jul 5 11:02:43.125: INFO: Pod "downwardapi-volume-0cc51bc6-beaf-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 17.446294ms
Jul 5 11:02:45.129: INFO: Pod "downwardapi-volume-0cc51bc6-beaf-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021656431s
Jul 5 11:02:47.134: INFO: Pod "downwardapi-volume-0cc51bc6-beaf-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026136079s
STEP: Saw pod success
Jul 5 11:02:47.134: INFO: Pod "downwardapi-volume-0cc51bc6-beaf-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul 5 11:02:47.137: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-0cc51bc6-beaf-11ea-9e48-0242ac110017 container client-container:
STEP: delete the pod
Jul 5 11:02:47.225: INFO: Waiting for pod downwardapi-volume-0cc51bc6-beaf-11ea-9e48-0242ac110017 to disappear
Jul 5 11:02:47.263: INFO: Pod downwardapi-volume-0cc51bc6-beaf-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:02:47.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-2sjmc" for this suite.
Jul 5 11:02:53.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 11:02:53.367: INFO: namespace: e2e-tests-downward-api-2sjmc, resource: bindings, ignored listing per whitelist
Jul 5 11:02:53.367: INFO: namespace e2e-tests-downward-api-2sjmc deletion completed in 6.099663433s
• [SLOW TEST:10.382 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 11:02:53.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-12fa76bc-beaf-11ea-9e48-0242ac110017
STEP: Creating a pod to test consume secrets
Jul 5 11:02:53.540: INFO: Waiting up to 5m0s for pod "pod-secrets-12fcc4a0-beaf-11ea-9e48-0242ac110017" in namespace "e2e-tests-secrets-tgknz" to be "success or failure"
Jul 5 11:02:53.635: INFO: Pod "pod-secrets-12fcc4a0-beaf-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 94.920547ms
Jul 5 11:02:55.821: INFO: Pod "pod-secrets-12fcc4a0-beaf-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.280787614s
Jul 5 11:02:57.825: INFO: Pod "pod-secrets-12fcc4a0-beaf-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.284548114s
STEP: Saw pod success
Jul 5 11:02:57.825: INFO: Pod "pod-secrets-12fcc4a0-beaf-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul 5 11:02:57.827: INFO: Trying to get logs from node hunter-worker pod pod-secrets-12fcc4a0-beaf-11ea-9e48-0242ac110017 container secret-volume-test:
STEP: delete the pod
Jul 5 11:02:57.845: INFO: Waiting for pod pod-secrets-12fcc4a0-beaf-11ea-9e48-0242ac110017 to disappear
Jul 5 11:02:57.850: INFO: Pod pod-secrets-12fcc4a0-beaf-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:02:57.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-tgknz" for this suite.
Jul 5 11:03:03.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 11:03:03.941: INFO: namespace: e2e-tests-secrets-tgknz, resource: bindings, ignored listing per whitelist
Jul 5 11:03:04.025: INFO: namespace e2e-tests-secrets-tgknz deletion completed in 6.172006632s
• [SLOW TEST:10.657 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 11:03:04.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 5 11:03:04.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:03:10.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-t964h" for this suite.
Jul 5 11:03:54.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 11:03:54.608: INFO: namespace: e2e-tests-pods-t964h, resource: bindings, ignored listing per whitelist
Jul 5 11:03:54.643: INFO: namespace e2e-tests-pods-t964h deletion completed in 44.2671761s
• [SLOW TEST:50.618 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 11:03:54.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 5 11:03:54.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-hfvdk'
Jul 5 11:03:59.393: INFO: stderr: ""
Jul 5 11:03:59.393: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Jul 5 11:03:59.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-hfvdk'
Jul 5 11:04:02.472: INFO: stderr: ""
Jul 5 11:04:02.472: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:04:02.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hfvdk" for this suite.
Jul 5 11:04:08.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 11:04:08.661: INFO: namespace: e2e-tests-kubectl-hfvdk, resource: bindings, ignored listing per whitelist
Jul 5 11:04:08.683: INFO: namespace e2e-tests-kubectl-hfvdk deletion completed in 6.183494438s
• [SLOW TEST:14.039 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 11:04:08.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jul 5 11:04:26.858: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5szxj PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 5 11:04:26.858: INFO: >>> kubeConfig: /root/.kube/config
I0705 11:04:26.895982 6 log.go:172] (0xc0024202c0) (0xc001e95900) Create stream
I0705 11:04:26.896025 6 log.go:172] (0xc0024202c0) (0xc001e95900) Stream added, broadcasting: 1
I0705 11:04:26.904010 6 log.go:172] (0xc0024202c0) Reply frame received for 1
I0705 11:04:26.904068 6 log.go:172] (0xc0024202c0) (0xc0009a6000) Create stream
I0705 11:04:26.904082 6 log.go:172] (0xc0024202c0) (0xc0009a6000) Stream added, broadcasting: 3
I0705 11:04:26.906015 6 log.go:172] (0xc0024202c0) Reply frame received for 3
I0705 11:04:26.906075 6 log.go:172] (0xc0024202c0) (0xc001ac6000) Create stream
I0705 11:04:26.906102 6 log.go:172] (0xc0024202c0) (0xc001ac6000) Stream added, broadcasting: 5
I0705 11:04:26.907010 6 log.go:172] (0xc0024202c0) Reply frame received for 5
I0705 11:04:26.983712 6 log.go:172] (0xc0024202c0) Data frame received for 5
I0705 11:04:26.983750 6 log.go:172] (0xc001ac6000) (5) Data frame handling
I0705 11:04:26.983775 6 log.go:172] (0xc0024202c0) Data frame received for 3
I0705 11:04:26.983788 6 log.go:172] (0xc0009a6000) (3) Data frame handling
I0705 11:04:26.983826 6 log.go:172] (0xc0009a6000) (3) Data frame sent
I0705 11:04:26.983852 6 log.go:172] (0xc0024202c0) Data frame received for 3
I0705 11:04:26.983869 6 log.go:172] (0xc0009a6000) (3) Data frame handling
I0705 11:04:26.985896 6 log.go:172] (0xc0024202c0) Data frame received for 1
I0705 11:04:26.985933 6 log.go:172] (0xc001e95900) (1) Data frame handling
I0705 11:04:26.985954 6 log.go:172] (0xc001e95900) (1) Data frame sent
I0705 11:04:26.985972 6 log.go:172] (0xc0024202c0) (0xc001e95900) Stream removed, broadcasting: 1
I0705 11:04:26.985987 6 log.go:172] (0xc0024202c0) Go away received
I0705 11:04:26.986237 6 log.go:172] (0xc0024202c0) (0xc001e95900) Stream removed, broadcasting: 1
I0705 11:04:26.986271 6 log.go:172] (0xc0024202c0) (0xc0009a6000) Stream removed, broadcasting: 3
I0705 11:04:26.986298 6 log.go:172] (0xc0024202c0) (0xc001ac6000) Stream removed, broadcasting: 5
Jul 5 11:04:26.986: INFO: Exec stderr: ""
Jul 5 11:04:26.986: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5szxj PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 5 11:04:26.986: INFO: >>> kubeConfig: /root/.kube/config
I0705 11:04:27.018634 6 log.go:172] (0xc0009b7130) (0xc0009a6820) Create stream
I0705 11:04:27.018664 6 log.go:172] (0xc0009b7130) (0xc0009a6820) Stream added, broadcasting: 1
I0705 11:04:27.020724 6 log.go:172] (0xc0009b7130) Reply frame received for 1
I0705 11:04:27.020776 6 log.go:172] (0xc0009b7130) (0xc001ac60a0) Create stream
I0705 11:04:27.020790 6 log.go:172] (0xc0009b7130) (0xc001ac60a0) Stream added, broadcasting: 3
I0705 11:04:27.022152 6 log.go:172] (0xc0009b7130) Reply frame received for 3
I0705 11:04:27.022205 6 log.go:172] (0xc0009b7130) (0xc0006c20a0) Create stream
I0705 11:04:27.022221 6 log.go:172] (0xc0009b7130) (0xc0006c20a0) Stream added, broadcasting: 5
I0705 11:04:27.023128 6 log.go:172] (0xc0009b7130) Reply frame received for 5
I0705 11:04:27.074646 6 log.go:172] (0xc0009b7130) Data frame received for 3
I0705 11:04:27.074748 6 log.go:172] (0xc001ac60a0) (3) Data frame handling
I0705 11:04:27.074813 6 log.go:172] (0xc0009b7130) Data frame received for 5
I0705 11:04:27.074862 6 log.go:172] (0xc0006c20a0) (5) Data frame handling
I0705 11:04:27.074892 6 log.go:172] (0xc001ac60a0) (3) Data frame sent
I0705 11:04:27.074907 6 log.go:172] (0xc0009b7130) Data frame received for 3
I0705 11:04:27.074920 6 log.go:172] (0xc001ac60a0) (3) Data frame handling
I0705 11:04:27.076393 6 log.go:172] (0xc0009b7130) Data frame received for 1
I0705 11:04:27.076409 6 log.go:172] (0xc0009a6820) (1) Data frame handling
I0705 11:04:27.076431 6 log.go:172] (0xc0009a6820) (1) Data frame sent
I0705 11:04:27.076443 6 log.go:172] (0xc0009b7130) (0xc0009a6820) Stream removed, broadcasting: 1
I0705 11:04:27.076455 6 log.go:172] (0xc0009b7130) Go away received
I0705 11:04:27.076664 6 log.go:172] (0xc0009b7130) (0xc0009a6820) Stream removed, broadcasting: 1
I0705 11:04:27.076695 6 log.go:172] (0xc0009b7130) (0xc001ac60a0) Stream removed, broadcasting: 3
I0705 11:04:27.076718 6 log.go:172] (0xc0009b7130) (0xc0006c20a0) Stream removed, broadcasting: 5
Jul 5 11:04:27.076: INFO: Exec stderr: ""
Jul 5 11:04:27.076: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5szxj PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 5 11:04:27.076: INFO: >>> kubeConfig: /root/.kube/config
I0705 11:04:27.127970 6 log.go:172] (0xc0009b7600) (0xc0009a6be0) Create stream
I0705 11:04:27.128008 6 log.go:172] (0xc0009b7600) (0xc0009a6be0) Stream added, broadcasting: 1
I0705 11:04:27.130883 6 log.go:172] (0xc0009b7600) Reply frame received for 1
I0705 11:04:27.130948 6 log.go:172] (0xc0009b7600) (0xc0006c21e0) Create stream
I0705 11:04:27.130977 6
log.go:172] (0xc0009b7600) (0xc0006c21e0) Stream added, broadcasting: 3 I0705 11:04:27.132167 6 log.go:172] (0xc0009b7600) Reply frame received for 3 I0705 11:04:27.132220 6 log.go:172] (0xc0009b7600) (0xc001ac6140) Create stream I0705 11:04:27.132245 6 log.go:172] (0xc0009b7600) (0xc001ac6140) Stream added, broadcasting: 5 I0705 11:04:27.133490 6 log.go:172] (0xc0009b7600) Reply frame received for 5 I0705 11:04:27.202365 6 log.go:172] (0xc0009b7600) Data frame received for 5 I0705 11:04:27.202404 6 log.go:172] (0xc001ac6140) (5) Data frame handling I0705 11:04:27.202442 6 log.go:172] (0xc0009b7600) Data frame received for 3 I0705 11:04:27.202482 6 log.go:172] (0xc0006c21e0) (3) Data frame handling I0705 11:04:27.202511 6 log.go:172] (0xc0006c21e0) (3) Data frame sent I0705 11:04:27.202531 6 log.go:172] (0xc0009b7600) Data frame received for 3 I0705 11:04:27.202548 6 log.go:172] (0xc0006c21e0) (3) Data frame handling I0705 11:04:27.204021 6 log.go:172] (0xc0009b7600) Data frame received for 1 I0705 11:04:27.204050 6 log.go:172] (0xc0009a6be0) (1) Data frame handling I0705 11:04:27.204077 6 log.go:172] (0xc0009a6be0) (1) Data frame sent I0705 11:04:27.204106 6 log.go:172] (0xc0009b7600) (0xc0009a6be0) Stream removed, broadcasting: 1 I0705 11:04:27.204168 6 log.go:172] (0xc0009b7600) Go away received I0705 11:04:27.204225 6 log.go:172] (0xc0009b7600) (0xc0009a6be0) Stream removed, broadcasting: 1 I0705 11:04:27.204248 6 log.go:172] (0xc0009b7600) (0xc0006c21e0) Stream removed, broadcasting: 3 I0705 11:04:27.204264 6 log.go:172] (0xc0009b7600) (0xc001ac6140) Stream removed, broadcasting: 5 Jul 5 11:04:27.204: INFO: Exec stderr: "" Jul 5 11:04:27.204: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5szxj PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 5 11:04:27.204: INFO: >>> kubeConfig: /root/.kube/config I0705 11:04:27.242159 6 log.go:172] 
(0xc0009b7ad0) (0xc0009a6f00) Create stream I0705 11:04:27.242192 6 log.go:172] (0xc0009b7ad0) (0xc0009a6f00) Stream added, broadcasting: 1 I0705 11:04:27.245378 6 log.go:172] (0xc0009b7ad0) Reply frame received for 1 I0705 11:04:27.245421 6 log.go:172] (0xc0009b7ad0) (0xc0009a6fa0) Create stream I0705 11:04:27.245435 6 log.go:172] (0xc0009b7ad0) (0xc0009a6fa0) Stream added, broadcasting: 3 I0705 11:04:27.246552 6 log.go:172] (0xc0009b7ad0) Reply frame received for 3 I0705 11:04:27.246593 6 log.go:172] (0xc0009b7ad0) (0xc0009a7040) Create stream I0705 11:04:27.246608 6 log.go:172] (0xc0009b7ad0) (0xc0009a7040) Stream added, broadcasting: 5 I0705 11:04:27.247555 6 log.go:172] (0xc0009b7ad0) Reply frame received for 5 I0705 11:04:27.298716 6 log.go:172] (0xc0009b7ad0) Data frame received for 5 I0705 11:04:27.298748 6 log.go:172] (0xc0009a7040) (5) Data frame handling I0705 11:04:27.298786 6 log.go:172] (0xc0009b7ad0) Data frame received for 3 I0705 11:04:27.298813 6 log.go:172] (0xc0009a6fa0) (3) Data frame handling I0705 11:04:27.298848 6 log.go:172] (0xc0009a6fa0) (3) Data frame sent I0705 11:04:27.298865 6 log.go:172] (0xc0009b7ad0) Data frame received for 3 I0705 11:04:27.298878 6 log.go:172] (0xc0009a6fa0) (3) Data frame handling I0705 11:04:27.300326 6 log.go:172] (0xc0009b7ad0) Data frame received for 1 I0705 11:04:27.300353 6 log.go:172] (0xc0009a6f00) (1) Data frame handling I0705 11:04:27.300372 6 log.go:172] (0xc0009a6f00) (1) Data frame sent I0705 11:04:27.300393 6 log.go:172] (0xc0009b7ad0) (0xc0009a6f00) Stream removed, broadcasting: 1 I0705 11:04:27.300412 6 log.go:172] (0xc0009b7ad0) Go away received I0705 11:04:27.300486 6 log.go:172] (0xc0009b7ad0) (0xc0009a6f00) Stream removed, broadcasting: 1 I0705 11:04:27.300504 6 log.go:172] (0xc0009b7ad0) (0xc0009a6fa0) Stream removed, broadcasting: 3 I0705 11:04:27.300517 6 log.go:172] (0xc0009b7ad0) (0xc0009a7040) Stream removed, broadcasting: 5 Jul 5 11:04:27.300: INFO: Exec stderr: "" STEP: Verifying 
/etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jul 5 11:04:27.300: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5szxj PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 5 11:04:27.300: INFO: >>> kubeConfig: /root/.kube/config I0705 11:04:27.352547 6 log.go:172] (0xc0009e7ce0) (0xc001ac63c0) Create stream I0705 11:04:27.352605 6 log.go:172] (0xc0009e7ce0) (0xc001ac63c0) Stream added, broadcasting: 1 I0705 11:04:27.358461 6 log.go:172] (0xc0009e7ce0) Reply frame received for 1 I0705 11:04:27.358532 6 log.go:172] (0xc0009e7ce0) (0xc0006c2320) Create stream I0705 11:04:27.358559 6 log.go:172] (0xc0009e7ce0) (0xc0006c2320) Stream added, broadcasting: 3 I0705 11:04:27.360275 6 log.go:172] (0xc0009e7ce0) Reply frame received for 3 I0705 11:04:27.360345 6 log.go:172] (0xc0009e7ce0) (0xc00235e000) Create stream I0705 11:04:27.360374 6 log.go:172] (0xc0009e7ce0) (0xc00235e000) Stream added, broadcasting: 5 I0705 11:04:27.361574 6 log.go:172] (0xc0009e7ce0) Reply frame received for 5 I0705 11:04:27.428711 6 log.go:172] (0xc0009e7ce0) Data frame received for 5 I0705 11:04:27.428750 6 log.go:172] (0xc00235e000) (5) Data frame handling I0705 11:04:27.428784 6 log.go:172] (0xc0009e7ce0) Data frame received for 3 I0705 11:04:27.428809 6 log.go:172] (0xc0006c2320) (3) Data frame handling I0705 11:04:27.428838 6 log.go:172] (0xc0006c2320) (3) Data frame sent I0705 11:04:27.428888 6 log.go:172] (0xc0009e7ce0) Data frame received for 3 I0705 11:04:27.428912 6 log.go:172] (0xc0006c2320) (3) Data frame handling I0705 11:04:27.430517 6 log.go:172] (0xc0009e7ce0) Data frame received for 1 I0705 11:04:27.430540 6 log.go:172] (0xc001ac63c0) (1) Data frame handling I0705 11:04:27.430554 6 log.go:172] (0xc001ac63c0) (1) Data frame sent I0705 11:04:27.430568 6 log.go:172] (0xc0009e7ce0) (0xc001ac63c0) Stream removed, broadcasting: 1 I0705 
11:04:27.430677 6 log.go:172] (0xc0009e7ce0) (0xc001ac63c0) Stream removed, broadcasting: 1 I0705 11:04:27.430695 6 log.go:172] (0xc0009e7ce0) (0xc0006c2320) Stream removed, broadcasting: 3 I0705 11:04:27.430765 6 log.go:172] (0xc0009e7ce0) Go away received I0705 11:04:27.430862 6 log.go:172] (0xc0009e7ce0) (0xc00235e000) Stream removed, broadcasting: 5 Jul 5 11:04:27.430: INFO: Exec stderr: "" Jul 5 11:04:27.430: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5szxj PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 5 11:04:27.430: INFO: >>> kubeConfig: /root/.kube/config I0705 11:04:27.462348 6 log.go:172] (0xc0000ead10) (0xc00235e320) Create stream I0705 11:04:27.462394 6 log.go:172] (0xc0000ead10) (0xc00235e320) Stream added, broadcasting: 1 I0705 11:04:27.463911 6 log.go:172] (0xc0000ead10) Reply frame received for 1 I0705 11:04:27.463952 6 log.go:172] (0xc0000ead10) (0xc002238000) Create stream I0705 11:04:27.463967 6 log.go:172] (0xc0000ead10) (0xc002238000) Stream added, broadcasting: 3 I0705 11:04:27.465091 6 log.go:172] (0xc0000ead10) Reply frame received for 3 I0705 11:04:27.465282 6 log.go:172] (0xc0000ead10) (0xc001ac6460) Create stream I0705 11:04:27.465296 6 log.go:172] (0xc0000ead10) (0xc001ac6460) Stream added, broadcasting: 5 I0705 11:04:27.466160 6 log.go:172] (0xc0000ead10) Reply frame received for 5 I0705 11:04:27.528562 6 log.go:172] (0xc0000ead10) Data frame received for 3 I0705 11:04:27.528588 6 log.go:172] (0xc002238000) (3) Data frame handling I0705 11:04:27.528606 6 log.go:172] (0xc002238000) (3) Data frame sent I0705 11:04:27.528614 6 log.go:172] (0xc0000ead10) Data frame received for 3 I0705 11:04:27.528629 6 log.go:172] (0xc002238000) (3) Data frame handling I0705 11:04:27.528716 6 log.go:172] (0xc0000ead10) Data frame received for 5 I0705 11:04:27.528730 6 log.go:172] (0xc001ac6460) (5) Data frame handling I0705 
11:04:27.530524 6 log.go:172] (0xc0000ead10) Data frame received for 1 I0705 11:04:27.530538 6 log.go:172] (0xc00235e320) (1) Data frame handling I0705 11:04:27.530546 6 log.go:172] (0xc00235e320) (1) Data frame sent I0705 11:04:27.530563 6 log.go:172] (0xc0000ead10) (0xc00235e320) Stream removed, broadcasting: 1 I0705 11:04:27.530582 6 log.go:172] (0xc0000ead10) Go away received I0705 11:04:27.530664 6 log.go:172] (0xc0000ead10) (0xc00235e320) Stream removed, broadcasting: 1 I0705 11:04:27.530695 6 log.go:172] (0xc0000ead10) (0xc002238000) Stream removed, broadcasting: 3 I0705 11:04:27.530723 6 log.go:172] (0xc0000ead10) (0xc001ac6460) Stream removed, broadcasting: 5 Jul 5 11:04:27.530: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jul 5 11:04:27.530: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5szxj PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 5 11:04:27.530: INFO: >>> kubeConfig: /root/.kube/config I0705 11:04:27.566918 6 log.go:172] (0xc002420210) (0xc001ac66e0) Create stream I0705 11:04:27.566948 6 log.go:172] (0xc002420210) (0xc001ac66e0) Stream added, broadcasting: 1 I0705 11:04:27.572124 6 log.go:172] (0xc002420210) Reply frame received for 1 I0705 11:04:27.572206 6 log.go:172] (0xc002420210) (0xc001d78000) Create stream I0705 11:04:27.572231 6 log.go:172] (0xc002420210) (0xc001d78000) Stream added, broadcasting: 3 I0705 11:04:27.573677 6 log.go:172] (0xc002420210) Reply frame received for 3 I0705 11:04:27.573800 6 log.go:172] (0xc002420210) (0xc00235e3c0) Create stream I0705 11:04:27.573812 6 log.go:172] (0xc002420210) (0xc00235e3c0) Stream added, broadcasting: 5 I0705 11:04:27.574623 6 log.go:172] (0xc002420210) Reply frame received for 5 I0705 11:04:27.633972 6 log.go:172] (0xc002420210) Data frame received for 5 I0705 11:04:27.634006 6 log.go:172] 
(0xc00235e3c0) (5) Data frame handling I0705 11:04:27.634051 6 log.go:172] (0xc002420210) Data frame received for 3 I0705 11:04:27.634101 6 log.go:172] (0xc001d78000) (3) Data frame handling I0705 11:04:27.634142 6 log.go:172] (0xc001d78000) (3) Data frame sent I0705 11:04:27.634181 6 log.go:172] (0xc002420210) Data frame received for 3 I0705 11:04:27.634204 6 log.go:172] (0xc001d78000) (3) Data frame handling I0705 11:04:27.635492 6 log.go:172] (0xc002420210) Data frame received for 1 I0705 11:04:27.635523 6 log.go:172] (0xc001ac66e0) (1) Data frame handling I0705 11:04:27.635540 6 log.go:172] (0xc001ac66e0) (1) Data frame sent I0705 11:04:27.635561 6 log.go:172] (0xc002420210) (0xc001ac66e0) Stream removed, broadcasting: 1 I0705 11:04:27.635670 6 log.go:172] (0xc002420210) (0xc001ac66e0) Stream removed, broadcasting: 1 I0705 11:04:27.635691 6 log.go:172] (0xc002420210) (0xc001d78000) Stream removed, broadcasting: 3 I0705 11:04:27.635704 6 log.go:172] (0xc002420210) (0xc00235e3c0) Stream removed, broadcasting: 5 Jul 5 11:04:27.635: INFO: Exec stderr: "" Jul 5 11:04:27.635: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5szxj PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 5 11:04:27.635: INFO: >>> kubeConfig: /root/.kube/config I0705 11:04:27.635811 6 log.go:172] (0xc002420210) Go away received I0705 11:04:27.663293 6 log.go:172] (0xc000aca2c0) (0xc001d78280) Create stream I0705 11:04:27.663323 6 log.go:172] (0xc000aca2c0) (0xc001d78280) Stream added, broadcasting: 1 I0705 11:04:27.665835 6 log.go:172] (0xc000aca2c0) Reply frame received for 1 I0705 11:04:27.665872 6 log.go:172] (0xc000aca2c0) (0xc0022380a0) Create stream I0705 11:04:27.665883 6 log.go:172] (0xc000aca2c0) (0xc0022380a0) Stream added, broadcasting: 3 I0705 11:04:27.667047 6 log.go:172] (0xc000aca2c0) Reply frame received for 3 I0705 11:04:27.667088 6 log.go:172] 
(0xc000aca2c0) (0xc00235e460) Create stream I0705 11:04:27.667101 6 log.go:172] (0xc000aca2c0) (0xc00235e460) Stream added, broadcasting: 5 I0705 11:04:27.668356 6 log.go:172] (0xc000aca2c0) Reply frame received for 5 I0705 11:04:27.728721 6 log.go:172] (0xc000aca2c0) Data frame received for 5 I0705 11:04:27.728758 6 log.go:172] (0xc00235e460) (5) Data frame handling I0705 11:04:27.728800 6 log.go:172] (0xc000aca2c0) Data frame received for 3 I0705 11:04:27.728835 6 log.go:172] (0xc0022380a0) (3) Data frame handling I0705 11:04:27.728874 6 log.go:172] (0xc0022380a0) (3) Data frame sent I0705 11:04:27.728902 6 log.go:172] (0xc000aca2c0) Data frame received for 3 I0705 11:04:27.728936 6 log.go:172] (0xc0022380a0) (3) Data frame handling I0705 11:04:27.731056 6 log.go:172] (0xc000aca2c0) Data frame received for 1 I0705 11:04:27.731093 6 log.go:172] (0xc001d78280) (1) Data frame handling I0705 11:04:27.731132 6 log.go:172] (0xc001d78280) (1) Data frame sent I0705 11:04:27.731161 6 log.go:172] (0xc000aca2c0) (0xc001d78280) Stream removed, broadcasting: 1 I0705 11:04:27.731195 6 log.go:172] (0xc000aca2c0) Go away received I0705 11:04:27.731339 6 log.go:172] (0xc000aca2c0) (0xc001d78280) Stream removed, broadcasting: 1 I0705 11:04:27.731371 6 log.go:172] (0xc000aca2c0) (0xc0022380a0) Stream removed, broadcasting: 3 I0705 11:04:27.731393 6 log.go:172] (0xc000aca2c0) (0xc00235e460) Stream removed, broadcasting: 5 Jul 5 11:04:27.731: INFO: Exec stderr: "" Jul 5 11:04:27.731: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5szxj PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 5 11:04:27.731: INFO: >>> kubeConfig: /root/.kube/config I0705 11:04:27.763214 6 log.go:172] (0xc0024209a0) (0xc001ac6960) Create stream I0705 11:04:27.763240 6 log.go:172] (0xc0024209a0) (0xc001ac6960) Stream added, broadcasting: 1 I0705 11:04:27.764857 6 log.go:172] (0xc0024209a0) 
Reply frame received for 1 I0705 11:04:27.764886 6 log.go:172] (0xc0024209a0) (0xc001d78320) Create stream I0705 11:04:27.764896 6 log.go:172] (0xc0024209a0) (0xc001d78320) Stream added, broadcasting: 3 I0705 11:04:27.766146 6 log.go:172] (0xc0024209a0) Reply frame received for 3 I0705 11:04:27.766201 6 log.go:172] (0xc0024209a0) (0xc00235e500) Create stream I0705 11:04:27.766214 6 log.go:172] (0xc0024209a0) (0xc00235e500) Stream added, broadcasting: 5 I0705 11:04:27.766976 6 log.go:172] (0xc0024209a0) Reply frame received for 5 I0705 11:04:27.847654 6 log.go:172] (0xc0024209a0) Data frame received for 5 I0705 11:04:27.847678 6 log.go:172] (0xc00235e500) (5) Data frame handling I0705 11:04:27.847699 6 log.go:172] (0xc0024209a0) Data frame received for 3 I0705 11:04:27.847724 6 log.go:172] (0xc001d78320) (3) Data frame handling I0705 11:04:27.847739 6 log.go:172] (0xc001d78320) (3) Data frame sent I0705 11:04:27.847748 6 log.go:172] (0xc0024209a0) Data frame received for 3 I0705 11:04:27.847754 6 log.go:172] (0xc001d78320) (3) Data frame handling I0705 11:04:27.849379 6 log.go:172] (0xc0024209a0) Data frame received for 1 I0705 11:04:27.849398 6 log.go:172] (0xc001ac6960) (1) Data frame handling I0705 11:04:27.849412 6 log.go:172] (0xc001ac6960) (1) Data frame sent I0705 11:04:27.849422 6 log.go:172] (0xc0024209a0) (0xc001ac6960) Stream removed, broadcasting: 1 I0705 11:04:27.849492 6 log.go:172] (0xc0024209a0) (0xc001ac6960) Stream removed, broadcasting: 1 I0705 11:04:27.849503 6 log.go:172] (0xc0024209a0) (0xc001d78320) Stream removed, broadcasting: 3 I0705 11:04:27.849573 6 log.go:172] (0xc0024209a0) Go away received I0705 11:04:27.849630 6 log.go:172] (0xc0024209a0) (0xc00235e500) Stream removed, broadcasting: 5 Jul 5 11:04:27.849: INFO: Exec stderr: "" Jul 5 11:04:27.849: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5szxj PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false} Jul 5 11:04:27.849: INFO: >>> kubeConfig: /root/.kube/config I0705 11:04:27.878283 6 log.go:172] (0xc0000eb1e0) (0xc00235e780) Create stream I0705 11:04:27.878310 6 log.go:172] (0xc0000eb1e0) (0xc00235e780) Stream added, broadcasting: 1 I0705 11:04:27.880982 6 log.go:172] (0xc0000eb1e0) Reply frame received for 1 I0705 11:04:27.881009 6 log.go:172] (0xc0000eb1e0) (0xc00235e820) Create stream I0705 11:04:27.881019 6 log.go:172] (0xc0000eb1e0) (0xc00235e820) Stream added, broadcasting: 3 I0705 11:04:27.882058 6 log.go:172] (0xc0000eb1e0) Reply frame received for 3 I0705 11:04:27.882103 6 log.go:172] (0xc0000eb1e0) (0xc001d78500) Create stream I0705 11:04:27.882118 6 log.go:172] (0xc0000eb1e0) (0xc001d78500) Stream added, broadcasting: 5 I0705 11:04:27.883076 6 log.go:172] (0xc0000eb1e0) Reply frame received for 5 I0705 11:04:27.935783 6 log.go:172] (0xc0000eb1e0) Data frame received for 3 I0705 11:04:27.935825 6 log.go:172] (0xc0000eb1e0) Data frame received for 5 I0705 11:04:27.935876 6 log.go:172] (0xc001d78500) (5) Data frame handling I0705 11:04:27.936051 6 log.go:172] (0xc00235e820) (3) Data frame handling I0705 11:04:27.936095 6 log.go:172] (0xc00235e820) (3) Data frame sent I0705 11:04:27.936111 6 log.go:172] (0xc0000eb1e0) Data frame received for 3 I0705 11:04:27.936124 6 log.go:172] (0xc00235e820) (3) Data frame handling I0705 11:04:27.937643 6 log.go:172] (0xc0000eb1e0) Data frame received for 1 I0705 11:04:27.937661 6 log.go:172] (0xc00235e780) (1) Data frame handling I0705 11:04:27.937692 6 log.go:172] (0xc00235e780) (1) Data frame sent I0705 11:04:27.937708 6 log.go:172] (0xc0000eb1e0) (0xc00235e780) Stream removed, broadcasting: 1 I0705 11:04:27.937723 6 log.go:172] (0xc0000eb1e0) Go away received I0705 11:04:27.937812 6 log.go:172] (0xc0000eb1e0) (0xc00235e780) Stream removed, broadcasting: 1 I0705 11:04:27.937832 6 log.go:172] (0xc0000eb1e0) (0xc00235e820) Stream removed, broadcasting: 3 I0705 
11:04:27.937844 6 log.go:172] (0xc0000eb1e0) (0xc001d78500) Stream removed, broadcasting: 5
Jul 5 11:04:27.937: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:04:27.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-5szxj" for this suite.
Jul 5 11:05:16.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 11:05:16.059: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-5szxj, resource: bindings, ignored listing per whitelist
Jul 5 11:05:16.106: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-5szxj deletion completed in 48.163823622s
• [SLOW TEST:67.423 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 11:05:16.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 5 11:05:16.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:05:20.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-lxmz8" for this suite.
Jul 5 11:06:04.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 11:06:04.708: INFO: namespace: e2e-tests-pods-lxmz8, resource: bindings, ignored listing per whitelist
Jul 5 11:06:04.749: INFO: namespace e2e-tests-pods-lxmz8 deletion completed in 44.113225456s
• [SLOW TEST:48.643 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 11:06:04.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 5 11:06:04.868: INFO: Waiting up to 5m0s for pod "downwardapi-volume-85046ee8-beaf-11ea-9e48-0242ac110017" in namespace "e2e-tests-downward-api-csspk" to be "success or failure"
Jul 5 11:06:04.936: INFO: Pod "downwardapi-volume-85046ee8-beaf-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 68.612882ms
Jul 5 11:06:06.940: INFO: Pod "downwardapi-volume-85046ee8-beaf-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072392334s
Jul 5 11:06:09.464: INFO: Pod "downwardapi-volume-85046ee8-beaf-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.596568856s
Jul 5 11:06:11.468: INFO: Pod "downwardapi-volume-85046ee8-beaf-11ea-9e48-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 6.600557142s
Jul 5 11:06:13.471: INFO: Pod "downwardapi-volume-85046ee8-beaf-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.603648132s
STEP: Saw pod success
Jul 5 11:06:13.471: INFO: Pod "downwardapi-volume-85046ee8-beaf-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul 5 11:06:13.474: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-85046ee8-beaf-11ea-9e48-0242ac110017 container client-container:
STEP: delete the pod
Jul 5 11:06:13.573: INFO: Waiting for pod downwardapi-volume-85046ee8-beaf-11ea-9e48-0242ac110017 to disappear
Jul 5 11:06:13.577: INFO: Pod downwardapi-volume-85046ee8-beaf-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:06:13.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-csspk" for this suite.
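The downward API volume test above mounts a file whose content is the container's memory request rendered with the default divisor of "1", i.e. a plain byte count. A minimal sketch of that conversion, assuming a hypothetical `parse_quantity` helper (not part of the e2e framework) and binary-suffix quantities only:

```python
# Hypothetical helper mirroring the value the test reads from the mounted
# file: a resourceFieldRef for requests.memory with divisor "1" renders the
# request as a plain byte count.
UNITS = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}

def parse_quantity(q: str) -> int:
    """Convert a binary-suffix Kubernetes quantity (e.g. "32Mi") to bytes."""
    for suffix, mult in UNITS.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * mult
    return int(q)  # plain integer quantity, already in bytes

# A request of 32Mi shows up in the volume file as "33554432".
print(parse_quantity("32Mi"))  # → 33554432
```

The log only shows the pod reaching Succeeded; the assertion inside the test compares the mounted file's content against this kind of byte value.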
Jul 5 11:06:19.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 11:06:19.857: INFO: namespace: e2e-tests-downward-api-csspk, resource: bindings, ignored listing per whitelist
Jul 5 11:06:19.882: INFO: namespace e2e-tests-downward-api-csspk deletion completed in 6.301332634s
• [SLOW TEST:15.132 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 11:06:19.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul 5 11:06:20.041: INFO: Waiting up to 5m0s for pod "pod-8e1073cc-beaf-11ea-9e48-0242ac110017" in namespace "e2e-tests-emptydir-gx8bf" to be "success or failure"
Jul 5 11:06:20.044: INFO: Pod "pod-8e1073cc-beaf-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.59353ms
Jul 5 11:06:22.062: INFO: Pod "pod-8e1073cc-beaf-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020956561s
Jul 5 11:06:24.066: INFO: Pod "pod-8e1073cc-beaf-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024574371s
Jul 5 11:06:26.154: INFO: Pod "pod-8e1073cc-beaf-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11250927s
Jul 5 11:06:28.158: INFO: Pod "pod-8e1073cc-beaf-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.116365816s
STEP: Saw pod success
Jul 5 11:06:28.158: INFO: Pod "pod-8e1073cc-beaf-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul 5 11:06:28.161: INFO: Trying to get logs from node hunter-worker2 pod pod-8e1073cc-beaf-11ea-9e48-0242ac110017 container test-container:
STEP: delete the pod
Jul 5 11:06:28.196: INFO: Waiting for pod pod-8e1073cc-beaf-11ea-9e48-0242ac110017 to disappear
Jul 5 11:06:28.302: INFO: Pod pod-8e1073cc-beaf-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:06:28.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-gx8bf" for this suite.
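The emptydir test above ("0777 on node default medium") boils down to a permission-bits check on the mounted path. A local sketch of just that check, run outside any container (the e2e test performs the equivalent via a mount-tester image):

```python
import os
import stat
import tempfile

# Create a directory, set it to 0777, and verify the mode bits — the same
# property the test asserts for the emptyDir mount on the default medium.
path = tempfile.mkdtemp()
os.chmod(path, 0o777)  # chmod is not masked by umask, unlike mkdir
mode = stat.S_IMODE(os.stat(path).st_mode)
assert mode == 0o777, oct(mode)
print(oct(mode))  # → 0o777
os.rmdir(path)
```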
Jul 5 11:06:34.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 11:06:34.383: INFO: namespace: e2e-tests-emptydir-gx8bf, resource: bindings, ignored listing per whitelist
Jul 5 11:06:34.410: INFO: namespace e2e-tests-emptydir-gx8bf deletion completed in 6.104592597s
• [SLOW TEST:14.528 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 11:06:34.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-h27x7/secret-test-96d02d5c-beaf-11ea-9e48-0242ac110017
STEP: Creating a pod to test consume secrets
Jul 5 11:06:34.910: INFO: Waiting up to 5m0s for pod "pod-configmaps-96d576c6-beaf-11ea-9e48-0242ac110017" in namespace "e2e-tests-secrets-h27x7" to be "success or failure"
Jul 5 11:06:34.932: INFO: Pod "pod-configmaps-96d576c6-beaf-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 22.081613ms
Jul 5 11:06:36.936: INFO: Pod "pod-configmaps-96d576c6-beaf-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026324641s
Jul 5 11:06:38.940: INFO: Pod "pod-configmaps-96d576c6-beaf-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030272961s
STEP: Saw pod success
Jul 5 11:06:38.940: INFO: Pod "pod-configmaps-96d576c6-beaf-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul 5 11:06:38.943: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-96d576c6-beaf-11ea-9e48-0242ac110017 container env-test:
STEP: delete the pod
Jul 5 11:06:38.974: INFO: Waiting for pod pod-configmaps-96d576c6-beaf-11ea-9e48-0242ac110017 to disappear
Jul 5 11:06:38.985: INFO: Pod pod-configmaps-96d576c6-beaf-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:06:38.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-h27x7" for this suite.
Jul 5 11:06:45.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:06:45.171: INFO: namespace: e2e-tests-secrets-h27x7, resource: bindings, ignored listing per whitelist Jul 5 11:06:45.227: INFO: namespace e2e-tests-secrets-h27x7 deletion completed in 6.115247929s • [SLOW TEST:10.817 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 11:06:45.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-xkht STEP: Creating a pod to test atomic-volume-subpath Jul 5 11:06:45.376: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-xkht" in namespace "e2e-tests-subpath-d7hkv" to be "success or failure" Jul 5 11:06:45.396: INFO: Pod "pod-subpath-test-secret-xkht": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.584801ms Jul 5 11:06:47.400: INFO: Pod "pod-subpath-test-secret-xkht": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023743828s Jul 5 11:06:49.404: INFO: Pod "pod-subpath-test-secret-xkht": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027685151s Jul 5 11:06:51.408: INFO: Pod "pod-subpath-test-secret-xkht": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031507455s Jul 5 11:06:53.412: INFO: Pod "pod-subpath-test-secret-xkht": Phase="Running", Reason="", readiness=false. Elapsed: 8.035931139s Jul 5 11:06:55.416: INFO: Pod "pod-subpath-test-secret-xkht": Phase="Running", Reason="", readiness=false. Elapsed: 10.040012379s Jul 5 11:06:57.421: INFO: Pod "pod-subpath-test-secret-xkht": Phase="Running", Reason="", readiness=false. Elapsed: 12.044519467s Jul 5 11:06:59.425: INFO: Pod "pod-subpath-test-secret-xkht": Phase="Running", Reason="", readiness=false. Elapsed: 14.048516201s Jul 5 11:07:01.428: INFO: Pod "pod-subpath-test-secret-xkht": Phase="Running", Reason="", readiness=false. Elapsed: 16.052045618s Jul 5 11:07:03.433: INFO: Pod "pod-subpath-test-secret-xkht": Phase="Running", Reason="", readiness=false. Elapsed: 18.056675764s Jul 5 11:07:05.437: INFO: Pod "pod-subpath-test-secret-xkht": Phase="Running", Reason="", readiness=false. Elapsed: 20.061051089s Jul 5 11:07:07.441: INFO: Pod "pod-subpath-test-secret-xkht": Phase="Running", Reason="", readiness=false. Elapsed: 22.0644667s Jul 5 11:07:09.445: INFO: Pod "pod-subpath-test-secret-xkht": Phase="Running", Reason="", readiness=false. Elapsed: 24.069038175s Jul 5 11:07:11.450: INFO: Pod "pod-subpath-test-secret-xkht": Phase="Running", Reason="", readiness=false. Elapsed: 26.073622504s Jul 5 11:07:13.453: INFO: Pod "pod-subpath-test-secret-xkht": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 28.077111334s STEP: Saw pod success Jul 5 11:07:13.453: INFO: Pod "pod-subpath-test-secret-xkht" satisfied condition "success or failure" Jul 5 11:07:13.456: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-secret-xkht container test-container-subpath-secret-xkht: STEP: delete the pod Jul 5 11:07:13.496: INFO: Waiting for pod pod-subpath-test-secret-xkht to disappear Jul 5 11:07:13.553: INFO: Pod pod-subpath-test-secret-xkht no longer exists STEP: Deleting pod pod-subpath-test-secret-xkht Jul 5 11:07:13.553: INFO: Deleting pod "pod-subpath-test-secret-xkht" in namespace "e2e-tests-subpath-d7hkv" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 5 11:07:13.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-d7hkv" for this suite. Jul 5 11:07:19.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:07:19.597: INFO: namespace: e2e-tests-subpath-d7hkv, resource: bindings, ignored listing per whitelist Jul 5 11:07:19.637: INFO: namespace e2e-tests-subpath-d7hkv deletion completed in 6.07886984s • [SLOW TEST:34.410 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 11:07:19.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jul 5 11:07:19.804: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 5 11:07:27.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-rrl9f" for this suite. 
Jul 5 11:07:48.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:07:48.094: INFO: namespace: e2e-tests-init-container-rrl9f, resource: bindings, ignored listing per whitelist Jul 5 11:07:48.122: INFO: namespace e2e-tests-init-container-rrl9f deletion completed in 20.163856161s • [SLOW TEST:28.484 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 11:07:48.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jul 5 11:07:56.257: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 5 11:07:56.268: INFO: Pod pod-with-prestop-exec-hook still exists Jul 5 11:07:58.269: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 5 11:07:58.273: INFO: Pod pod-with-prestop-exec-hook still exists Jul 5 11:08:00.269: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 5 11:08:00.273: INFO: Pod pod-with-prestop-exec-hook still exists Jul 5 11:08:02.269: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 5 11:08:02.273: INFO: Pod pod-with-prestop-exec-hook still exists Jul 5 11:08:04.269: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 5 11:08:04.273: INFO: Pod pod-with-prestop-exec-hook still exists Jul 5 11:08:06.269: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 5 11:08:06.273: INFO: Pod pod-with-prestop-exec-hook still exists Jul 5 11:08:08.269: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 5 11:08:08.273: INFO: Pod pod-with-prestop-exec-hook still exists Jul 5 11:08:10.269: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 5 11:08:10.273: INFO: Pod pod-with-prestop-exec-hook still exists Jul 5 11:08:12.269: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 5 11:08:12.273: INFO: Pod pod-with-prestop-exec-hook still exists Jul 5 11:08:14.269: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 5 11:08:14.273: INFO: Pod pod-with-prestop-exec-hook still exists Jul 5 11:08:16.269: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 5 11:08:16.273: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container 
Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 5 11:08:16.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-6hsnk" for this suite. Jul 5 11:08:38.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:08:38.331: INFO: namespace: e2e-tests-container-lifecycle-hook-6hsnk, resource: bindings, ignored listing per whitelist Jul 5 11:08:38.370: INFO: namespace e2e-tests-container-lifecycle-hook-6hsnk deletion completed in 22.085773625s • [SLOW TEST:50.248 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 11:08:38.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jul 5 11:08:38.553: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 5 11:08:38.555: INFO: Number of nodes with available pods: 0 Jul 5 11:08:38.556: INFO: Node hunter-worker is running more than one daemon pod Jul 5 11:08:39.585: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 5 11:08:39.588: INFO: Number of nodes with available pods: 0 Jul 5 11:08:39.588: INFO: Node hunter-worker is running more than one daemon pod Jul 5 11:08:40.560: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 5 11:08:40.563: INFO: Number of nodes with available pods: 0 Jul 5 11:08:40.563: INFO: Node hunter-worker is running more than one daemon pod Jul 5 11:08:41.634: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 5 11:08:41.648: INFO: Number of nodes with available pods: 0 Jul 5 11:08:41.648: INFO: Node hunter-worker is running more than one daemon pod Jul 5 11:08:42.561: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 5 11:08:42.565: INFO: Number of nodes with available pods: 1 Jul 5 11:08:42.565: INFO: Node hunter-worker is running more than one daemon pod Jul 5 11:08:43.561: INFO: DaemonSet pods can't tolerate node hunter-control-plane with 
taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 5 11:08:43.564: INFO: Number of nodes with available pods: 2 Jul 5 11:08:43.564: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Jul 5 11:08:43.582: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 5 11:08:43.587: INFO: Number of nodes with available pods: 2 Jul 5 11:08:43.587: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-vprlz, will wait for the garbage collector to delete the pods Jul 5 11:08:44.676: INFO: Deleting DaemonSet.extensions daemon-set took: 6.961538ms Jul 5 11:08:44.776: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.321732ms Jul 5 11:08:53.679: INFO: Number of nodes with available pods: 0 Jul 5 11:08:53.679: INFO: Number of running nodes: 0, number of available pods: 0 Jul 5 11:08:53.682: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-vprlz/daemonsets","resourceVersion":"222771"},"items":null} Jul 5 11:08:53.684: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-vprlz/pods","resourceVersion":"222771"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 5 11:08:53.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "e2e-tests-daemonsets-vprlz" for this suite. Jul 5 11:08:59.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:08:59.759: INFO: namespace: e2e-tests-daemonsets-vprlz, resource: bindings, ignored listing per whitelist Jul 5 11:08:59.822: INFO: namespace e2e-tests-daemonsets-vprlz deletion completed in 6.093143052s • [SLOW TEST:21.452 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 11:08:59.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 5 11:09:03.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-26s6f" for this suite. 
Jul 5 11:09:43.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:09:44.042: INFO: namespace: e2e-tests-kubelet-test-26s6f, resource: bindings, ignored listing per whitelist Jul 5 11:09:44.044: INFO: namespace e2e-tests-kubelet-test-26s6f deletion completed in 40.086947133s • [SLOW TEST:44.222 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 11:09:44.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jul 5 11:09:48.836: INFO: Successfully updated pod "pod-update-07ca1792-beb0-11ea-9e48-0242ac110017" STEP: verifying the updated pod is in kubernetes Jul 5 11:09:49.182: INFO: Pod update OK [AfterEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 5 11:09:49.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-25nmv" for this suite. Jul 5 11:10:11.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:10:11.242: INFO: namespace: e2e-tests-pods-25nmv, resource: bindings, ignored listing per whitelist Jul 5 11:10:11.300: INFO: namespace e2e-tests-pods-25nmv deletion completed in 22.113434173s • [SLOW TEST:27.255 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 11:10:11.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-9nhxt STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 5 11:10:11.398: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jul 5 11:10:33.570: INFO: 
ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.134:8080/dial?request=hostName&protocol=udp&host=10.244.1.74&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-9nhxt PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 5 11:10:33.570: INFO: >>> kubeConfig: /root/.kube/config I0705 11:10:33.608107 6 log.go:172] (0xc0009e7ce0) (0xc000398460) Create stream I0705 11:10:33.608152 6 log.go:172] (0xc0009e7ce0) (0xc000398460) Stream added, broadcasting: 1 I0705 11:10:33.610318 6 log.go:172] (0xc0009e7ce0) Reply frame received for 1 I0705 11:10:33.610369 6 log.go:172] (0xc0009e7ce0) (0xc000ffe8c0) Create stream I0705 11:10:33.610386 6 log.go:172] (0xc0009e7ce0) (0xc000ffe8c0) Stream added, broadcasting: 3 I0705 11:10:33.611385 6 log.go:172] (0xc0009e7ce0) Reply frame received for 3 I0705 11:10:33.611444 6 log.go:172] (0xc0009e7ce0) (0xc000ffe960) Create stream I0705 11:10:33.611460 6 log.go:172] (0xc0009e7ce0) (0xc000ffe960) Stream added, broadcasting: 5 I0705 11:10:33.612465 6 log.go:172] (0xc0009e7ce0) Reply frame received for 5 I0705 11:10:33.702973 6 log.go:172] (0xc0009e7ce0) Data frame received for 3 I0705 11:10:33.703006 6 log.go:172] (0xc000ffe8c0) (3) Data frame handling I0705 11:10:33.703025 6 log.go:172] (0xc000ffe8c0) (3) Data frame sent I0705 11:10:33.703565 6 log.go:172] (0xc0009e7ce0) Data frame received for 3 I0705 11:10:33.703611 6 log.go:172] (0xc000ffe8c0) (3) Data frame handling I0705 11:10:33.703654 6 log.go:172] (0xc0009e7ce0) Data frame received for 5 I0705 11:10:33.703670 6 log.go:172] (0xc000ffe960) (5) Data frame handling I0705 11:10:33.706100 6 log.go:172] (0xc0009e7ce0) Data frame received for 1 I0705 11:10:33.706128 6 log.go:172] (0xc000398460) (1) Data frame handling I0705 11:10:33.706157 6 log.go:172] (0xc000398460) (1) Data frame sent I0705 11:10:33.706174 6 log.go:172] (0xc0009e7ce0) (0xc000398460) Stream removed, broadcasting: 1 I0705 
11:10:33.706198 6 log.go:172] (0xc0009e7ce0) Go away received I0705 11:10:33.706295 6 log.go:172] (0xc0009e7ce0) (0xc000398460) Stream removed, broadcasting: 1 I0705 11:10:33.706331 6 log.go:172] (0xc0009e7ce0) (0xc000ffe8c0) Stream removed, broadcasting: 3 I0705 11:10:33.706344 6 log.go:172] (0xc0009e7ce0) (0xc000ffe960) Stream removed, broadcasting: 5 Jul 5 11:10:33.706: INFO: Waiting for endpoints: map[] Jul 5 11:10:33.709: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.134:8080/dial?request=hostName&protocol=udp&host=10.244.2.133&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-9nhxt PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 5 11:10:33.709: INFO: >>> kubeConfig: /root/.kube/config I0705 11:10:33.737665 6 log.go:172] (0xc0000ead10) (0xc0010717c0) Create stream I0705 11:10:33.737700 6 log.go:172] (0xc0000ead10) (0xc0010717c0) Stream added, broadcasting: 1 I0705 11:10:33.739522 6 log.go:172] (0xc0000ead10) Reply frame received for 1 I0705 11:10:33.739595 6 log.go:172] (0xc0000ead10) (0xc000398500) Create stream I0705 11:10:33.739617 6 log.go:172] (0xc0000ead10) (0xc000398500) Stream added, broadcasting: 3 I0705 11:10:33.740641 6 log.go:172] (0xc0000ead10) Reply frame received for 3 I0705 11:10:33.740683 6 log.go:172] (0xc0000ead10) (0xc001071860) Create stream I0705 11:10:33.740700 6 log.go:172] (0xc0000ead10) (0xc001071860) Stream added, broadcasting: 5 I0705 11:10:33.741947 6 log.go:172] (0xc0000ead10) Reply frame received for 5 I0705 11:10:33.815808 6 log.go:172] (0xc0000ead10) Data frame received for 3 I0705 11:10:33.815849 6 log.go:172] (0xc000398500) (3) Data frame handling I0705 11:10:33.815877 6 log.go:172] (0xc000398500) (3) Data frame sent I0705 11:10:33.816572 6 log.go:172] (0xc0000ead10) Data frame received for 3 I0705 11:10:33.816645 6 log.go:172] (0xc000398500) (3) Data frame handling I0705 11:10:33.817051 6 log.go:172] 
(0xc0000ead10) Data frame received for 5 I0705 11:10:33.817078 6 log.go:172] (0xc001071860) (5) Data frame handling I0705 11:10:33.822340 6 log.go:172] (0xc0000ead10) Data frame received for 1 I0705 11:10:33.822370 6 log.go:172] (0xc0010717c0) (1) Data frame handling I0705 11:10:33.822389 6 log.go:172] (0xc0010717c0) (1) Data frame sent I0705 11:10:33.822405 6 log.go:172] (0xc0000ead10) (0xc0010717c0) Stream removed, broadcasting: 1 I0705 11:10:33.822461 6 log.go:172] (0xc0000ead10) Go away received I0705 11:10:33.822498 6 log.go:172] (0xc0000ead10) (0xc0010717c0) Stream removed, broadcasting: 1 I0705 11:10:33.822527 6 log.go:172] (0xc0000ead10) (0xc000398500) Stream removed, broadcasting: 3 I0705 11:10:33.822549 6 log.go:172] (0xc0000ead10) (0xc001071860) Stream removed, broadcasting: 5 Jul 5 11:10:33.822: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 5 11:10:33.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-9nhxt" for this suite. 
Jul 5 11:10:55.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:10:55.702: INFO: namespace: e2e-tests-pod-network-test-9nhxt, resource: bindings, ignored listing per whitelist Jul 5 11:10:55.735: INFO: namespace e2e-tests-pod-network-test-9nhxt deletion completed in 21.909100567s • [SLOW TEST:44.435 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 11:10:55.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Jul 5 11:10:56.367: INFO: created pod pod-service-account-defaultsa Jul 5 11:10:56.367: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jul 5 11:10:56.400: INFO: created pod pod-service-account-mountsa Jul 5 11:10:56.400: INFO: pod pod-service-account-mountsa service account token volume mount: true Jul 5 11:10:56.424: INFO: created pod 
pod-service-account-nomountsa Jul 5 11:10:56.424: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jul 5 11:10:56.452: INFO: created pod pod-service-account-defaultsa-mountspec Jul 5 11:10:56.452: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jul 5 11:10:56.479: INFO: created pod pod-service-account-mountsa-mountspec Jul 5 11:10:56.479: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jul 5 11:10:56.544: INFO: created pod pod-service-account-nomountsa-mountspec Jul 5 11:10:56.544: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jul 5 11:10:56.558: INFO: created pod pod-service-account-defaultsa-nomountspec Jul 5 11:10:56.558: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jul 5 11:10:56.601: INFO: created pod pod-service-account-mountsa-nomountspec Jul 5 11:10:56.601: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jul 5 11:10:56.608: INFO: created pod pod-service-account-nomountsa-nomountspec Jul 5 11:10:56.608: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 5 11:10:56.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-wqvpc" for this suite. 
Jul 5 11:11:26.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 11:11:26.826: INFO: namespace: e2e-tests-svcaccounts-wqvpc, resource: bindings, ignored listing per whitelist
Jul 5 11:11:26.883: INFO: namespace e2e-tests-svcaccounts-wqvpc deletion completed in 30.250992943s
• [SLOW TEST:31.147 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] [sig-node] Events
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 11:11:26.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jul 5 11:11:31.075: INFO:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-450d5af8-beb0-11ea-9e48-0242ac110017,GenerateName:,Namespace:e2e-tests-events-ck4bt,SelfLink:/api/v1/namespaces/e2e-tests-events-ck4bt/pods/send-events-450d5af8-beb0-11ea-9e48-0242ac110017,UID:45108d82-beb0-11ea-a300-0242ac110004,ResourceVersion:223313,Generation:0,CreationTimestamp:2020-07-05 11:11:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 24621416,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-sj59f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sj59f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-sj59f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022cd230} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022cd250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized 
True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 11:11:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 11:11:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 11:11:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 11:11:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.2,PodIP:10.244.1.78,StartTime:2020-07-05 11:11:27 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-07-05 11:11:29 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://240c392106028b3c34c76402275cb856b9afe77e48f893acbcf89bbd45da2a73}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
STEP: checking for scheduler event about the pod
Jul 5 11:11:33.083: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jul 5 11:11:35.086: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:11:35.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-ck4bt" for this suite.
Jul 5 11:12:15.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 11:12:15.228: INFO: namespace: e2e-tests-events-ck4bt, resource: bindings, ignored listing per whitelist
Jul 5 11:12:15.243: INFO: namespace e2e-tests-events-ck4bt deletion completed in 40.129439841s
• [SLOW TEST:48.360 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 11:12:15.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0705 11:12:45.880210 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 5 11:12:45.880: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:12:45.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-4mlfn" for this suite.
Jul 5 11:12:54.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 11:12:54.597: INFO: namespace: e2e-tests-gc-4mlfn, resource: bindings, ignored listing per whitelist
Jul 5 11:12:54.630: INFO: namespace e2e-tests-gc-4mlfn deletion completed in 8.746803411s
• [SLOW TEST:39.386 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 11:12:54.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-7964c4f3-beb0-11ea-9e48-0242ac110017
STEP: Creating a pod to test consume configMaps
Jul 5 11:12:55.004: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-796ad785-beb0-11ea-9e48-0242ac110017" in namespace "e2e-tests-projected-w7h7d" to be "success or failure"
Jul 5 11:12:55.042: INFO: Pod "pod-projected-configmaps-796ad785-beb0-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 38.164822ms
Jul 5 11:12:57.114: INFO: Pod "pod-projected-configmaps-796ad785-beb0-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110729185s
Jul 5 11:12:59.126: INFO: Pod "pod-projected-configmaps-796ad785-beb0-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.122450273s
STEP: Saw pod success
Jul 5 11:12:59.126: INFO: Pod "pod-projected-configmaps-796ad785-beb0-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul 5 11:12:59.128: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-796ad785-beb0-11ea-9e48-0242ac110017 container projected-configmap-volume-test:
STEP: delete the pod
Jul 5 11:12:59.202: INFO: Waiting for pod pod-projected-configmaps-796ad785-beb0-11ea-9e48-0242ac110017 to disappear
Jul 5 11:12:59.224: INFO: Pod pod-projected-configmaps-796ad785-beb0-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:12:59.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-w7h7d" for this suite.
Jul 5 11:13:05.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 11:13:05.391: INFO: namespace: e2e-tests-projected-w7h7d, resource: bindings, ignored listing per whitelist
Jul 5 11:13:05.445: INFO: namespace e2e-tests-projected-w7h7d deletion completed in 6.218214448s
• [SLOW TEST:10.815 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 11:13:05.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:13:05.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-czwmj" for this suite.
Jul 5 11:16:31.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 11:16:31.715: INFO: namespace: e2e-tests-pods-czwmj, resource: bindings, ignored listing per whitelist
Jul 5 11:16:31.776: INFO: namespace e2e-tests-pods-czwmj deletion completed in 3m26.146162183s
• [SLOW TEST:206.331 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version
  should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 11:16:31.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 5 11:16:32.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jul 5 11:16:32.809: INFO: stderr: ""
Jul 5 11:16:32.809: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-07-05T09:49:20Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:50:51Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:16:32.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rp666" for this suite.
Jul 5 11:16:38.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 11:16:38.988: INFO: namespace: e2e-tests-kubectl-rp666, resource: bindings, ignored listing per whitelist
Jul 5 11:16:39.014: INFO: namespace e2e-tests-kubectl-rp666 deletion completed in 6.161024982s
• [SLOW TEST:7.238 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 11:16:39.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-d72j8
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jul 5 11:16:39.182: INFO: Found 0 stateful pods, waiting for 3
Jul 5 11:16:49.224: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 5 11:16:49.224: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 5 11:16:49.224: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul 5 11:16:59.187: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 5 11:16:59.187: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 5 11:16:59.187: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jul 5 11:16:59.216: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jul 5 11:17:09.286: INFO: Updating stateful set ss2
Jul 5 11:17:09.318: INFO: Waiting for Pod e2e-tests-statefulset-d72j8/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jul 5 11:17:19.997: INFO: Found 2 stateful pods, waiting for 3
Jul 5 11:17:30.002: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 5 11:17:30.002: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 5 11:17:30.002: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jul 5 11:17:30.027: INFO: Updating stateful set ss2
Jul 5 11:17:30.120: INFO: Waiting for Pod e2e-tests-statefulset-d72j8/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 5 11:17:40.129: INFO: Waiting for Pod e2e-tests-statefulset-d72j8/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 5 11:17:50.147: INFO: Updating stateful set ss2
Jul 5 11:17:50.158: INFO: Waiting for StatefulSet e2e-tests-statefulset-d72j8/ss2 to complete update
Jul 5 11:17:50.158: INFO: Waiting for Pod e2e-tests-statefulset-d72j8/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 5 11:18:00.166: INFO: Waiting for StatefulSet e2e-tests-statefulset-d72j8/ss2 to complete update
Jul 5 11:18:00.166: INFO: Waiting for Pod e2e-tests-statefulset-d72j8/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jul 5 11:18:10.167: INFO: Deleting all statefulset in ns e2e-tests-statefulset-d72j8
Jul 5 11:18:10.170: INFO: Scaling statefulset ss2 to 0
Jul 5 11:18:40.434: INFO: Waiting for statefulset status.replicas updated to 0
Jul 5 11:18:40.437: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:18:40.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-d72j8" for this suite.
Jul 5 11:18:48.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 11:18:48.538: INFO: namespace: e2e-tests-statefulset-d72j8, resource: bindings, ignored listing per whitelist
Jul 5 11:18:48.573: INFO: namespace e2e-tests-statefulset-d72j8 deletion completed in 8.118877188s
• [SLOW TEST:129.558 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch
  should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 11:18:48.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jul 5 11:18:49.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-44sn5'
Jul 5 11:18:53.049: INFO:
stderr: ""
Jul 5 11:18:53.049: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jul 5 11:18:54.640: INFO: Selector matched 1 pods for map[app:redis]
Jul 5 11:18:54.640: INFO: Found 0 / 1
Jul 5 11:18:55.053: INFO: Selector matched 1 pods for map[app:redis]
Jul 5 11:18:55.053: INFO: Found 0 / 1
Jul 5 11:18:56.053: INFO: Selector matched 1 pods for map[app:redis]
Jul 5 11:18:56.053: INFO: Found 0 / 1
Jul 5 11:18:57.054: INFO: Selector matched 1 pods for map[app:redis]
Jul 5 11:18:57.054: INFO: Found 0 / 1
Jul 5 11:18:58.054: INFO: Selector matched 1 pods for map[app:redis]
Jul 5 11:18:58.054: INFO: Found 1 / 1
Jul 5 11:18:58.054: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Jul 5 11:18:58.057: INFO: Selector matched 1 pods for map[app:redis]
Jul 5 11:18:58.057: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jul 5 11:18:58.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-k7ckj --namespace=e2e-tests-kubectl-44sn5 -p {"metadata":{"annotations":{"x":"y"}}}'
Jul 5 11:18:58.160: INFO: stderr: ""
Jul 5 11:18:58.160: INFO: stdout: "pod/redis-master-k7ckj patched\n"
STEP: checking annotations
Jul 5 11:18:58.167: INFO: Selector matched 1 pods for map[app:redis]
Jul 5 11:18:58.167: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:18:58.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-44sn5" for this suite.
Jul 5 11:19:20.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 11:19:20.254: INFO: namespace: e2e-tests-kubectl-44sn5, resource: bindings, ignored listing per whitelist
Jul 5 11:19:20.284: INFO: namespace e2e-tests-kubectl-44sn5 deletion completed in 22.113395952s
• [SLOW TEST:31.711 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 11:19:20.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 5 11:19:20.389: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5f30cbd0-beb1-11ea-9e48-0242ac110017" in namespace "e2e-tests-projected-m2fr7" to be "success or failure"
Jul 5 11:19:20.393: INFO: Pod
"downwardapi-volume-5f30cbd0-beb1-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.542085ms
Jul 5 11:19:22.455: INFO: Pod "downwardapi-volume-5f30cbd0-beb1-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065100091s
Jul 5 11:19:24.560: INFO: Pod "downwardapi-volume-5f30cbd0-beb1-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.170660265s
STEP: Saw pod success
Jul 5 11:19:24.560: INFO: Pod "downwardapi-volume-5f30cbd0-beb1-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul 5 11:19:24.563: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-5f30cbd0-beb1-11ea-9e48-0242ac110017 container client-container:
STEP: delete the pod
Jul 5 11:19:24.604: INFO: Waiting for pod downwardapi-volume-5f30cbd0-beb1-11ea-9e48-0242ac110017 to disappear
Jul 5 11:19:24.756: INFO: Pod downwardapi-volume-5f30cbd0-beb1-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:19:24.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-m2fr7" for this suite.
Jul 5 11:19:32.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 11:19:32.815: INFO: namespace: e2e-tests-projected-m2fr7, resource: bindings, ignored listing per whitelist
Jul 5 11:19:32.866: INFO: namespace e2e-tests-projected-m2fr7 deletion completed in 8.106729135s
• [SLOW TEST:12.582 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-node] Downward API
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 11:19:32.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jul 5 11:19:33.104: INFO: Waiting up to 5m0s for pod "downward-api-66c3d638-beb1-11ea-9e48-0242ac110017" in namespace "e2e-tests-downward-api-fmfgt" to be "success or failure"
Jul 5 11:19:33.120: INFO: Pod "downward-api-66c3d638-beb1-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false.
Elapsed: 15.177712ms
Jul 5 11:19:35.124: INFO: Pod "downward-api-66c3d638-beb1-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019169284s
Jul 5 11:19:37.271: INFO: Pod "downward-api-66c3d638-beb1-11ea-9e48-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.166126711s
Jul 5 11:19:39.275: INFO: Pod "downward-api-66c3d638-beb1-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.170741444s
STEP: Saw pod success
Jul 5 11:19:39.275: INFO: Pod "downward-api-66c3d638-beb1-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul 5 11:19:39.279: INFO: Trying to get logs from node hunter-worker2 pod downward-api-66c3d638-beb1-11ea-9e48-0242ac110017 container dapi-container:
STEP: delete the pod
Jul 5 11:19:39.317: INFO: Waiting for pod downward-api-66c3d638-beb1-11ea-9e48-0242ac110017 to disappear
Jul 5 11:19:39.336: INFO: Pod downward-api-66c3d638-beb1-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:19:39.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fmfgt" for this suite.
Jul 5 11:19:45.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 11:19:45.456: INFO: namespace: e2e-tests-downward-api-fmfgt, resource: bindings, ignored listing per whitelist
Jul 5 11:19:45.474: INFO: namespace e2e-tests-downward-api-fmfgt deletion completed in 6.134053406s
• [SLOW TEST:12.607 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 11:19:45.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-6e3ba30a-beb1-11ea-9e48-0242ac110017
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:19:51.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-trt5v" for this suite.
Jul 5 11:20:13.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:20:14.014: INFO: namespace: e2e-tests-configmap-trt5v, resource: bindings, ignored listing per whitelist Jul 5 11:20:14.024: INFO: namespace e2e-tests-configmap-trt5v deletion completed in 22.250369343s • [SLOW TEST:28.550 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 11:20:14.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jul 5 11:20:14.144: INFO: PodSpec: initContainers in spec.initContainers Jul 5 11:21:06.004: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-7f3d733c-beb1-11ea-9e48-0242ac110017", GenerateName:"", 
Namespace:"e2e-tests-init-container-zsxr8", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-zsxr8/pods/pod-init-7f3d733c-beb1-11ea-9e48-0242ac110017", UID:"7f40f213-beb1-11ea-a300-0242ac110004", ResourceVersion:"224923", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729544814, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"144395409"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-bv6k6", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00176eb40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bv6k6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bv6k6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bv6k6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001a555e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001354360), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc001a556c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001a556e0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001a556e8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001a556ec)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729544814, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729544814, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729544814, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729544814, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.2.148", StartTime:(*v1.Time)(0xc000c30e60), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(0xc000c30ea0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00035ae00)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://7348f69bd80e26a84b23103279cf0a2ca4ff1fb8e0cd4347049f05468f5c572a"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000c30ec0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000c30e80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 5 11:21:06.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-zsxr8" for this suite. 
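The pod dump above is dense, but the spec it serializes is small: two init containers (`init1` running `/bin/false`, `init2` running `/bin/true`) ahead of a pause app container, with `RestartPolicy: Always`. Reconstructed as a manifest from the fields visible in the dump:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo            # the test's actual name is generated
  labels:
    name: foo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]      # always exits non-zero, so init2 and run1 never start
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:
      limits:
        cpu: 100m
        memory: "52428800"       # requests default to limits -> QOSClass Guaranteed
```

Because the restart policy is `Always`, the kubelet keeps restarting `init1` with backoff (the status shows `RestartCount: 3`), the pod stays `Pending` with `Initialized=False`, and the app container is never started — which is exactly what the test asserts.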
Jul 5 11:21:28.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:21:28.039: INFO: namespace: e2e-tests-init-container-zsxr8, resource: bindings, ignored listing per whitelist Jul 5 11:21:28.302: INFO: namespace e2e-tests-init-container-zsxr8 deletion completed in 22.289401269s • [SLOW TEST:74.278 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 11:21:28.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 5 11:21:28.474: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jul 5 11:21:33.555: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 5 11:21:33.556: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jul 5 11:21:33.736: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-644q7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-644q7/deployments/test-cleanup-deployment,UID:ae936ea9-beb1-11ea-a300-0242ac110004,ResourceVersion:225012,Generation:1,CreationTimestamp:2020-07-05 11:21:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Jul 5 11:21:33.739: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 5 11:21:33.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-644q7" for this suite. 
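The key field in the deployment dump above is `RevisionHistoryLimit:*0` with a `RollingUpdate` strategy — that is what makes old ReplicaSets eligible for immediate garbage collection. A sketch of the equivalent manifest, reconstructed from the fields in the dump (the name and image come from the dump; everything else is standard boilerplate):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 0        # keep zero old ReplicaSets; delete them as soon as they scale to 0
  selector:
    matchLabels:
      name: cleanup-pod
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

With `revisionHistoryLimit: 0`, "Waiting for deployment test-cleanup-deployment history to be cleaned up" succeeds once the superseded ReplicaSet is removed rather than retained for rollback.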
Jul 5 11:21:42.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:21:42.160: INFO: namespace: e2e-tests-deployment-644q7, resource: bindings, ignored listing per whitelist Jul 5 11:21:42.274: INFO: namespace e2e-tests-deployment-644q7 deletion completed in 8.498994037s • [SLOW TEST:13.971 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 11:21:42.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-b3d56a96-beb1-11ea-9e48-0242ac110017 STEP: Creating a pod to test consume configMaps Jul 5 11:21:42.394: INFO: Waiting up to 5m0s for pod "pod-configmaps-b3d61e4a-beb1-11ea-9e48-0242ac110017" in namespace "e2e-tests-configmap-rx5sc" to be "success or failure" Jul 5 11:21:42.413: INFO: Pod "pod-configmaps-b3d61e4a-beb1-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.242515ms Jul 5 11:21:44.418: INFO: Pod "pod-configmaps-b3d61e4a-beb1-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024666774s Jul 5 11:21:46.422: INFO: Pod "pod-configmaps-b3d61e4a-beb1-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028486356s STEP: Saw pod success Jul 5 11:21:46.422: INFO: Pod "pod-configmaps-b3d61e4a-beb1-11ea-9e48-0242ac110017" satisfied condition "success or failure" Jul 5 11:21:46.425: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-b3d61e4a-beb1-11ea-9e48-0242ac110017 container configmap-volume-test: STEP: delete the pod Jul 5 11:21:46.447: INFO: Waiting for pod pod-configmaps-b3d61e4a-beb1-11ea-9e48-0242ac110017 to disappear Jul 5 11:21:46.451: INFO: Pod pod-configmaps-b3d61e4a-beb1-11ea-9e48-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 5 11:21:46.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-rx5sc" for this suite. 
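The ConfigMap-volume test above follows the common "success or failure" pattern: a short-lived pod mounts the ConfigMap, prints a file's contents, and exits; the framework then compares the container log against the expected value. A minimal sketch (the ConfigMap name and key are illustrative, not the test's generated ones):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-demo       # illustrative name
spec:
  restartPolicy: Never           # pod must reach Succeeded for the test to pass
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/configmap-volume/data-1"]   # "data-1" is a hypothetical key
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-demo       # must exist in the same namespace
```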
Jul 5 11:21:52.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:21:52.502: INFO: namespace: e2e-tests-configmap-rx5sc, resource: bindings, ignored listing per whitelist Jul 5 11:21:52.560: INFO: namespace e2e-tests-configmap-rx5sc deletion completed in 6.106413279s • [SLOW TEST:10.286 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 11:21:52.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Jul 5 11:21:52.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-66llj' Jul 5 11:21:53.041: INFO: stderr: "" Jul 5 11:21:53.041: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. 
Jul 5 11:21:54.046: INFO: Selector matched 1 pods for map[app:redis] Jul 5 11:21:54.046: INFO: Found 0 / 1 Jul 5 11:21:55.125: INFO: Selector matched 1 pods for map[app:redis] Jul 5 11:21:55.125: INFO: Found 0 / 1 Jul 5 11:21:56.053: INFO: Selector matched 1 pods for map[app:redis] Jul 5 11:21:56.053: INFO: Found 0 / 1 Jul 5 11:21:57.046: INFO: Selector matched 1 pods for map[app:redis] Jul 5 11:21:57.046: INFO: Found 1 / 1 Jul 5 11:21:57.046: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jul 5 11:21:57.051: INFO: Selector matched 1 pods for map[app:redis] Jul 5 11:21:57.051: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Jul 5 11:21:57.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-bw9f2 redis-master --namespace=e2e-tests-kubectl-66llj' Jul 5 11:21:57.167: INFO: stderr: "" Jul 5 11:21:57.167: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 05 Jul 11:21:56.215 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 05 Jul 11:21:56.215 # Server started, Redis version 3.2.12\n1:M 05 Jul 11:21:56.215 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 05 Jul 11:21:56.215 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Jul 5 11:21:57.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-bw9f2 redis-master --namespace=e2e-tests-kubectl-66llj --tail=1' Jul 5 11:21:57.284: INFO: stderr: "" Jul 5 11:21:57.284: INFO: stdout: "1:M 05 Jul 11:21:56.215 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Jul 5 11:21:57.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-bw9f2 redis-master --namespace=e2e-tests-kubectl-66llj --limit-bytes=1' Jul 5 11:21:57.393: INFO: stderr: "" Jul 5 11:21:57.393: INFO: stdout: " " STEP: exposing timestamps Jul 5 11:21:57.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-bw9f2 redis-master --namespace=e2e-tests-kubectl-66llj --tail=1 --timestamps' Jul 5 11:21:57.499: INFO: stderr: "" Jul 5 11:21:57.499: INFO: stdout: "2020-07-05T11:21:56.215368989Z 1:M 05 Jul 11:21:56.215 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Jul 5 11:21:59.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-bw9f2 redis-master --namespace=e2e-tests-kubectl-66llj --since=1s' Jul 5 11:22:00.118: INFO: stderr: "" Jul 5 11:22:00.118: INFO: stdout: "" Jul 5 11:22:00.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-bw9f2 redis-master --namespace=e2e-tests-kubectl-66llj --since=24h' Jul 5 11:22:00.235: INFO: stderr: "" Jul 5 11:22:00.235: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 05 Jul 11:21:56.215 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 05 Jul 11:21:56.215 # Server started, Redis version 3.2.12\n1:M 05 Jul 11:21:56.215 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 05 Jul 11:21:56.215 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Jul 5 11:22:00.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-66llj' Jul 5 11:22:00.363: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jul 5 11:22:00.363: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Jul 5 11:22:00.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-66llj' Jul 5 11:22:00.467: INFO: stderr: "No resources found.\n" Jul 5 11:22:00.467: INFO: stdout: "" Jul 5 11:22:00.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-66llj -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 5 11:22:00.563: INFO: stderr: "" Jul 5 11:22:00.563: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 5 11:22:00.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-66llj" for this suite. 
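Consolidated, the log-filtering invocations the kubectl test runs above reduce to four flags. Note the original output uses the deprecated `kubectl log` alias for some calls; current kubectl spells it `logs`. A command summary (pod, container, and namespace names taken from the run above; these commands assume a live cluster):

```shell
# full container log
kubectl logs redis-master-bw9f2 redis-master -n e2e-tests-kubectl-66llj
# only the last line
kubectl logs redis-master-bw9f2 redis-master -n e2e-tests-kubectl-66llj --tail=1
# only the first byte of output
kubectl logs redis-master-bw9f2 redis-master -n e2e-tests-kubectl-66llj --limit-bytes=1
# prefix each line with an RFC3339 timestamp
kubectl logs redis-master-bw9f2 redis-master -n e2e-tests-kubectl-66llj --tail=1 --timestamps
# restrict to a time window (empty here, since the pod logged nothing in the last second)
kubectl logs redis-master-bw9f2 redis-master -n e2e-tests-kubectl-66llj --since=1s
```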
Jul 5 11:22:22.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:22:23.143: INFO: namespace: e2e-tests-kubectl-66llj, resource: bindings, ignored listing per whitelist Jul 5 11:22:23.167: INFO: namespace e2e-tests-kubectl-66llj deletion completed in 22.600892911s • [SLOW TEST:30.607 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 11:22:23.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 5 11:22:23.461: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cc50d673-beb1-11ea-9e48-0242ac110017" in namespace "e2e-tests-projected-lrdw7" to be "success or failure" Jul 5 11:22:23.524: INFO: Pod 
"downwardapi-volume-cc50d673-beb1-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 62.891093ms Jul 5 11:22:25.528: INFO: Pod "downwardapi-volume-cc50d673-beb1-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066862923s Jul 5 11:22:27.737: INFO: Pod "downwardapi-volume-cc50d673-beb1-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.275125692s Jul 5 11:22:29.742: INFO: Pod "downwardapi-volume-cc50d673-beb1-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.280174635s STEP: Saw pod success Jul 5 11:22:29.742: INFO: Pod "downwardapi-volume-cc50d673-beb1-11ea-9e48-0242ac110017" satisfied condition "success or failure" Jul 5 11:22:29.745: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-cc50d673-beb1-11ea-9e48-0242ac110017 container client-container: STEP: delete the pod Jul 5 11:22:30.149: INFO: Waiting for pod downwardapi-volume-cc50d673-beb1-11ea-9e48-0242ac110017 to disappear Jul 5 11:22:30.185: INFO: Pod downwardapi-volume-cc50d673-beb1-11ea-9e48-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 5 11:22:30.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-lrdw7" for this suite. 
Jul 5 11:22:36.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:22:36.245: INFO: namespace: e2e-tests-projected-lrdw7, resource: bindings, ignored listing per whitelist Jul 5 11:22:36.288: INFO: namespace e2e-tests-projected-lrdw7 deletion completed in 6.099852617s • [SLOW TEST:13.121 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 11:22:36.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Jul 5 11:22:36.599: INFO: Waiting up to 5m0s for pod "pod-d42293ce-beb1-11ea-9e48-0242ac110017" in namespace "e2e-tests-emptydir-wtbdp" to be "success or failure" Jul 5 11:22:36.745: INFO: Pod "pod-d42293ce-beb1-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 145.891754ms Jul 5 11:22:38.748: INFO: Pod "pod-d42293ce-beb1-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.149518892s Jul 5 11:22:40.753: INFO: Pod "pod-d42293ce-beb1-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154634934s Jul 5 11:22:42.757: INFO: Pod "pod-d42293ce-beb1-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.158652031s STEP: Saw pod success Jul 5 11:22:42.757: INFO: Pod "pod-d42293ce-beb1-11ea-9e48-0242ac110017" satisfied condition "success or failure" Jul 5 11:22:42.759: INFO: Trying to get logs from node hunter-worker2 pod pod-d42293ce-beb1-11ea-9e48-0242ac110017 container test-container: STEP: delete the pod Jul 5 11:22:42.777: INFO: Waiting for pod pod-d42293ce-beb1-11ea-9e48-0242ac110017 to disappear Jul 5 11:22:42.849: INFO: Pod pod-d42293ce-beb1-11ea-9e48-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 5 11:22:42.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wtbdp" for this suite. 
Jul 5 11:22:48.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:22:48.921: INFO: namespace: e2e-tests-emptydir-wtbdp, resource: bindings, ignored listing per whitelist Jul 5 11:22:48.954: INFO: namespace e2e-tests-emptydir-wtbdp deletion completed in 6.100134696s • [SLOW TEST:12.664 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 11:22:48.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-db922037-beb1-11ea-9e48-0242ac110017 STEP: Creating a pod to test consume secrets Jul 5 11:22:49.164: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-db945606-beb1-11ea-9e48-0242ac110017" in namespace "e2e-tests-projected-fl7t7" to be "success or failure" Jul 5 11:22:49.367: INFO: Pod "pod-projected-secrets-db945606-beb1-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 203.520822ms Jul 5 11:22:51.372: INFO: Pod "pod-projected-secrets-db945606-beb1-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207646805s Jul 5 11:22:53.375: INFO: Pod "pod-projected-secrets-db945606-beb1-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.211149302s Jul 5 11:22:55.379: INFO: Pod "pod-projected-secrets-db945606-beb1-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.215100605s STEP: Saw pod success Jul 5 11:22:55.379: INFO: Pod "pod-projected-secrets-db945606-beb1-11ea-9e48-0242ac110017" satisfied condition "success or failure" Jul 5 11:22:55.382: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-db945606-beb1-11ea-9e48-0242ac110017 container projected-secret-volume-test: STEP: delete the pod Jul 5 11:22:55.503: INFO: Waiting for pod pod-projected-secrets-db945606-beb1-11ea-9e48-0242ac110017 to disappear Jul 5 11:22:55.536: INFO: Pod pod-projected-secrets-db945606-beb1-11ea-9e48-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 5 11:22:55.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-fl7t7" for this suite. 
Jul 5 11:23:01.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:23:01.670: INFO: namespace: e2e-tests-projected-fl7t7, resource: bindings, ignored listing per whitelist Jul 5 11:23:01.720: INFO: namespace e2e-tests-projected-fl7t7 deletion completed in 6.179483637s • [SLOW TEST:12.766 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 11:23:01.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Jul 5 11:23:01.801: INFO: Waiting up to 5m0s for pod "pod-e32acdb2-beb1-11ea-9e48-0242ac110017" in namespace "e2e-tests-emptydir-d8h92" to be "success or failure" Jul 5 11:23:01.844: INFO: Pod "pod-e32acdb2-beb1-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 42.524986ms Jul 5 11:23:03.849: INFO: Pod "pod-e32acdb2-beb1-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.047269894s Jul 5 11:23:05.854: INFO: Pod "pod-e32acdb2-beb1-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052344452s Jul 5 11:23:07.858: INFO: Pod "pod-e32acdb2-beb1-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.05657969s STEP: Saw pod success Jul 5 11:23:07.858: INFO: Pod "pod-e32acdb2-beb1-11ea-9e48-0242ac110017" satisfied condition "success or failure" Jul 5 11:23:07.861: INFO: Trying to get logs from node hunter-worker2 pod pod-e32acdb2-beb1-11ea-9e48-0242ac110017 container test-container: STEP: delete the pod Jul 5 11:23:07.902: INFO: Waiting for pod pod-e32acdb2-beb1-11ea-9e48-0242ac110017 to disappear Jul 5 11:23:07.914: INFO: Pod pod-e32acdb2-beb1-11ea-9e48-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 5 11:23:07.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-d8h92" for this suite. 
Jul 5 11:23:15.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:23:16.028: INFO: namespace: e2e-tests-emptydir-d8h92, resource: bindings, ignored listing per whitelist Jul 5 11:23:16.035: INFO: namespace e2e-tests-emptydir-d8h92 deletion completed in 8.094767037s • [SLOW TEST:14.315 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 11:23:16.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Jul 5 11:23:16.286: INFO: Waiting up to 5m0s for pod "pod-ebc463b0-beb1-11ea-9e48-0242ac110017" in namespace "e2e-tests-emptydir-bj9n5" to be "success or failure" Jul 5 11:23:16.568: INFO: Pod "pod-ebc463b0-beb1-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 281.366503ms Jul 5 11:23:18.572: INFO: Pod "pod-ebc463b0-beb1-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.285303422s Jul 5 11:23:20.575: INFO: Pod "pod-ebc463b0-beb1-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.289202084s Jul 5 11:23:22.604: INFO: Pod "pod-ebc463b0-beb1-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.317532066s STEP: Saw pod success Jul 5 11:23:22.604: INFO: Pod "pod-ebc463b0-beb1-11ea-9e48-0242ac110017" satisfied condition "success or failure" Jul 5 11:23:22.607: INFO: Trying to get logs from node hunter-worker2 pod pod-ebc463b0-beb1-11ea-9e48-0242ac110017 container test-container: STEP: delete the pod Jul 5 11:23:22.647: INFO: Waiting for pod pod-ebc463b0-beb1-11ea-9e48-0242ac110017 to disappear Jul 5 11:23:22.651: INFO: Pod pod-ebc463b0-beb1-11ea-9e48-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 5 11:23:22.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-bj9n5" for this suite. 
Jul 5 11:23:28.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:23:28.845: INFO: namespace: e2e-tests-emptydir-bj9n5, resource: bindings, ignored listing per whitelist Jul 5 11:23:28.858: INFO: namespace e2e-tests-emptydir-bj9n5 deletion completed in 6.203407712s • [SLOW TEST:12.823 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 11:23:28.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-f358a4b9-beb1-11ea-9e48-0242ac110017 STEP: Creating a pod to test consume secrets Jul 5 11:23:28.988: INFO: Waiting up to 5m0s for pod "pod-secrets-f35dcd6b-beb1-11ea-9e48-0242ac110017" in namespace "e2e-tests-secrets-7mkn7" to be "success or failure" Jul 5 11:23:28.992: INFO: Pod "pod-secrets-f35dcd6b-beb1-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.551331ms Jul 5 11:23:30.996: INFO: Pod "pod-secrets-f35dcd6b-beb1-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007372594s Jul 5 11:23:32.999: INFO: Pod "pod-secrets-f35dcd6b-beb1-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010868027s Jul 5 11:23:35.003: INFO: Pod "pod-secrets-f35dcd6b-beb1-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014767796s STEP: Saw pod success Jul 5 11:23:35.003: INFO: Pod "pod-secrets-f35dcd6b-beb1-11ea-9e48-0242ac110017" satisfied condition "success or failure" Jul 5 11:23:35.005: INFO: Trying to get logs from node hunter-worker pod pod-secrets-f35dcd6b-beb1-11ea-9e48-0242ac110017 container secret-volume-test: STEP: delete the pod Jul 5 11:23:35.126: INFO: Waiting for pod pod-secrets-f35dcd6b-beb1-11ea-9e48-0242ac110017 to disappear Jul 5 11:23:35.160: INFO: Pod pod-secrets-f35dcd6b-beb1-11ea-9e48-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 5 11:23:35.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-7mkn7" for this suite. 
Jul 5 11:23:41.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:23:41.211: INFO: namespace: e2e-tests-secrets-7mkn7, resource: bindings, ignored listing per whitelist Jul 5 11:23:41.253: INFO: namespace e2e-tests-secrets-7mkn7 deletion completed in 6.089498364s • [SLOW TEST:12.395 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 11:23:41.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 5 11:23:51.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-9jmkq" for this suite. 
Jul 5 11:24:13.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:24:13.516: INFO: namespace: e2e-tests-replication-controller-9jmkq, resource: bindings, ignored listing per whitelist Jul 5 11:24:13.560: INFO: namespace e2e-tests-replication-controller-9jmkq deletion completed in 22.102409408s • [SLOW TEST:32.307 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 11:24:13.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jul 5 11:24:13.727: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-d7mbv,SelfLink:/api/v1/namespaces/e2e-tests-watch-d7mbv/configmaps/e2e-watch-test-resource-version,UID:0e02be24-beb2-11ea-a300-0242ac110004,ResourceVersion:225589,Generation:0,CreationTimestamp:2020-07-05 11:24:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 5 11:24:13.727: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-d7mbv,SelfLink:/api/v1/namespaces/e2e-tests-watch-d7mbv/configmaps/e2e-watch-test-resource-version,UID:0e02be24-beb2-11ea-a300-0242ac110004,ResourceVersion:225590,Generation:0,CreationTimestamp:2020-07-05 11:24:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 5 11:24:13.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-d7mbv" for this suite. 
Jul 5 11:24:19.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:24:19.822: INFO: namespace: e2e-tests-watch-d7mbv, resource: bindings, ignored listing per whitelist Jul 5 11:24:19.822: INFO: namespace e2e-tests-watch-d7mbv deletion completed in 6.089107192s • [SLOW TEST:6.262 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 11:24:19.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-z4h4j Jul 5 11:24:23.951: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-z4h4j STEP: checking the pod's current state and verifying that restartCount is present Jul 5 11:24:23.954: INFO: Initial restart count of pod liveness-http is 0 Jul 5 11:24:35.981: 
INFO: Restart count of pod e2e-tests-container-probe-z4h4j/liveness-http is now 1 (12.027390396s elapsed) Jul 5 11:24:58.029: INFO: Restart count of pod e2e-tests-container-probe-z4h4j/liveness-http is now 2 (34.075431994s elapsed) Jul 5 11:25:19.068: INFO: Restart count of pod e2e-tests-container-probe-z4h4j/liveness-http is now 3 (55.114149046s elapsed) Jul 5 11:25:37.106: INFO: Restart count of pod e2e-tests-container-probe-z4h4j/liveness-http is now 4 (1m13.152219095s elapsed) Jul 5 11:25:57.186: INFO: Restart count of pod e2e-tests-container-probe-z4h4j/liveness-http is now 5 (1m33.231866424s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 5 11:25:57.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-z4h4j" for this suite. Jul 5 11:26:03.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:26:03.307: INFO: namespace: e2e-tests-container-probe-z4h4j, resource: bindings, ignored listing per whitelist Jul 5 11:26:03.307: INFO: namespace e2e-tests-container-probe-z4h4j deletion completed in 6.101596613s • [SLOW TEST:103.485 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 11:26:03.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-4f699af0-beb2-11ea-9e48-0242ac110017 STEP: Creating a pod to test consume secrets Jul 5 11:26:03.510: INFO: Waiting up to 5m0s for pod "pod-secrets-4f7711fd-beb2-11ea-9e48-0242ac110017" in namespace "e2e-tests-secrets-5v7rq" to be "success or failure" Jul 5 11:26:03.529: INFO: Pod "pod-secrets-4f7711fd-beb2-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 18.784215ms Jul 5 11:26:05.533: INFO: Pod "pod-secrets-4f7711fd-beb2-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022714815s Jul 5 11:26:07.536: INFO: Pod "pod-secrets-4f7711fd-beb2-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02577565s STEP: Saw pod success Jul 5 11:26:07.536: INFO: Pod "pod-secrets-4f7711fd-beb2-11ea-9e48-0242ac110017" satisfied condition "success or failure" Jul 5 11:26:07.539: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-4f7711fd-beb2-11ea-9e48-0242ac110017 container secret-volume-test: STEP: delete the pod Jul 5 11:26:07.578: INFO: Waiting for pod pod-secrets-4f7711fd-beb2-11ea-9e48-0242ac110017 to disappear Jul 5 11:26:07.610: INFO: Pod pod-secrets-4f7711fd-beb2-11ea-9e48-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 5 11:26:07.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-5v7rq" for this suite. Jul 5 11:26:13.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:26:13.683: INFO: namespace: e2e-tests-secrets-5v7rq, resource: bindings, ignored listing per whitelist Jul 5 11:26:13.737: INFO: namespace e2e-tests-secrets-5v7rq deletion completed in 6.123160399s STEP: Destroying namespace "e2e-tests-secret-namespace-2k2rw" for this suite. 
Jul 5 11:26:19.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:26:19.779: INFO: namespace: e2e-tests-secret-namespace-2k2rw, resource: bindings, ignored listing per whitelist Jul 5 11:26:19.827: INFO: namespace e2e-tests-secret-namespace-2k2rw deletion completed in 6.090258226s • [SLOW TEST:16.520 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 11:26:19.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jul 5 11:26:24.458: INFO: Successfully updated pod "annotationupdate5941ba80-beb2-11ea-9e48-0242ac110017" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 5 
11:26:26.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-kb9fr" for this suite. Jul 5 11:26:50.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:26:50.579: INFO: namespace: e2e-tests-projected-kb9fr, resource: bindings, ignored listing per whitelist Jul 5 11:26:50.590: INFO: namespace e2e-tests-projected-kb9fr deletion completed in 24.095927369s • [SLOW TEST:30.762 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 11:26:50.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0705 11:27:00.722857 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jul 5 11:27:00.722: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 5 11:27:00.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-gbz7z" for this suite. 
Jul 5 11:27:06.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:27:06.787: INFO: namespace: e2e-tests-gc-gbz7z, resource: bindings, ignored listing per whitelist Jul 5 11:27:06.844: INFO: namespace e2e-tests-gc-gbz7z deletion completed in 6.118240569s • [SLOW TEST:16.254 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 11:27:06.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Jul 5 11:27:06.943: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 5 11:27:06.960: INFO: Waiting for terminating namespaces to be deleted... 
Jul 5 11:27:06.963: INFO: Logging pods the kubelet thinks is on node hunter-worker before test
Jul 5 11:27:06.968: INFO: kube-proxy-cqbm8 from kube-system started at 2020-07-04 07:47:44 +0000 UTC (1 container statuses recorded)
Jul 5 11:27:06.968: INFO: Container kube-proxy ready: true, restart count 0
Jul 5 11:27:06.968: INFO: kindnet-mcn92 from kube-system started at 2020-07-04 07:47:46 +0000 UTC (1 container statuses recorded)
Jul 5 11:27:06.968: INFO: Container kindnet-cni ready: true, restart count 0
Jul 5 11:27:06.968: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test
Jul 5 11:27:06.992: INFO: coredns-54ff9cd656-mgg2q from kube-system started at 2020-07-04 07:48:14 +0000 UTC (1 container statuses recorded)
Jul 5 11:27:06.992: INFO: Container coredns ready: true, restart count 0
Jul 5 11:27:06.992: INFO: coredns-54ff9cd656-l7q92 from kube-system started at 2020-07-04 07:48:14 +0000 UTC (1 container statuses recorded)
Jul 5 11:27:06.992: INFO: Container coredns ready: true, restart count 0
Jul 5 11:27:06.992: INFO: kube-proxy-52vr2 from kube-system started at 2020-07-04 07:47:44 +0000 UTC (1 container statuses recorded)
Jul 5 11:27:06.992: INFO: Container kube-proxy ready: true, restart count 0
Jul 5 11:27:06.992: INFO: kindnet-rll2b from kube-system started at 2020-07-04 07:47:46 +0000 UTC (1 container statuses recorded)
Jul 5 11:27:06.992: INFO: Container kindnet-cni ready: true, restart count 0
Jul 5 11:27:06.992: INFO: local-path-provisioner-674595c7-cvgpb from local-path-storage started at 2020-07-04 07:48:14 +0000 UTC (1 container statuses recorded)
Jul 5 11:27:06.992: INFO: Container local-path-provisioner ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-worker
STEP: verifying the node has the label node hunter-worker2
Jul 5 11:27:07.061: INFO: Pod coredns-54ff9cd656-l7q92 requesting resource cpu=100m on Node hunter-worker2
Jul 5 11:27:07.061: INFO: Pod coredns-54ff9cd656-mgg2q requesting resource cpu=100m on Node hunter-worker2
Jul 5 11:27:07.061: INFO: Pod kindnet-mcn92 requesting resource cpu=100m on Node hunter-worker
Jul 5 11:27:07.061: INFO: Pod kindnet-rll2b requesting resource cpu=100m on Node hunter-worker2
Jul 5 11:27:07.061: INFO: Pod kube-proxy-52vr2 requesting resource cpu=0m on Node hunter-worker2
Jul 5 11:27:07.061: INFO: Pod kube-proxy-cqbm8 requesting resource cpu=0m on Node hunter-worker
Jul 5 11:27:07.061: INFO: Pod local-path-provisioner-674595c7-cvgpb requesting resource cpu=0m on Node hunter-worker2
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-755b9c18-beb2-11ea-9e48-0242ac110017.161ed7a63a1eb3f3], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-wklwf/filler-pod-755b9c18-beb2-11ea-9e48-0242ac110017 to hunter-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-755b9c18-beb2-11ea-9e48-0242ac110017.161ed7a687182d18], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-755b9c18-beb2-11ea-9e48-0242ac110017.161ed7a6c9b753c3], Reason = [Created], Message = [Created container]
STEP: Considering event: Type = [Normal], Name = [filler-pod-755b9c18-beb2-11ea-9e48-0242ac110017.161ed7a6e325568a], Reason = [Started], Message = [Started container]
STEP: Considering event: Type = [Normal], Name = [filler-pod-755c9f96-beb2-11ea-9e48-0242ac110017.161ed7a63d361353], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-wklwf/filler-pod-755c9f96-beb2-11ea-9e48-0242ac110017 to hunter-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-755c9f96-beb2-11ea-9e48-0242ac110017.161ed7a6cc59604c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-755c9f96-beb2-11ea-9e48-0242ac110017.161ed7a6fbe06f34], Reason = [Created], Message = [Created container]
STEP: Considering event: Type = [Normal], Name = [filler-pod-755c9f96-beb2-11ea-9e48-0242ac110017.161ed7a70b45bf45], Reason = [Started], Message = [Started container]
STEP: Considering event: Type = [Warning], Name = [additional-pod.161ed7a72cc4bf1d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node hunter-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node hunter-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:27:12.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-wklwf" for this suite.
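The FailedScheduling event above is plain CPU accounting: the filler pods request most of each worker's allocatable CPU, so one more pod with a nontrivial request cannot fit ("2 Insufficient cpu"), and the third node is the control-plane node whose taint the pod does not tolerate. A sketch of the kind of pod spec that triggers the event (the request value is an assumption; anything above the remaining allocatable CPU behaves the same):

```yaml
# Illustrative pod whose CPU request exceeds what is left on every
# schedulable node, reproducing the FailedScheduling event above.
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "600m"   # assumed value, not taken from the log
```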
Jul 5 11:27:20.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:27:20.457: INFO: namespace: e2e-tests-sched-pred-wklwf, resource: bindings, ignored listing per whitelist Jul 5 11:27:20.469: INFO: namespace e2e-tests-sched-pred-wklwf deletion completed in 8.171470044s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:13.625 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 11:27:20.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace Jul 5 11:27:24.677: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 5 11:27:47.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-cpmxk" for this suite. Jul 5 11:27:53.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:27:53.823: INFO: namespace: e2e-tests-namespaces-cpmxk, resource: bindings, ignored listing per whitelist Jul 5 11:27:53.867: INFO: namespace e2e-tests-namespaces-cpmxk deletion completed in 6.08936634s STEP: Destroying namespace "e2e-tests-nsdeletetest-g96mj" for this suite. Jul 5 11:27:53.869: INFO: Namespace e2e-tests-nsdeletetest-g96mj was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-c25wf" for this suite. 
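The Namespaces case above depends on namespace deletion being cascading: the namespace enters Terminating, every object inside it (including running pods) is deleted, and only then is the namespace object itself removed, so recreating the namespace yields an empty one. A sketch of the setup (names are illustrative, not taken from this run):

```yaml
# Illustrative namespace plus a pod inside it; deleting the namespace
# alone is enough to terminate and remove the pod.
apiVersion: v1
kind: Namespace
metadata:
  name: nsdeletetest-demo    # illustrative name
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: nsdeletetest-demo
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
```

After `kubectl delete namespace nsdeletetest-demo` completes, recreating the namespace and listing pods in it should return nothing, which is what the "Verifying there are no pods in the namespace" step checks.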
Jul 5 11:27:59.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:27:59.940: INFO: namespace: e2e-tests-nsdeletetest-c25wf, resource: bindings, ignored listing per whitelist Jul 5 11:27:59.966: INFO: namespace e2e-tests-nsdeletetest-c25wf deletion completed in 6.096848605s • [SLOW TEST:39.497 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 11:27:59.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
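The Container Lifecycle Hook case being set up above creates a pod whose container declares a postStart exec hook; the kubelet runs the hook command right after the container starts, and the test checks the hook's effect before deleting the pod. A sketch of the pod shape involved (only the pod name comes from the log; the image and hook command are illustrative):

```yaml
# Sketch of a pod with a postStart exec hook; the kubelet executes the
# command inside the container immediately after the container starts.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: main                                   # illustrative
    image: docker.io/library/nginx:1.14-alpine   # illustrative
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo started > /tmp/poststart"]  # illustrative
```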
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jul 5 11:28:08.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 5 11:28:08.168: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 5 11:28:10.169: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 5 11:28:10.174: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 5 11:28:12.169: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 5 11:28:12.173: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 5 11:28:14.169: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 5 11:28:14.174: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 5 11:28:16.169: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 5 11:28:16.174: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 5 11:28:18.169: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 5 11:28:18.174: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 5 11:28:20.169: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 5 11:28:20.174: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 5 11:28:22.169: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 5 11:28:22.173: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 5 11:28:24.169: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 5 11:28:24.172: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 5 11:28:26.169: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 5 11:28:26.174: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:28:26.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-gjwbr" for this suite.
Jul 5 11:28:48.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 11:28:48.242: INFO: namespace: e2e-tests-container-lifecycle-hook-gjwbr, resource: bindings, ignored listing per whitelist
Jul 5 11:28:48.262: INFO: namespace e2e-tests-container-lifecycle-hook-gjwbr deletion completed in 22.083321842s
• [SLOW TEST:48.295 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 11:28:48.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP:
Creating service test in namespace e2e-tests-statefulset-qwrg7 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Jul 5 11:28:48.413: INFO: Found 0 stateful pods, waiting for 3 Jul 5 11:28:58.419: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 5 11:28:58.419: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 5 11:28:58.419: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jul 5 11:29:08.419: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 5 11:29:08.419: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 5 11:29:08.419: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jul 5 11:29:08.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qwrg7 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 5 11:29:08.697: INFO: stderr: "I0705 11:29:08.557078 1247 log.go:172] (0xc000138840) (0xc000736640) Create stream\nI0705 11:29:08.557323 1247 log.go:172] (0xc000138840) (0xc000736640) Stream added, broadcasting: 1\nI0705 11:29:08.559921 1247 log.go:172] (0xc000138840) Reply frame received for 1\nI0705 11:29:08.559968 1247 log.go:172] (0xc000138840) (0xc000326c80) Create stream\nI0705 11:29:08.559991 1247 log.go:172] (0xc000138840) (0xc000326c80) Stream added, broadcasting: 3\nI0705 11:29:08.560994 1247 log.go:172] (0xc000138840) Reply frame received for 3\nI0705 11:29:08.561041 1247 log.go:172] (0xc000138840) (0xc0007da000) Create stream\nI0705 11:29:08.561057 1247 log.go:172] (0xc000138840) (0xc0007da000) Stream added, broadcasting: 5\nI0705 11:29:08.562245 1247 
log.go:172] (0xc000138840) Reply frame received for 5\nI0705 11:29:08.689647 1247 log.go:172] (0xc000138840) Data frame received for 3\nI0705 11:29:08.689699 1247 log.go:172] (0xc000326c80) (3) Data frame handling\nI0705 11:29:08.689746 1247 log.go:172] (0xc000326c80) (3) Data frame sent\nI0705 11:29:08.689975 1247 log.go:172] (0xc000138840) Data frame received for 3\nI0705 11:29:08.689998 1247 log.go:172] (0xc000326c80) (3) Data frame handling\nI0705 11:29:08.690364 1247 log.go:172] (0xc000138840) Data frame received for 5\nI0705 11:29:08.690381 1247 log.go:172] (0xc0007da000) (5) Data frame handling\nI0705 11:29:08.692469 1247 log.go:172] (0xc000138840) Data frame received for 1\nI0705 11:29:08.692484 1247 log.go:172] (0xc000736640) (1) Data frame handling\nI0705 11:29:08.692513 1247 log.go:172] (0xc000736640) (1) Data frame sent\nI0705 11:29:08.692592 1247 log.go:172] (0xc000138840) (0xc000736640) Stream removed, broadcasting: 1\nI0705 11:29:08.692614 1247 log.go:172] (0xc000138840) Go away received\nI0705 11:29:08.692889 1247 log.go:172] (0xc000138840) (0xc000736640) Stream removed, broadcasting: 1\nI0705 11:29:08.692921 1247 log.go:172] (0xc000138840) (0xc000326c80) Stream removed, broadcasting: 3\nI0705 11:29:08.692936 1247 log.go:172] (0xc000138840) (0xc0007da000) Stream removed, broadcasting: 5\n" Jul 5 11:29:08.697: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 5 11:29:08.697: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jul 5 11:29:18.757: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jul 5 11:29:28.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qwrg7 ss2-1 -- /bin/sh -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true' Jul 5 11:29:29.039: INFO: stderr: "I0705 11:29:28.940700 1270 log.go:172] (0xc0001386e0) (0xc00060b360) Create stream\nI0705 11:29:28.940778 1270 log.go:172] (0xc0001386e0) (0xc00060b360) Stream added, broadcasting: 1\nI0705 11:29:28.943690 1270 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0705 11:29:28.943739 1270 log.go:172] (0xc0001386e0) (0xc000530000) Create stream\nI0705 11:29:28.943754 1270 log.go:172] (0xc0001386e0) (0xc000530000) Stream added, broadcasting: 3\nI0705 11:29:28.944833 1270 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0705 11:29:28.944900 1270 log.go:172] (0xc0001386e0) (0xc00051a000) Create stream\nI0705 11:29:28.944920 1270 log.go:172] (0xc0001386e0) (0xc00051a000) Stream added, broadcasting: 5\nI0705 11:29:28.946214 1270 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0705 11:29:29.034133 1270 log.go:172] (0xc0001386e0) Data frame received for 3\nI0705 11:29:29.034181 1270 log.go:172] (0xc000530000) (3) Data frame handling\nI0705 11:29:29.034212 1270 log.go:172] (0xc000530000) (3) Data frame sent\nI0705 11:29:29.034228 1270 log.go:172] (0xc0001386e0) Data frame received for 3\nI0705 11:29:29.034238 1270 log.go:172] (0xc000530000) (3) Data frame handling\nI0705 11:29:29.034277 1270 log.go:172] (0xc0001386e0) Data frame received for 5\nI0705 11:29:29.034310 1270 log.go:172] (0xc00051a000) (5) Data frame handling\nI0705 11:29:29.035865 1270 log.go:172] (0xc0001386e0) Data frame received for 1\nI0705 11:29:29.035895 1270 log.go:172] (0xc00060b360) (1) Data frame handling\nI0705 11:29:29.035916 1270 log.go:172] (0xc00060b360) (1) Data frame sent\nI0705 11:29:29.035936 1270 log.go:172] (0xc0001386e0) (0xc00060b360) Stream removed, broadcasting: 1\nI0705 11:29:29.035957 1270 log.go:172] (0xc0001386e0) Go away received\nI0705 11:29:29.036243 1270 log.go:172] (0xc0001386e0) (0xc00060b360) Stream removed, broadcasting: 1\nI0705 11:29:29.036270 1270 log.go:172] 
(0xc0001386e0) (0xc000530000) Stream removed, broadcasting: 3\nI0705 11:29:29.036300 1270 log.go:172] (0xc0001386e0) (0xc00051a000) Stream removed, broadcasting: 5\n" Jul 5 11:29:29.039: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 5 11:29:29.039: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 5 11:29:49.103: INFO: Waiting for StatefulSet e2e-tests-statefulset-qwrg7/ss2 to complete update Jul 5 11:29:49.103: INFO: Waiting for Pod e2e-tests-statefulset-qwrg7/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision Jul 5 11:29:59.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qwrg7 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 5 11:29:59.383: INFO: stderr: "I0705 11:29:59.228265 1293 log.go:172] (0xc0006f82c0) (0xc0005708c0) Create stream\nI0705 11:29:59.228372 1293 log.go:172] (0xc0006f82c0) (0xc0005708c0) Stream added, broadcasting: 1\nI0705 11:29:59.232105 1293 log.go:172] (0xc0006f82c0) Reply frame received for 1\nI0705 11:29:59.232170 1293 log.go:172] (0xc0006f82c0) (0xc000570000) Create stream\nI0705 11:29:59.232200 1293 log.go:172] (0xc0006f82c0) (0xc000570000) Stream added, broadcasting: 3\nI0705 11:29:59.233040 1293 log.go:172] (0xc0006f82c0) Reply frame received for 3\nI0705 11:29:59.233073 1293 log.go:172] (0xc0006f82c0) (0xc0005d8dc0) Create stream\nI0705 11:29:59.233083 1293 log.go:172] (0xc0006f82c0) (0xc0005d8dc0) Stream added, broadcasting: 5\nI0705 11:29:59.234096 1293 log.go:172] (0xc0006f82c0) Reply frame received for 5\nI0705 11:29:59.376412 1293 log.go:172] (0xc0006f82c0) Data frame received for 5\nI0705 11:29:59.376454 1293 log.go:172] (0xc0005d8dc0) (5) Data frame handling\nI0705 11:29:59.376482 1293 log.go:172] (0xc0006f82c0) Data frame received for 3\nI0705 
11:29:59.376493 1293 log.go:172] (0xc000570000) (3) Data frame handling\nI0705 11:29:59.376505 1293 log.go:172] (0xc000570000) (3) Data frame sent\nI0705 11:29:59.376516 1293 log.go:172] (0xc0006f82c0) Data frame received for 3\nI0705 11:29:59.376526 1293 log.go:172] (0xc000570000) (3) Data frame handling\nI0705 11:29:59.378516 1293 log.go:172] (0xc0006f82c0) Data frame received for 1\nI0705 11:29:59.378560 1293 log.go:172] (0xc0005708c0) (1) Data frame handling\nI0705 11:29:59.378613 1293 log.go:172] (0xc0005708c0) (1) Data frame sent\nI0705 11:29:59.378655 1293 log.go:172] (0xc0006f82c0) (0xc0005708c0) Stream removed, broadcasting: 1\nI0705 11:29:59.378686 1293 log.go:172] (0xc0006f82c0) Go away received\nI0705 11:29:59.379045 1293 log.go:172] (0xc0006f82c0) (0xc0005708c0) Stream removed, broadcasting: 1\nI0705 11:29:59.379081 1293 log.go:172] (0xc0006f82c0) (0xc000570000) Stream removed, broadcasting: 3\nI0705 11:29:59.379102 1293 log.go:172] (0xc0006f82c0) (0xc0005d8dc0) Stream removed, broadcasting: 5\n" Jul 5 11:29:59.383: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 5 11:29:59.384: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 5 11:30:09.418: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jul 5 11:30:19.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qwrg7 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 5 11:30:19.702: INFO: stderr: "I0705 11:30:19.628206 1315 log.go:172] (0xc0001386e0) (0xc000661360) Create stream\nI0705 11:30:19.628288 1315 log.go:172] (0xc0001386e0) (0xc000661360) Stream added, broadcasting: 1\nI0705 11:30:19.631869 1315 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0705 11:30:19.631915 1315 log.go:172] (0xc0001386e0) (0xc0003ec000) Create stream\nI0705 11:30:19.631932 1315 
log.go:172] (0xc0001386e0) (0xc0003ec000) Stream added, broadcasting: 3\nI0705 11:30:19.633381 1315 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0705 11:30:19.633442 1315 log.go:172] (0xc0001386e0) (0xc0003f0000) Create stream\nI0705 11:30:19.633455 1315 log.go:172] (0xc0001386e0) (0xc0003f0000) Stream added, broadcasting: 5\nI0705 11:30:19.634412 1315 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0705 11:30:19.697952 1315 log.go:172] (0xc0001386e0) Data frame received for 3\nI0705 11:30:19.697980 1315 log.go:172] (0xc0003ec000) (3) Data frame handling\nI0705 11:30:19.697987 1315 log.go:172] (0xc0003ec000) (3) Data frame sent\nI0705 11:30:19.697992 1315 log.go:172] (0xc0001386e0) Data frame received for 3\nI0705 11:30:19.697996 1315 log.go:172] (0xc0003ec000) (3) Data frame handling\nI0705 11:30:19.698084 1315 log.go:172] (0xc0001386e0) Data frame received for 5\nI0705 11:30:19.698118 1315 log.go:172] (0xc0003f0000) (5) Data frame handling\nI0705 11:30:19.699357 1315 log.go:172] (0xc0001386e0) Data frame received for 1\nI0705 11:30:19.699378 1315 log.go:172] (0xc000661360) (1) Data frame handling\nI0705 11:30:19.699410 1315 log.go:172] (0xc000661360) (1) Data frame sent\nI0705 11:30:19.699435 1315 log.go:172] (0xc0001386e0) (0xc000661360) Stream removed, broadcasting: 1\nI0705 11:30:19.699452 1315 log.go:172] (0xc0001386e0) Go away received\nI0705 11:30:19.699597 1315 log.go:172] (0xc0001386e0) (0xc000661360) Stream removed, broadcasting: 1\nI0705 11:30:19.699618 1315 log.go:172] (0xc0001386e0) (0xc0003ec000) Stream removed, broadcasting: 3\nI0705 11:30:19.699627 1315 log.go:172] (0xc0001386e0) (0xc0003f0000) Stream removed, broadcasting: 5\n" Jul 5 11:30:19.703: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 5 11:30:19.703: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 5 11:30:29.754: INFO: Waiting for StatefulSet 
e2e-tests-statefulset-qwrg7/ss2 to complete update
Jul 5 11:30:29.754: INFO: Waiting for Pod e2e-tests-statefulset-qwrg7/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jul 5 11:30:29.754: INFO: Waiting for Pod e2e-tests-statefulset-qwrg7/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jul 5 11:30:29.754: INFO: Waiting for Pod e2e-tests-statefulset-qwrg7/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jul 5 11:30:39.762: INFO: Waiting for StatefulSet e2e-tests-statefulset-qwrg7/ss2 to complete update
Jul 5 11:30:39.762: INFO: Waiting for Pod e2e-tests-statefulset-qwrg7/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jul 5 11:30:39.762: INFO: Waiting for Pod e2e-tests-statefulset-qwrg7/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jul 5 11:30:49.763: INFO: Waiting for StatefulSet e2e-tests-statefulset-qwrg7/ss2 to complete update
Jul 5 11:30:49.763: INFO: Waiting for Pod e2e-tests-statefulset-qwrg7/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jul 5 11:30:59.762: INFO: Deleting all statefulset in ns e2e-tests-statefulset-qwrg7
Jul 5 11:30:59.765: INFO: Scaling statefulset ss2 to 0
Jul 5 11:31:19.782: INFO: Waiting for statefulset status.replicas updated to 0
Jul 5 11:31:19.786: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 5 11:31:19.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-qwrg7" for this suite.
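The rolling update and rollback above are driven by controller revisions: editing spec.template (here the image, from nginx:1.14-alpine to 1.15-alpine and back) creates a new revision (ss2-7c9b54fd4c vs ss2-6c5cd755cd in the log), and the controller replaces pods in reverse ordinal order until every pod carries the target revision. A sketch of the relevant parts of such a StatefulSet (the labels and selector are assumptions; the service name comes from the "Creating service test" step above):

```yaml
# Sketch of the parts of a StatefulSet that matter for this test; the
# RollingUpdate strategy replaces pods one at a time, highest ordinal first.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test
  replicas: 3
  selector:
    matchLabels:
      app: ss2               # assumed labels
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine   # updated from 1.14-alpine
  updateStrategy:
    type: RollingUpdate
```

Rolling back is just re-applying the previous template; the log's per-pod "to have revision ... update revision ..." lines track each pod converging on the target revision.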
Jul 5 11:31:25.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 11:31:25.916: INFO: namespace: e2e-tests-statefulset-qwrg7, resource: bindings, ignored listing per whitelist Jul 5 11:31:25.923: INFO: namespace e2e-tests-statefulset-qwrg7 deletion completed in 6.1095665s • [SLOW TEST:157.661 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 5 11:31:25.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 5 11:32:26.026: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready
STEP: Destroying namespace "e2e-tests-container-probe-vgmfn" for this suite.
Jul 5 11:32:48.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 11:32:48.082: INFO: namespace: e2e-tests-container-probe-vgmfn, resource: bindings, ignored listing per whitelist
Jul 5 11:32:48.112: INFO: namespace e2e-tests-container-probe-vgmfn deletion completed in 22.081204407s
• [SLOW TEST:82.188 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 5 11:32:48.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jul 5 11:32:48.329: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-dprhr,SelfLink:/api/v1/namespaces/e2e-tests-watch-dprhr/configmaps/e2e-watch-test-label-changed,UID:40bf51e5-beb3-11ea-a300-0242ac110004,ResourceVersion:227277,Generation:0,CreationTimestamp:2020-07-05 11:32:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul  5 11:32:48.329: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-dprhr,SelfLink:/api/v1/namespaces/e2e-tests-watch-dprhr/configmaps/e2e-watch-test-label-changed,UID:40bf51e5-beb3-11ea-a300-0242ac110004,ResourceVersion:227278,Generation:0,CreationTimestamp:2020-07-05 11:32:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jul  5 11:32:48.329: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-dprhr,SelfLink:/api/v1/namespaces/e2e-tests-watch-dprhr/configmaps/e2e-watch-test-label-changed,UID:40bf51e5-beb3-11ea-a300-0242ac110004,ResourceVersion:227279,Generation:0,CreationTimestamp:2020-07-05 11:32:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jul  5 11:32:58.373: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-dprhr,SelfLink:/api/v1/namespaces/e2e-tests-watch-dprhr/configmaps/e2e-watch-test-label-changed,UID:40bf51e5-beb3-11ea-a300-0242ac110004,ResourceVersion:227300,Generation:0,CreationTimestamp:2020-07-05 11:32:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul  5 11:32:58.374: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-dprhr,SelfLink:/api/v1/namespaces/e2e-tests-watch-dprhr/configmaps/e2e-watch-test-label-changed,UID:40bf51e5-beb3-11ea-a300-0242ac110004,ResourceVersion:227301,Generation:0,CreationTimestamp:2020-07-05 11:32:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jul  5 11:32:58.374: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-dprhr,SelfLink:/api/v1/namespaces/e2e-tests-watch-dprhr/configmaps/e2e-watch-test-label-changed,UID:40bf51e5-beb3-11ea-a300-0242ac110004,ResourceVersion:227302,Generation:0,CreationTimestamp:2020-07-05 11:32:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:32:58.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-dprhr" for this suite.
Jul  5 11:33:04.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:33:04.427: INFO: namespace: e2e-tests-watch-dprhr, resource: bindings, ignored listing per whitelist
Jul  5 11:33:04.605: INFO: namespace e2e-tests-watch-dprhr deletion completed in 6.218219788s
• [SLOW TEST:16.494 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace
  should update a single-container pod's image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:33:04.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul  5 11:33:04.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-crlxz'
Jul  5 11:33:07.616: INFO: stderr: ""
Jul  5 11:33:07.616: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jul  5 11:33:12.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-crlxz -o json'
Jul  5 11:33:12.772: INFO: stderr: ""
Jul  5 11:33:12.772: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-07-05T11:33:07Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-crlxz\",\n \"resourceVersion\": \"227342\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-crlxz/pods/e2e-test-nginx-pod\",\n \"uid\": \"4c41a485-beb3-11ea-a300-0242ac110004\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": 
\"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-grz9k\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-grz9k\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-grz9k\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-05T11:33:07Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-05T11:33:10Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-05T11:33:10Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-05T11:33:07Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://6b77ca93a6a00a763a2aa5e1e029754db1edae624aef96e25afe683131afd797\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n 
\"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-07-05T11:33:09Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.2\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.108\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-07-05T11:33:07Z\"\n }\n}\n" STEP: replace the image in the pod Jul 5 11:33:12.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-crlxz' Jul 5 11:33:13.105: INFO: stderr: "" Jul 5 11:33:13.105: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Jul 5 11:33:13.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-crlxz' Jul 5 11:33:18.454: INFO: stderr: "" Jul 5 11:33:18.454: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 5 11:33:18.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-crlxz" for this suite. 
Jul  5 11:33:24.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:33:24.503: INFO: namespace: e2e-tests-kubectl-crlxz, resource: bindings, ignored listing per whitelist
Jul  5 11:33:24.536: INFO: namespace e2e-tests-kubectl-crlxz deletion completed in 6.078180759s
• [SLOW TEST:19.931 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:33:24.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:33:56.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-z75kg" for this suite.
Jul  5 11:34:02.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:34:02.555: INFO: namespace: e2e-tests-container-runtime-z75kg, resource: bindings, ignored listing per whitelist
Jul  5 11:34:02.581: INFO: namespace e2e-tests-container-runtime-z75kg deletion completed in 6.094622744s
• [SLOW TEST:38.045 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:34:02.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  5 11:34:02.736: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6d1d4a31-beb3-11ea-9e48-0242ac110017" in namespace "e2e-tests-downward-api-dnfsw" to be "success or failure"
Jul  5 11:34:02.741: INFO: Pod "downwardapi-volume-6d1d4a31-beb3-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.552659ms
Jul  5 11:34:04.932: INFO: Pod "downwardapi-volume-6d1d4a31-beb3-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196258525s
Jul  5 11:34:06.936: INFO: Pod "downwardapi-volume-6d1d4a31-beb3-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.199965755s
STEP: Saw pod success
Jul  5 11:34:06.936: INFO: Pod "downwardapi-volume-6d1d4a31-beb3-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 11:34:06.939: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-6d1d4a31-beb3-11ea-9e48-0242ac110017 container client-container: 
STEP: delete the pod
Jul  5 11:34:06.955: INFO: Waiting for pod downwardapi-volume-6d1d4a31-beb3-11ea-9e48-0242ac110017 to disappear
Jul  5 11:34:06.966: INFO: Pod downwardapi-volume-6d1d4a31-beb3-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:34:06.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-dnfsw" for this suite.
Jul  5 11:34:12.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:34:13.062: INFO: namespace: e2e-tests-downward-api-dnfsw, resource: bindings, ignored listing per whitelist
Jul  5 11:34:13.067: INFO: namespace e2e-tests-downward-api-dnfsw deletion completed in 6.097591415s
• [SLOW TEST:10.485 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
  should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:34:13.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jul  5 11:34:13.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-r9vx7'
Jul  5 11:34:13.584: INFO: stderr: ""
Jul  5 11:34:13.584: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  5 11:34:13.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-r9vx7'
Jul  5 11:34:13.711: INFO: stderr: ""
Jul  5 11:34:13.711: INFO: stdout: "update-demo-nautilus-q9jw4 update-demo-nautilus-xtwr2 "
Jul  5 11:34:13.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q9jw4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r9vx7'
Jul  5 11:34:13.819: INFO: stderr: ""
Jul  5 11:34:13.819: INFO: stdout: ""
Jul  5 11:34:13.819: INFO: update-demo-nautilus-q9jw4 is created but not running
Jul  5 11:34:18.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-r9vx7'
Jul  5 11:34:18.942: INFO: stderr: ""
Jul  5 11:34:18.942: INFO: stdout: "update-demo-nautilus-q9jw4 update-demo-nautilus-xtwr2 "
Jul  5 11:34:18.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q9jw4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r9vx7'
Jul  5 11:34:19.038: INFO: stderr: ""
Jul  5 11:34:19.038: INFO: stdout: "true"
Jul  5 11:34:19.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q9jw4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r9vx7'
Jul  5 11:34:19.141: INFO: stderr: ""
Jul  5 11:34:19.141: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  5 11:34:19.141: INFO: validating pod update-demo-nautilus-q9jw4
Jul  5 11:34:19.145: INFO: got data: { "image": "nautilus.jpg" }
Jul  5 11:34:19.145: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  5 11:34:19.145: INFO: update-demo-nautilus-q9jw4 is verified up and running
Jul  5 11:34:19.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xtwr2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r9vx7'
Jul  5 11:34:19.243: INFO: stderr: ""
Jul  5 11:34:19.243: INFO: stdout: "true"
Jul  5 11:34:19.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xtwr2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r9vx7'
Jul  5 11:34:19.341: INFO: stderr: ""
Jul  5 11:34:19.341: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  5 11:34:19.341: INFO: validating pod update-demo-nautilus-xtwr2
Jul  5 11:34:19.346: INFO: got data: { "image": "nautilus.jpg" }
Jul  5 11:34:19.346: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  5 11:34:19.346: INFO: update-demo-nautilus-xtwr2 is verified up and running
STEP: rolling-update to new replication controller
Jul  5 11:34:19.348: INFO: scanned /root for discovery docs: 
Jul  5 11:34:19.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-r9vx7'
Jul  5 11:34:41.935: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jul  5 11:34:41.935: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  5 11:34:41.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-r9vx7'
Jul  5 11:34:42.040: INFO: stderr: ""
Jul  5 11:34:42.040: INFO: stdout: "update-demo-kitten-cvlvk update-demo-kitten-gl6fr update-demo-nautilus-q9jw4 "
STEP: Replicas for name=update-demo: expected=2 actual=3
Jul  5 11:34:47.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-r9vx7'
Jul  5 11:34:47.160: INFO: stderr: ""
Jul  5 11:34:47.160: INFO: stdout: "update-demo-kitten-cvlvk update-demo-kitten-gl6fr "
Jul  5 11:34:47.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-cvlvk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r9vx7'
Jul  5 11:34:47.251: INFO: stderr: ""
Jul  5 11:34:47.251: INFO: stdout: "true"
Jul  5 11:34:47.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-cvlvk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r9vx7'
Jul  5 11:34:47.344: INFO: stderr: ""
Jul  5 11:34:47.344: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jul  5 11:34:47.344: INFO: validating pod update-demo-kitten-cvlvk
Jul  5 11:34:47.348: INFO: got data: { "image": "kitten.jpg" }
Jul  5 11:34:47.348: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jul  5 11:34:47.348: INFO: update-demo-kitten-cvlvk is verified up and running
Jul  5 11:34:47.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gl6fr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r9vx7'
Jul  5 11:34:47.445: INFO: stderr: ""
Jul  5 11:34:47.445: INFO: stdout: "true"
Jul  5 11:34:47.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gl6fr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r9vx7'
Jul  5 11:34:47.538: INFO: stderr: ""
Jul  5 11:34:47.538: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jul  5 11:34:47.538: INFO: validating pod update-demo-kitten-gl6fr
Jul  5 11:34:47.543: INFO: got data: { "image": "kitten.jpg" }
Jul  5 11:34:47.543: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jul  5 11:34:47.543: INFO: update-demo-kitten-gl6fr is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:34:47.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-r9vx7" for this suite.
Jul  5 11:35:11.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:35:11.629: INFO: namespace: e2e-tests-kubectl-r9vx7, resource: bindings, ignored listing per whitelist
Jul  5 11:35:11.640: INFO: namespace e2e-tests-kubectl-r9vx7 deletion completed in 24.093412211s
• [SLOW TEST:58.573 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:35:11.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-963f173a-beb3-11ea-9e48-0242ac110017
STEP: Creating a pod to test consume configMaps
Jul  5 11:35:11.750: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-963fc917-beb3-11ea-9e48-0242ac110017" in namespace "e2e-tests-projected-8r24h" to be "success or failure"
Jul  5 11:35:11.770: INFO: Pod "pod-projected-configmaps-963fc917-beb3-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 20.050736ms
Jul  5 11:35:13.774: INFO: Pod "pod-projected-configmaps-963fc917-beb3-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023859499s
Jul  5 11:35:15.777: INFO: Pod "pod-projected-configmaps-963fc917-beb3-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027008315s
STEP: Saw pod success
Jul  5 11:35:15.778: INFO: Pod "pod-projected-configmaps-963fc917-beb3-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 11:35:15.781: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-963fc917-beb3-11ea-9e48-0242ac110017 container projected-configmap-volume-test: 
STEP: delete the pod
Jul  5 11:35:16.011: INFO: Waiting for pod pod-projected-configmaps-963fc917-beb3-11ea-9e48-0242ac110017 to disappear
Jul  5 11:35:16.030: INFO: Pod pod-projected-configmaps-963fc917-beb3-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:35:16.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8r24h" for this suite.
Jul  5 11:35:24.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:35:24.134: INFO: namespace: e2e-tests-projected-8r24h, resource: bindings, ignored listing per whitelist
Jul  5 11:35:24.141: INFO: namespace e2e-tests-projected-8r24h deletion completed in 8.105169711s
• [SLOW TEST:12.500 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:35:24.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul  5 11:35:24.285: INFO: Waiting up to 5m0s for pod "pod-9db459b4-beb3-11ea-9e48-0242ac110017" in namespace "e2e-tests-emptydir-zktxx" to be "success or failure"
Jul  5 11:35:24.294: INFO: Pod "pod-9db459b4-beb3-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 8.726952ms
Jul  5 11:35:26.298: INFO: Pod "pod-9db459b4-beb3-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013391334s
Jul  5 11:35:28.302: INFO: Pod "pod-9db459b4-beb3-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016820407s
STEP: Saw pod success
Jul  5 11:35:28.302: INFO: Pod "pod-9db459b4-beb3-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 11:35:28.304: INFO: Trying to get logs from node hunter-worker pod pod-9db459b4-beb3-11ea-9e48-0242ac110017 container test-container: 
STEP: delete the pod
Jul  5 11:35:28.323: INFO: Waiting for pod pod-9db459b4-beb3-11ea-9e48-0242ac110017 to disappear
Jul  5 11:35:28.327: INFO: Pod pod-9db459b4-beb3-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:35:28.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-zktxx" for this suite.
Jul  5 11:35:34.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:35:34.403: INFO: namespace: e2e-tests-emptydir-zktxx, resource: bindings, ignored listing per whitelist
Jul  5 11:35:34.435: INFO: namespace e2e-tests-emptydir-zktxx deletion completed in 6.10414754s
• [SLOW TEST:10.294 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:35:34.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  5 11:35:34.543: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:35:40.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-4rfxn" for this suite.
Jul  5 11:35:46.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:35:46.970: INFO: namespace: e2e-tests-services-4rfxn, resource: bindings, ignored listing per whitelist
Jul  5 11:35:47.049: INFO: namespace e2e-tests-services-4rfxn deletion completed in 6.120910282s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.275 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
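Editor's aside: the secure-master-service spec above checks the `kubernetes` service in the `default` namespace. As an illustration only (not the suite's own code), the ClusterIP that service conventionally gets is the first usable address of the cluster's service CIDR; the `10.96.0.0/12` CIDR below is an assumption matching kind's default, not something this log states.

```python
import ipaddress

def master_service_ip(service_cidr: str) -> str:
    """Return the conventional ClusterIP of the `kubernetes` service:
    the first usable address of the service CIDR."""
    net = ipaddress.ip_network(service_cidr)
    return str(net.network_address + 1)

print(master_service_ip("10.96.0.0/12"))  # 10.96.0.1
```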
SSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:35:47.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:35:51.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-rt6q8" for this suite.
Jul  5 11:35:57.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:35:57.256: INFO: namespace: e2e-tests-kubelet-test-rt6q8, resource: bindings, ignored listing per whitelist
Jul  5 11:35:57.292: INFO: namespace e2e-tests-kubelet-test-rt6q8 deletion completed in 6.09107702s

• [SLOW TEST:10.243 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
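Editor's aside: the Kubelet spec above runs a busybox command that always fails and asserts the container reports a non-empty termination reason. A minimal sketch of the field that assertion inspects, using an illustrative status dict (the real check reads `status.containerStatuses[i].state.terminated` from the API):

```python
def terminated_reason(container_status: dict) -> str:
    """Return the termination reason of a container status,
    or "" if the container has not terminated."""
    terminated = container_status.get("state", {}).get("terminated")
    return terminated.get("reason", "") if terminated else ""

failed = {"state": {"terminated": {"exitCode": 1, "reason": "Error"}}}
running = {"state": {"running": {}}}
print(terminated_reason(failed))   # Error
print(terminated_reason(running))  # (empty string)
```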
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:35:57.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul  5 11:35:57.431: INFO: Waiting up to 5m0s for pod "pod-b179209f-beb3-11ea-9e48-0242ac110017" in namespace "e2e-tests-emptydir-7xb2g" to be "success or failure"
Jul  5 11:35:57.448: INFO: Pod "pod-b179209f-beb3-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 16.428319ms
Jul  5 11:35:59.496: INFO: Pod "pod-b179209f-beb3-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064807407s
Jul  5 11:36:01.500: INFO: Pod "pod-b179209f-beb3-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068233084s
STEP: Saw pod success
Jul  5 11:36:01.500: INFO: Pod "pod-b179209f-beb3-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 11:36:01.503: INFO: Trying to get logs from node hunter-worker2 pod pod-b179209f-beb3-11ea-9e48-0242ac110017 container test-container: 
STEP: delete the pod
Jul  5 11:36:01.527: INFO: Waiting for pod pod-b179209f-beb3-11ea-9e48-0242ac110017 to disappear
Jul  5 11:36:01.532: INFO: Pod pod-b179209f-beb3-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:36:01.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-7xb2g" for this suite.
Jul  5 11:36:08.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:36:08.048: INFO: namespace: e2e-tests-emptydir-7xb2g, resource: bindings, ignored listing per whitelist
Jul  5 11:36:08.082: INFO: namespace e2e-tests-emptydir-7xb2g deletion completed in 6.547519495s

• [SLOW TEST:10.790 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
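Editor's aside: the `(non-root,0666,default)` spec creates a file with mode 0666 inside the emptyDir volume and verifies the permission bits from inside the pod. The same check, sketched locally with a temp file (an approximation of the in-pod test, not its actual code):

```python
import os
import stat
import tempfile

# Create a file, force its mode to 0666, and read the permission
# bits back -- chmod sets the bits exactly, so umask does not apply.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, 0o666)
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o666
os.unlink(path)
```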
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:36:08.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-b824b788-beb3-11ea-9e48-0242ac110017
STEP: Creating a pod to test consume configMaps
Jul  5 11:36:08.636: INFO: Waiting up to 5m0s for pod "pod-configmaps-b8253bad-beb3-11ea-9e48-0242ac110017" in namespace "e2e-tests-configmap-ht9mj" to be "success or failure"
Jul  5 11:36:08.686: INFO: Pod "pod-configmaps-b8253bad-beb3-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 49.594897ms
Jul  5 11:36:10.690: INFO: Pod "pod-configmaps-b8253bad-beb3-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053415889s
Jul  5 11:36:12.694: INFO: Pod "pod-configmaps-b8253bad-beb3-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057989191s
STEP: Saw pod success
Jul  5 11:36:12.694: INFO: Pod "pod-configmaps-b8253bad-beb3-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 11:36:12.697: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-b8253bad-beb3-11ea-9e48-0242ac110017 container configmap-volume-test: 
STEP: delete the pod
Jul  5 11:36:12.715: INFO: Waiting for pod pod-configmaps-b8253bad-beb3-11ea-9e48-0242ac110017 to disappear
Jul  5 11:36:12.719: INFO: Pod pod-configmaps-b8253bad-beb3-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:36:12.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-ht9mj" for this suite.
Jul  5 11:36:18.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:36:18.866: INFO: namespace: e2e-tests-configmap-ht9mj, resource: bindings, ignored listing per whitelist
Jul  5 11:36:18.874: INFO: namespace e2e-tests-configmap-ht9mj deletion completed in 6.12552997s

• [SLOW TEST:10.791 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
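Editor's aside: "consumable in multiple volumes in the same pod" means one pod declares two volumes that both source the same ConfigMap. A sketch of that manifest shape as a plain dict — volume and mount names here are illustrative, not the ones the suite generates:

```python
def two_volume_pod(configmap_name: str) -> dict:
    """Build a pod manifest with two volumes backed by the same
    ConfigMap, mounted at two different paths."""
    def volume(i: int) -> dict:
        return {"name": f"cm-vol-{i}",
                "configMap": {"name": configmap_name}}

    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "spec": {
            "volumes": [volume(1), volume(2)],
            "containers": [{
                "name": "configmap-volume-test",
                "image": "busybox",
                "volumeMounts": [
                    {"name": "cm-vol-1", "mountPath": "/etc/cm-1"},
                    {"name": "cm-vol-2", "mountPath": "/etc/cm-2"},
                ],
            }],
        },
    }

pod = two_volume_pod("configmap-test-volume")
```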
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:36:18.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jul  5 11:36:23.018: INFO: Pod pod-hostip-be4dfd25-beb3-11ea-9e48-0242ac110017 has hostIP: 172.17.0.2
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:36:23.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-vt8qd" for this suite.
Jul  5 11:36:45.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:36:45.096: INFO: namespace: e2e-tests-pods-vt8qd, resource: bindings, ignored listing per whitelist
Jul  5 11:36:45.103: INFO: namespace e2e-tests-pods-vt8qd deletion completed in 22.081306234s

• [SLOW TEST:26.229 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
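Editor's aside: the host-IP spec waits until `status.hostIP` is populated on the created pod (172.17.0.2 above, the node's address). The field access that assertion boils down to, with an illustrative pod dict:

```python
def host_ip(pod: dict) -> str:
    """Return status.hostIP, or "" while the kubelet has not yet
    reported which node the pod landed on."""
    return pod.get("status", {}).get("hostIP", "")

pod = {"status": {"hostIP": "172.17.0.2"}}
print(host_ip(pod))  # 172.17.0.2
```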
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:36:45.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  5 11:36:45.283: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jul  5 11:36:45.290: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:36:45.292: INFO: Number of nodes with available pods: 0
Jul  5 11:36:45.292: INFO: Node hunter-worker is running more than one daemon pod
Jul  5 11:36:46.297: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:36:46.300: INFO: Number of nodes with available pods: 0
Jul  5 11:36:46.300: INFO: Node hunter-worker is running more than one daemon pod
Jul  5 11:36:47.564: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:36:47.567: INFO: Number of nodes with available pods: 0
Jul  5 11:36:47.567: INFO: Node hunter-worker is running more than one daemon pod
Jul  5 11:36:48.348: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:36:48.351: INFO: Number of nodes with available pods: 0
Jul  5 11:36:48.351: INFO: Node hunter-worker is running more than one daemon pod
Jul  5 11:36:49.298: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:36:49.301: INFO: Number of nodes with available pods: 1
Jul  5 11:36:49.301: INFO: Node hunter-worker is running more than one daemon pod
Jul  5 11:36:50.297: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:36:50.300: INFO: Number of nodes with available pods: 2
Jul  5 11:36:50.300: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jul  5 11:36:50.347: INFO: Wrong image for pod: daemon-set-lpt9x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:36:50.347: INFO: Wrong image for pod: daemon-set-xdzh5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:36:50.387: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:36:51.413: INFO: Wrong image for pod: daemon-set-lpt9x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:36:51.413: INFO: Wrong image for pod: daemon-set-xdzh5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:36:51.418: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:36:52.392: INFO: Wrong image for pod: daemon-set-lpt9x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:36:52.392: INFO: Wrong image for pod: daemon-set-xdzh5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:36:52.392: INFO: Pod daemon-set-xdzh5 is not available
Jul  5 11:36:52.398: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:36:53.391: INFO: Wrong image for pod: daemon-set-lpt9x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:36:53.391: INFO: Wrong image for pod: daemon-set-xdzh5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:36:53.391: INFO: Pod daemon-set-xdzh5 is not available
Jul  5 11:36:53.396: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:36:54.391: INFO: Wrong image for pod: daemon-set-lpt9x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:36:54.391: INFO: Wrong image for pod: daemon-set-xdzh5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:36:54.391: INFO: Pod daemon-set-xdzh5 is not available
Jul  5 11:36:54.412: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:36:55.392: INFO: Wrong image for pod: daemon-set-lpt9x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:36:55.392: INFO: Wrong image for pod: daemon-set-xdzh5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:36:55.392: INFO: Pod daemon-set-xdzh5 is not available
Jul  5 11:36:55.397: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:36:56.391: INFO: Wrong image for pod: daemon-set-lpt9x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:36:56.392: INFO: Wrong image for pod: daemon-set-xdzh5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:36:56.392: INFO: Pod daemon-set-xdzh5 is not available
Jul  5 11:36:56.395: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:36:57.392: INFO: Wrong image for pod: daemon-set-lpt9x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:36:57.392: INFO: Wrong image for pod: daemon-set-xdzh5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:36:57.392: INFO: Pod daemon-set-xdzh5 is not available
Jul  5 11:36:57.397: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:36:58.391: INFO: Wrong image for pod: daemon-set-lpt9x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:36:58.391: INFO: Wrong image for pod: daemon-set-xdzh5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:36:58.391: INFO: Pod daemon-set-xdzh5 is not available
Jul  5 11:36:58.396: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:36:59.392: INFO: Wrong image for pod: daemon-set-lpt9x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:36:59.392: INFO: Wrong image for pod: daemon-set-xdzh5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:36:59.392: INFO: Pod daemon-set-xdzh5 is not available
Jul  5 11:36:59.396: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:37:00.391: INFO: Wrong image for pod: daemon-set-lpt9x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:37:00.391: INFO: Wrong image for pod: daemon-set-xdzh5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:37:00.391: INFO: Pod daemon-set-xdzh5 is not available
Jul  5 11:37:00.396: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:37:01.392: INFO: Wrong image for pod: daemon-set-lpt9x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:37:01.392: INFO: Wrong image for pod: daemon-set-xdzh5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:37:01.392: INFO: Pod daemon-set-xdzh5 is not available
Jul  5 11:37:01.396: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:37:02.392: INFO: Wrong image for pod: daemon-set-lpt9x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:37:02.392: INFO: Wrong image for pod: daemon-set-xdzh5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:37:02.392: INFO: Pod daemon-set-xdzh5 is not available
Jul  5 11:37:02.397: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:37:03.392: INFO: Wrong image for pod: daemon-set-lpt9x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:37:03.392: INFO: Wrong image for pod: daemon-set-xdzh5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:37:03.392: INFO: Pod daemon-set-xdzh5 is not available
Jul  5 11:37:03.396: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:37:04.392: INFO: Wrong image for pod: daemon-set-lpt9x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:37:04.392: INFO: Pod daemon-set-nkxbk is not available
Jul  5 11:37:04.397: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:37:05.391: INFO: Wrong image for pod: daemon-set-lpt9x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:37:05.391: INFO: Pod daemon-set-nkxbk is not available
Jul  5 11:37:05.396: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:37:06.391: INFO: Wrong image for pod: daemon-set-lpt9x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:37:06.392: INFO: Pod daemon-set-nkxbk is not available
Jul  5 11:37:06.396: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:37:07.392: INFO: Wrong image for pod: daemon-set-lpt9x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:37:07.396: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:37:08.392: INFO: Wrong image for pod: daemon-set-lpt9x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:37:08.392: INFO: Pod daemon-set-lpt9x is not available
Jul  5 11:37:08.397: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:37:09.392: INFO: Wrong image for pod: daemon-set-lpt9x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:37:09.392: INFO: Pod daemon-set-lpt9x is not available
Jul  5 11:37:09.396: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:37:10.392: INFO: Wrong image for pod: daemon-set-lpt9x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:37:10.392: INFO: Pod daemon-set-lpt9x is not available
Jul  5 11:37:10.396: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:37:11.392: INFO: Wrong image for pod: daemon-set-lpt9x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:37:11.392: INFO: Pod daemon-set-lpt9x is not available
Jul  5 11:37:11.396: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:37:12.402: INFO: Wrong image for pod: daemon-set-lpt9x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:37:12.402: INFO: Pod daemon-set-lpt9x is not available
Jul  5 11:37:12.409: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:37:13.392: INFO: Wrong image for pod: daemon-set-lpt9x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 11:37:13.392: INFO: Pod daemon-set-lpt9x is not available
Jul  5 11:37:13.397: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:37:14.392: INFO: Pod daemon-set-n85sc is not available
Jul  5 11:37:14.397: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Jul  5 11:37:14.402: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:37:14.405: INFO: Number of nodes with available pods: 1
Jul  5 11:37:14.405: INFO: Node hunter-worker is running more than one daemon pod
Jul  5 11:37:15.411: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:37:15.414: INFO: Number of nodes with available pods: 1
Jul  5 11:37:15.414: INFO: Node hunter-worker is running more than one daemon pod
Jul  5 11:37:16.499: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:37:16.511: INFO: Number of nodes with available pods: 1
Jul  5 11:37:16.511: INFO: Node hunter-worker is running more than one daemon pod
Jul  5 11:37:17.411: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:37:17.415: INFO: Number of nodes with available pods: 1
Jul  5 11:37:17.415: INFO: Node hunter-worker is running more than one daemon pod
Jul  5 11:37:18.411: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 11:37:18.415: INFO: Number of nodes with available pods: 2
Jul  5 11:37:18.415: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-nwdgj, will wait for the garbage collector to delete the pods
Jul  5 11:37:18.491: INFO: Deleting DaemonSet.extensions daemon-set took: 7.515585ms
Jul  5 11:37:18.591: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.213874ms
Jul  5 11:37:23.695: INFO: Number of nodes with available pods: 0
Jul  5 11:37:23.695: INFO: Number of running nodes: 0, number of available pods: 0
Jul  5 11:37:23.698: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-nwdgj/daemonsets","resourceVersion":"228307"},"items":null}

Jul  5 11:37:23.700: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-nwdgj/pods","resourceVersion":"228307"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:37:23.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-nwdgj" for this suite.
Jul  5 11:37:29.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:37:29.877: INFO: namespace: e2e-tests-daemonsets-nwdgj, resource: bindings, ignored listing per whitelist
Jul  5 11:37:29.918: INFO: namespace e2e-tests-daemonsets-nwdgj deletion completed in 6.164953963s

• [SLOW TEST:44.815 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
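The DaemonSet spec above passes because its update strategy is RollingUpdate, so changing the pod template causes the controller to replace pods node by node. A minimal sketch of such a manifest, written as a plain Python dict (the name "daemon-set" matches the log; the label key and container image are illustrative placeholders, not taken from the test):

```python
# Sketch of a DaemonSet manifest with a RollingUpdate strategy, as a dict.
# "daemon-set" matches the log above; labels/image are hypothetical.
daemon_set = {
    "apiVersion": "apps/v1",
    "kind": "DaemonSet",
    "metadata": {"name": "daemon-set"},
    "spec": {
        "selector": {"matchLabels": {"app": "daemon-set"}},
        # With RollingUpdate, editing spec.template triggers a rolling
        # replacement of the daemon pods -- the behavior this test verifies.
        "updateStrategy": {"type": "RollingUpdate"},
        "template": {
            "metadata": {"labels": {"app": "daemon-set"}},
            "spec": {"containers": [{"name": "app", "image": "example/image:v1"}]},
        },
    },
}
```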
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:37:29.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Jul  5 11:37:30.033: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:37:30.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-d9nqr" for this suite.
Jul  5 11:37:36.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:37:36.606: INFO: namespace: e2e-tests-kubectl-d9nqr, resource: bindings, ignored listing per whitelist
Jul  5 11:37:36.652: INFO: namespace e2e-tests-kubectl-d9nqr deletion completed in 6.502218141s

• [SLOW TEST:6.734 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
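`kubectl proxy -p 0` in the test above relies on the standard socket convention: binding to port 0 asks the kernel to pick any free ephemeral port, which the proxy then reports in its listen address. A small self-contained illustration of that convention (not kubectl itself):

```python
import socket

# Binding to port 0 lets the OS choose a free ephemeral port -- the same
# convention `kubectl proxy -p 0` uses before printing its listen address.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.bind(("127.0.0.1", 0))
    _, port = s.getsockname()
    assert port != 0  # the kernel substituted a real port number
```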
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:37:36.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  5 11:37:37.059: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ecd3fe77-beb3-11ea-9e48-0242ac110017" in namespace "e2e-tests-downward-api-p28zn" to be "success or failure"
Jul  5 11:37:37.174: INFO: Pod "downwardapi-volume-ecd3fe77-beb3-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 114.989507ms
Jul  5 11:37:39.178: INFO: Pod "downwardapi-volume-ecd3fe77-beb3-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119564131s
Jul  5 11:37:41.227: INFO: Pod "downwardapi-volume-ecd3fe77-beb3-11ea-9e48-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.168537179s
Jul  5 11:37:43.232: INFO: Pod "downwardapi-volume-ecd3fe77-beb3-11ea-9e48-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 6.172793675s
Jul  5 11:37:45.236: INFO: Pod "downwardapi-volume-ecd3fe77-beb3-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.176714931s
STEP: Saw pod success
Jul  5 11:37:45.236: INFO: Pod "downwardapi-volume-ecd3fe77-beb3-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 11:37:45.239: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-ecd3fe77-beb3-11ea-9e48-0242ac110017 container client-container: 
STEP: delete the pod
Jul  5 11:37:45.273: INFO: Waiting for pod downwardapi-volume-ecd3fe77-beb3-11ea-9e48-0242ac110017 to disappear
Jul  5 11:37:45.305: INFO: Pod downwardapi-volume-ecd3fe77-beb3-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:37:45.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-p28zn" for this suite.
Jul  5 11:37:51.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:37:51.347: INFO: namespace: e2e-tests-downward-api-p28zn, resource: bindings, ignored listing per whitelist
Jul  5 11:37:51.403: INFO: namespace e2e-tests-downward-api-p28zn deletion completed in 6.093931204s

• [SLOW TEST:14.750 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
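The downward API volume exercised here projects the container's own memory limit into a file via a `resourceFieldRef`. A sketch of that pod shape as a dict — the container name "client-container" appears in the log, but the image, mount path, file name, and the 64Mi limit are assumed values for illustration:

```python
# Sketch of a pod whose memory limit is exposed through a downward API
# volume. Image, paths, and the limit value are hypothetical.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "spec": {
        "containers": [{
            "name": "client-container",
            "image": "example/agnhost:1.0",
            "resources": {"limits": {"memory": "64Mi"}},
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "downwardAPI": {"items": [{
                # The kubelet writes the resolved limits.memory value into
                # /etc/podinfo/memory_limit inside the container.
                "path": "memory_limit",
                "resourceFieldRef": {
                    "containerName": "client-container",
                    "resource": "limits.memory",
                },
            }]},
        }],
    },
}
```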
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:37:51.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jul  5 11:37:59.626: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  5 11:37:59.631: INFO: Pod pod-with-poststart-http-hook still exists
Jul  5 11:38:01.631: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  5 11:38:01.635: INFO: Pod pod-with-poststart-http-hook still exists
Jul  5 11:38:03.631: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  5 11:38:03.636: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:38:03.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-nn57w" for this suite.
Jul  5 11:38:29.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:38:29.712: INFO: namespace: e2e-tests-container-lifecycle-hook-nn57w, resource: bindings, ignored listing per whitelist
Jul  5 11:38:29.725: INFO: namespace e2e-tests-container-lifecycle-hook-nn57w deletion completed in 26.08513735s

• [SLOW TEST:38.322 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
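The postStart HTTP hook above makes the kubelet issue an HTTP GET immediately after the container starts, against the handler pod the test created earlier. A sketch of the pod shape — the pod name matches the log, while the image and the hook's host/path/port are placeholders standing in for the handler pod's address:

```python
# Sketch of a pod with a postStart httpGet lifecycle hook.
# Host/path/port are placeholders for the test's handler pod.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-with-poststart-http-hook"},
    "spec": {
        "containers": [{
            "name": "main",
            "image": "example/pause:1.0",
            "lifecycle": {
                # The kubelet fires this GET right after container start;
                # the test then checks the handler observed the request.
                "postStart": {
                    "httpGet": {"host": "10.0.0.1", "path": "/echo", "port": 8080}
                }
            },
        }]
    },
}
```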
SSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:38:29.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jul  5 11:38:30.181: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul  5 11:38:30.188: INFO: Waiting for terminating namespaces to be deleted...
Jul  5 11:38:30.190: INFO: 
Logging pods the kubelet thinks are on node hunter-worker before test
Jul  5 11:38:30.195: INFO: kindnet-mcn92 from kube-system started at 2020-07-04 07:47:46 +0000 UTC (1 container statuses recorded)
Jul  5 11:38:30.195: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  5 11:38:30.195: INFO: kube-proxy-cqbm8 from kube-system started at 2020-07-04 07:47:44 +0000 UTC (1 container statuses recorded)
Jul  5 11:38:30.195: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  5 11:38:30.195: INFO: 
Logging pods the kubelet thinks are on node hunter-worker2 before test
Jul  5 11:38:30.202: INFO: coredns-54ff9cd656-mgg2q from kube-system started at 2020-07-04 07:48:14 +0000 UTC (1 container statuses recorded)
Jul  5 11:38:30.202: INFO: 	Container coredns ready: true, restart count 0
Jul  5 11:38:30.202: INFO: coredns-54ff9cd656-l7q92 from kube-system started at 2020-07-04 07:48:14 +0000 UTC (1 container statuses recorded)
Jul  5 11:38:30.202: INFO: 	Container coredns ready: true, restart count 0
Jul  5 11:38:30.202: INFO: kube-proxy-52vr2 from kube-system started at 2020-07-04 07:47:44 +0000 UTC (1 container statuses recorded)
Jul  5 11:38:30.202: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  5 11:38:30.202: INFO: kindnet-rll2b from kube-system started at 2020-07-04 07:47:46 +0000 UTC (1 container statuses recorded)
Jul  5 11:38:30.202: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  5 11:38:30.202: INFO: local-path-provisioner-674595c7-cvgpb from local-path-storage started at 2020-07-04 07:48:14 +0000 UTC (1 container statuses recorded)
Jul  5 11:38:30.202: INFO: 	Container local-path-provisioner ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-0f06dcc8-beb4-11ea-9e48-0242ac110017 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-0f06dcc8-beb4-11ea-9e48-0242ac110017 off the node hunter-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-0f06dcc8-beb4-11ea-9e48-0242ac110017
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:38:40.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-b6cxc" for this suite.
Jul  5 11:38:58.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:38:58.942: INFO: namespace: e2e-tests-sched-pred-b6cxc, resource: bindings, ignored listing per whitelist
Jul  5 11:38:58.959: INFO: namespace e2e-tests-sched-pred-b6cxc deletion completed in 18.119363565s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:29.233 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
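The scheduling predicate validated above is simple set containment: a pod's `nodeSelector` matches a node only if every key/value pair is present in that node's labels. A sketch of that check, using a label key patterned after the log's `kubernetes.io/e2e-...` value of "42" (the exact key here is illustrative):

```python
# Sketch of nodeSelector matching: every selector pair must appear
# verbatim in the node's labels. The label key is hypothetical.
node_labels = {"kubernetes.io/e2e-example": "42"}        # applied to the node
pod_node_selector = {"kubernetes.io/e2e-example": "42"}  # pod.spec.nodeSelector

def selector_matches(selector: dict, labels: dict) -> bool:
    """True iff every key/value pair in the selector is on the node."""
    return all(labels.get(k) == v for k, v in selector.items())

assert selector_matches(pod_node_selector, node_labels)
assert not selector_matches({"other-key": "42"}, node_labels)
```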
S
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:38:58.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  5 11:38:59.575: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jul  5 11:39:04.845: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul  5 11:39:04.845: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jul  5 11:39:06.850: INFO: Creating deployment "test-rollover-deployment"
Jul  5 11:39:06.860: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jul  5 11:39:08.867: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jul  5 11:39:08.874: INFO: Ensure that both replica sets have 1 created replica
Jul  5 11:39:08.879: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jul  5 11:39:08.887: INFO: Updating deployment test-rollover-deployment
Jul  5 11:39:08.887: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jul  5 11:39:11.055: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jul  5 11:39:11.085: INFO: Make sure deployment "test-rollover-deployment" is complete
Jul  5 11:39:11.091: INFO: all replica sets need to contain the pod-template-hash label
Jul  5 11:39:11.091: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729545947, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729545947, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729545949, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729545946, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  5 11:39:13.098: INFO: all replica sets need to contain the pod-template-hash label
Jul  5 11:39:13.099: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729545947, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729545947, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729545949, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729545946, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  5 11:39:15.099: INFO: all replica sets need to contain the pod-template-hash label
Jul  5 11:39:15.099: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729545947, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729545947, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729545953, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729545946, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  5 11:39:17.100: INFO: all replica sets need to contain the pod-template-hash label
Jul  5 11:39:17.101: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729545947, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729545947, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729545953, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729545946, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  5 11:39:19.098: INFO: all replica sets need to contain the pod-template-hash label
Jul  5 11:39:19.099: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729545947, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729545947, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729545953, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729545946, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  5 11:39:21.099: INFO: all replica sets need to contain the pod-template-hash label
Jul  5 11:39:21.099: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729545947, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729545947, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729545953, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729545946, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  5 11:39:23.099: INFO: all replica sets need to contain the pod-template-hash label
Jul  5 11:39:23.099: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729545947, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729545947, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729545953, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729545946, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  5 11:39:25.099: INFO: 
Jul  5 11:39:25.099: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jul  5 11:39:25.107: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-wsgph,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wsgph/deployments/test-rollover-deployment,UID:2262d7f2-beb4-11ea-a300-0242ac110004,ResourceVersion:228760,Generation:2,CreationTimestamp:2020-07-05 11:39:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-07-05 11:39:07 +0000 UTC 2020-07-05 11:39:07 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-07-05 11:39:23 +0000 UTC 2020-07-05 11:39:06 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jul  5 11:39:25.110: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-wsgph,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wsgph/replicasets/test-rollover-deployment-5b8479fdb6,UID:2399a656-beb4-11ea-a300-0242ac110004,ResourceVersion:228751,Generation:2,CreationTimestamp:2020-07-05 11:39:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2262d7f2-beb4-11ea-a300-0242ac110004 0xc002527177 0xc002527178}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jul  5 11:39:25.110: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jul  5 11:39:25.110: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-wsgph,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wsgph/replicasets/test-rollover-controller,UID:1deeb615-beb4-11ea-a300-0242ac110004,ResourceVersion:228759,Generation:2,CreationTimestamp:2020-07-05 11:38:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2262d7f2-beb4-11ea-a300-0242ac110004 0xc002526fe7 0xc002526fe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul  5 11:39:25.110: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-wsgph,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wsgph/replicasets/test-rollover-deployment-58494b7559,UID:2265a684-beb4-11ea-a300-0242ac110004,ResourceVersion:228715,Generation:2,CreationTimestamp:2020-07-05 11:39:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2262d7f2-beb4-11ea-a300-0242ac110004 0xc0025270a7 0xc0025270a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul  5 11:39:25.114: INFO: Pod "test-rollover-deployment-5b8479fdb6-jr48z" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-jr48z,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-wsgph,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wsgph/pods/test-rollover-deployment-5b8479fdb6-jr48z,UID:23b20da7-beb4-11ea-a300-0242ac110004,ResourceVersion:228729,Generation:0,CreationTimestamp:2020-07-05 11:39:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 2399a656-beb4-11ea-a300-0242ac110004 0xc00224e6b7 0xc00224e6b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p8hjs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p8hjs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-p8hjs true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00224e730} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00224e750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 11:39:09 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 11:39:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 11:39:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 11:39:09 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.2.177,StartTime:2020-07-05 11:39:09 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-07-05 11:39:12 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://2255b41cc4c86bdefc8f3e0ca8b8234fe0ba13211d6f4ae66fc88414e95e890f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:39:25.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-wsgph" for this suite.
Jul  5 11:39:33.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:39:33.174: INFO: namespace: e2e-tests-deployment-wsgph, resource: bindings, ignored listing per whitelist
Jul  5 11:39:33.209: INFO: namespace e2e-tests-deployment-wsgph deletion completed in 8.092483097s

• [SLOW TEST:34.250 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
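The rollover spec above creates a Deployment, updates its pod template (the dumped ReplicaSets show a `redis-slave` image with a nonexistent tag being rolled over to `redis:1.0`), and verifies the old ReplicaSet scales to 0 while the new one becomes available. A minimal Deployment of the shape the test builds might look like this; names and images are taken from the log, but the exact spec is a sketch, not the test's literal fixture:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10        # matches MinReadySeconds:10 in the dumped new ReplicaSet
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```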
S
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:39:33.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-n27xq
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-n27xq to expose endpoints map[]
Jul  5 11:39:33.454: INFO: Get endpoints failed (19.289818ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jul  5 11:39:34.468: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-n27xq exposes endpoints map[] (1.033284337s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-n27xq
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-n27xq to expose endpoints map[pod1:[80]]
Jul  5 11:39:37.518: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-n27xq exposes endpoints map[pod1:[80]] (3.043930518s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-n27xq
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-n27xq to expose endpoints map[pod1:[80] pod2:[80]]
Jul  5 11:39:41.599: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-n27xq exposes endpoints map[pod1:[80] pod2:[80]] (4.076481307s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-n27xq
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-n27xq to expose endpoints map[pod2:[80]]
Jul  5 11:39:42.647: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-n27xq exposes endpoints map[pod2:[80]] (1.044226787s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-n27xq
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-n27xq to expose endpoints map[]
Jul  5 11:39:43.678: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-n27xq exposes endpoints map[] (1.025413947s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:39:43.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-n27xq" for this suite.
Jul  5 11:40:05.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:40:05.999: INFO: namespace: e2e-tests-services-n27xq, resource: bindings, ignored listing per whitelist
Jul  5 11:40:06.023: INFO: namespace e2e-tests-services-n27xq deletion completed in 22.100093793s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:32.813 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
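The Services spec above creates `endpoint-test2`, then adds and deletes `pod1` and `pod2` while polling the service's Endpoints object, expecting the map to go `{} → {pod1:[80]} → {pod1:[80] pod2:[80]} → {pod2:[80]} → {}`. A sketch of that shape (the selector label is hypothetical; the log does not show the actual selector):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    name: endpoint-test2     # hypothetical label; pods carrying it join the endpoints
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    name: endpoint-test2     # matching label puts this pod into the service's endpoints
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    ports:
    - containerPort: 80
```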
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:40:06.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-45bd59ec-beb4-11ea-9e48-0242ac110017
STEP: Creating a pod to test consume secrets
Jul  5 11:40:06.182: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-45bddc97-beb4-11ea-9e48-0242ac110017" in namespace "e2e-tests-projected-lczpw" to be "success or failure"
Jul  5 11:40:06.185: INFO: Pod "pod-projected-secrets-45bddc97-beb4-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.501625ms
Jul  5 11:40:08.247: INFO: Pod "pod-projected-secrets-45bddc97-beb4-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065690369s
Jul  5 11:40:10.251: INFO: Pod "pod-projected-secrets-45bddc97-beb4-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069017737s
STEP: Saw pod success
Jul  5 11:40:10.251: INFO: Pod "pod-projected-secrets-45bddc97-beb4-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 11:40:10.253: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-45bddc97-beb4-11ea-9e48-0242ac110017 container projected-secret-volume-test: 
STEP: delete the pod
Jul  5 11:40:10.291: INFO: Waiting for pod pod-projected-secrets-45bddc97-beb4-11ea-9e48-0242ac110017 to disappear
Jul  5 11:40:10.299: INFO: Pod pod-projected-secrets-45bddc97-beb4-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:40:10.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lczpw" for this suite.
Jul  5 11:40:16.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:40:16.329: INFO: namespace: e2e-tests-projected-lczpw, resource: bindings, ignored listing per whitelist
Jul  5 11:40:16.413: INFO: namespace e2e-tests-projected-lczpw deletion completed in 6.110065725s

• [SLOW TEST:10.390 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
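The projected-secret spec above mounts a secret through a `projected` volume and reads it back from the container. A sketch of the manifest shape, assuming a generic busybox reader in place of the e2e suite's own mounttest image (the secret name in the log carries a generated UID suffix, omitted here):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox                       # hypothetical stand-in for the e2e test image
    command: ["cat", "/etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test    # generated name abbreviated
```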
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:40:16.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:40:20.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-6wt2q" for this suite.
Jul  5 11:40:26.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:40:26.734: INFO: namespace: e2e-tests-emptydir-wrapper-6wt2q, resource: bindings, ignored listing per whitelist
Jul  5 11:40:26.796: INFO: namespace e2e-tests-emptydir-wrapper-6wt2q deletion completed in 6.113056392s

• [SLOW TEST:10.383 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
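The wrapper-volume spec above checks that two "wrapped" volume types (volume plugins implemented on top of an emptyDir internally, such as secret and configMap volumes) can coexist in one pod without conflicting. A sketch of that shape, with hypothetical resource names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-and-configmap
spec:
  containers:
  - name: secret-test
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapped-volume-secret    # hypothetical name
  - name: configmap-volume
    configMap:
      name: wrapped-volume-configmap       # hypothetical name
```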
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:40:26.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  5 11:40:26.912: INFO: Waiting up to 5m0s for pod "downwardapi-volume-521a2ed7-beb4-11ea-9e48-0242ac110017" in namespace "e2e-tests-projected-89zmf" to be "success or failure"
Jul  5 11:40:26.931: INFO: Pod "downwardapi-volume-521a2ed7-beb4-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 19.007463ms
Jul  5 11:40:28.935: INFO: Pod "downwardapi-volume-521a2ed7-beb4-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023497112s
Jul  5 11:40:30.939: INFO: Pod "downwardapi-volume-521a2ed7-beb4-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027437481s
STEP: Saw pod success
Jul  5 11:40:30.939: INFO: Pod "downwardapi-volume-521a2ed7-beb4-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 11:40:30.942: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-521a2ed7-beb4-11ea-9e48-0242ac110017 container client-container: 
STEP: delete the pod
Jul  5 11:40:30.981: INFO: Waiting for pod downwardapi-volume-521a2ed7-beb4-11ea-9e48-0242ac110017 to disappear
Jul  5 11:40:31.019: INFO: Pod downwardapi-volume-521a2ed7-beb4-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:40:31.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-89zmf" for this suite.
Jul  5 11:40:37.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:40:37.095: INFO: namespace: e2e-tests-projected-89zmf, resource: bindings, ignored listing per whitelist
Jul  5 11:40:37.110: INFO: namespace e2e-tests-projected-89zmf deletion completed in 6.087950814s

• [SLOW TEST:10.313 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
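The downward API spec above projects the pod's own name into a file via a `downwardAPI` volume and has the `client-container` print it. A sketch of the manifest shape, with busybox standing in for the e2e mounttest image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox               # hypothetical stand-in for the e2e test image
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # resolves to the pod's own name
```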
SS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:40:37.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Jul  5 11:40:37.211: INFO: Waiting up to 5m0s for pod "client-containers-583dbccb-beb4-11ea-9e48-0242ac110017" in namespace "e2e-tests-containers-kh8wd" to be "success or failure"
Jul  5 11:40:37.245: INFO: Pod "client-containers-583dbccb-beb4-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 34.316398ms
Jul  5 11:40:39.313: INFO: Pod "client-containers-583dbccb-beb4-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101831827s
Jul  5 11:40:41.433: INFO: Pod "client-containers-583dbccb-beb4-11ea-9e48-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.221610032s
Jul  5 11:40:43.437: INFO: Pod "client-containers-583dbccb-beb4-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.226180127s
STEP: Saw pod success
Jul  5 11:40:43.437: INFO: Pod "client-containers-583dbccb-beb4-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 11:40:43.440: INFO: Trying to get logs from node hunter-worker2 pod client-containers-583dbccb-beb4-11ea-9e48-0242ac110017 container test-container: 
STEP: delete the pod
Jul  5 11:40:43.542: INFO: Waiting for pod client-containers-583dbccb-beb4-11ea-9e48-0242ac110017 to disappear
Jul  5 11:40:43.589: INFO: Pod client-containers-583dbccb-beb4-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:40:43.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-kh8wd" for this suite.
Jul  5 11:40:49.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:40:49.686: INFO: namespace: e2e-tests-containers-kh8wd, resource: bindings, ignored listing per whitelist
Jul  5 11:40:49.694: INFO: namespace e2e-tests-containers-kh8wd deletion completed in 6.100436672s

• [SLOW TEST:12.583 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
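The Docker Containers spec above verifies that a pod's `command` field replaces the image's ENTRYPOINT. A sketch of the shape, using busybox rather than the suite's entrypoint-tester image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-override
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                      # hypothetical stand-in for the e2e test image
    command: ["/bin/echo"]              # `command` replaces the image's ENTRYPOINT
    args: ["override", "arguments"]     # `args` replaces the image's CMD
```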
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:40:49.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Jul  5 11:40:49.816: INFO: Waiting up to 5m0s for pod "var-expansion-5fc14554-beb4-11ea-9e48-0242ac110017" in namespace "e2e-tests-var-expansion-rbw4z" to be "success or failure"
Jul  5 11:40:49.841: INFO: Pod "var-expansion-5fc14554-beb4-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 24.996029ms
Jul  5 11:40:51.845: INFO: Pod "var-expansion-5fc14554-beb4-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028923923s
Jul  5 11:40:53.848: INFO: Pod "var-expansion-5fc14554-beb4-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032148941s
STEP: Saw pod success
Jul  5 11:40:53.848: INFO: Pod "var-expansion-5fc14554-beb4-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 11:40:53.851: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-5fc14554-beb4-11ea-9e48-0242ac110017 container dapi-container: 
STEP: delete the pod
Jul  5 11:40:53.943: INFO: Waiting for pod var-expansion-5fc14554-beb4-11ea-9e48-0242ac110017 to disappear
Jul  5 11:40:53.952: INFO: Pod var-expansion-5fc14554-beb4-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:40:53.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-rbw4z" for this suite.
Jul  5 11:40:59.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:40:59.981: INFO: namespace: e2e-tests-var-expansion-rbw4z, resource: bindings, ignored listing per whitelist
Jul  5 11:41:00.042: INFO: namespace e2e-tests-var-expansion-rbw4z deletion completed in 6.086029016s

• [SLOW TEST:10.348 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
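The variable-expansion spec above checks that `$(VAR)` references in a container's `args` are substituted from the container's environment by the kubelet (not by a shell). A minimal sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-test
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                 # hypothetical stand-in for the e2e test image
    env:
    - name: TEST_VAR
      value: "test-value"
    command: ["sh", "-c"]
    args: ["echo $(TEST_VAR)"]     # expanded by Kubernetes before the shell runs
```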
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:41:00.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jul  5 11:41:00.144: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:41:06.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-r67mf" for this suite.
Jul  5 11:41:12.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:41:12.444: INFO: namespace: e2e-tests-init-container-r67mf, resource: bindings, ignored listing per whitelist
Jul  5 11:41:12.514: INFO: namespace e2e-tests-init-container-r67mf deletion completed in 6.109462708s

• [SLOW TEST:12.472 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
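The init-container spec above relies on the rule that init containers run to completion before app containers start, and that on a `restartPolicy: Never` pod a failed init container fails the whole pod. A sketch of that fixture shape, with hypothetical names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-fail
spec:
  restartPolicy: Never          # a failed init container marks the pod Failed
  initContainers:
  - name: init1
    image: busybox
    command: ["/bin/false"]     # exits non-zero, so run1 never starts
  containers:
  - name: run1
    image: busybox
    command: ["/bin/true"]
```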
S
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:41:12.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  5 11:41:12.669: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/: 
alternatives.log
containers/

[identical directory listing repeated for each of the remaining proxy requests; the rest of this spec and the header of the following "[k8s.io] Probing container" spec are truncated in the captured log]
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-9p6qs
Jul  5 11:41:22.975: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-9p6qs
STEP: checking the pod's current state and verifying that restartCount is present
Jul  5 11:41:22.977: INFO: Initial restart count of pod liveness-http is 0
Jul  5 11:41:41.017: INFO: Restart count of pod e2e-tests-container-probe-9p6qs/liveness-http is now 1 (18.039927378s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:41:41.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-9p6qs" for this suite.
Jul  5 11:41:47.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:41:47.090: INFO: namespace: e2e-tests-container-probe-9p6qs, resource: bindings, ignored listing per whitelist
Jul  5 11:41:47.142: INFO: namespace e2e-tests-container-probe-9p6qs deletion completed in 6.091160028s

• [SLOW TEST:28.314 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
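The probe spec above starts `liveness-http` with an HTTP GET probe against `/healthz` and waits for `restartCount` to reach 1 once the server begins failing the probe. A sketch of the pod shape, assuming the documented liveness test server (which serves `/healthz` successfully for a while and then returns errors):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness    # assumed: test server that eventually fails /healthz
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      failureThreshold: 1         # first failed probe triggers a restart
```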
SSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:41:47.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-81fe84b5-beb4-11ea-9e48-0242ac110017
STEP: Creating secret with name s-test-opt-upd-81fe852c-beb4-11ea-9e48-0242ac110017
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-81fe84b5-beb4-11ea-9e48-0242ac110017
STEP: Updating secret s-test-opt-upd-81fe852c-beb4-11ea-9e48-0242ac110017
STEP: Creating secret with name s-test-opt-create-81fe8555-beb4-11ea-9e48-0242ac110017
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:41:55.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-xqgds" for this suite.
Jul  5 11:42:17.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:42:17.543: INFO: namespace: e2e-tests-secrets-xqgds, resource: bindings, ignored listing per whitelist
Jul  5 11:42:17.592: INFO: namespace e2e-tests-secrets-xqgds deletion completed in 22.114310465s

• [SLOW TEST:30.450 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
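The secrets spec above mounts `optional: true` secret volumes, then deletes one secret, updates another, and creates a third, expecting the mounted files to converge without the pod failing. A sketch of one such mount; the UID-suffixed names from the log are abbreviated:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-optional
spec:
  containers:
  - name: secret-volume-test
    image: busybox                   # hypothetical stand-in for the e2e test image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: opt-del
      mountPath: /etc/secret-volumes/delete
  volumes:
  - name: opt-del
    secret:
      secretName: s-test-opt-del     # deleted mid-test
      optional: true                 # optional, so the pod keeps running after deletion
```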
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:42:17.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-9424a10d-beb4-11ea-9e48-0242ac110017
STEP: Creating configMap with name cm-test-opt-upd-9424a176-beb4-11ea-9e48-0242ac110017
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-9424a10d-beb4-11ea-9e48-0242ac110017
STEP: Updating configmap cm-test-opt-upd-9424a176-beb4-11ea-9e48-0242ac110017
STEP: Creating configMap with name cm-test-opt-create-9424a1ad-beb4-11ea-9e48-0242ac110017
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:43:30.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-zmgm8" for this suite.
Jul  5 11:43:54.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:43:54.228: INFO: namespace: e2e-tests-configmap-zmgm8, resource: bindings, ignored listing per whitelist
Jul  5 11:43:54.233: INFO: namespace e2e-tests-configmap-zmgm8 deletion completed in 24.096626991s

• [SLOW TEST:96.640 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
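The ConfigMap variant of the same test relies on the analogous `optional` field on a configMap volume source. A sketch of just the volume stanza (illustrative names):

```yaml
# ConfigMap analogue of the optional-source pattern.
volumes:
- name: opt-config
  configMap:
    name: cm-test-opt-upd  # hypothetical; the log uses generated names
    optional: true         # a missing ConfigMap does not block pod startup
```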
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:43:54.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  5 11:43:54.348: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cdbc3885-beb4-11ea-9e48-0242ac110017" in namespace "e2e-tests-projected-hkxkg" to be "success or failure"
Jul  5 11:43:54.398: INFO: Pod "downwardapi-volume-cdbc3885-beb4-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 50.568524ms
Jul  5 11:43:56.402: INFO: Pod "downwardapi-volume-cdbc3885-beb4-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054400373s
Jul  5 11:43:58.406: INFO: Pod "downwardapi-volume-cdbc3885-beb4-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057898447s
STEP: Saw pod success
Jul  5 11:43:58.406: INFO: Pod "downwardapi-volume-cdbc3885-beb4-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 11:43:58.409: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-cdbc3885-beb4-11ea-9e48-0242ac110017 container client-container: 
STEP: delete the pod
Jul  5 11:43:58.425: INFO: Waiting for pod downwardapi-volume-cdbc3885-beb4-11ea-9e48-0242ac110017 to disappear
Jul  5 11:43:58.430: INFO: Pod downwardapi-volume-cdbc3885-beb4-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:43:58.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hkxkg" for this suite.
Jul  5 11:44:04.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:44:04.503: INFO: namespace: e2e-tests-projected-hkxkg, resource: bindings, ignored listing per whitelist
Jul  5 11:44:04.523: INFO: namespace e2e-tests-projected-hkxkg deletion completed in 6.089614319s

• [SLOW TEST:10.289 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
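The projected downward API test checks that when a container declares no CPU limit, the projected file falls back to the node's allocatable CPU. A sketch of the volume shape involved, under assumed names:

```yaml
# Projected downward API volume exposing the container's CPU limit; with no
# limit set on the container, the file reports node allocatable CPU instead.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example  # illustrative name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```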
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:44:04.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0705 11:44:44.937458       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  5 11:44:44.937: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:44:44.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-zz76r" for this suite.
Jul  5 11:44:56.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:44:56.965: INFO: namespace: e2e-tests-gc-zz76r, resource: bindings, ignored listing per whitelist
Jul  5 11:44:57.088: INFO: namespace e2e-tests-gc-zz76r deletion completed in 12.147384702s

• [SLOW TEST:52.565 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
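The garbage collector test's "delete the rc" step issues a delete whose options request orphaning, so the RC's pods survive for the 30-second observation window. A sketch of the DeleteOptions body that expresses this (v1.13-era API; the kubectl equivalent at the time was `--cascade=false`):

```yaml
# Request body asking the API server to orphan dependents on delete.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan
```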
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:44:57.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-nhzph
Jul  5 11:45:01.368: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-nhzph
STEP: checking the pod's current state and verifying that restartCount is present
Jul  5 11:45:01.370: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:49:01.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-nhzph" for this suite.
Jul  5 11:49:07.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:49:07.829: INFO: namespace: e2e-tests-container-probe-nhzph, resource: bindings, ignored listing per whitelist
Jul  5 11:49:07.867: INFO: namespace e2e-tests-container-probe-nhzph deletion completed in 6.106279301s

• [SLOW TEST:250.779 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
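This probe test creates `liveness-exec` with an exec probe that keeps succeeding, and verifies restartCount stays at 0 over the observation window. A sketch of such a pod, assuming illustrative image and timings (the real test uses its own):

```yaml
# Exec liveness probe that always succeeds: /tmp/health exists for the
# container's lifetime, so "cat /tmp/health" passes and no restart occurs.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
```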
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:49:07.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  5 11:49:07.985: INFO: Waiting up to 5m0s for pod "downwardapi-volume-88ad0363-beb5-11ea-9e48-0242ac110017" in namespace "e2e-tests-projected-gv2qc" to be "success or failure"
Jul  5 11:49:07.988: INFO: Pod "downwardapi-volume-88ad0363-beb5-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.486142ms
Jul  5 11:49:09.991: INFO: Pod "downwardapi-volume-88ad0363-beb5-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00622719s
Jul  5 11:49:11.995: INFO: Pod "downwardapi-volume-88ad0363-beb5-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010234782s
STEP: Saw pod success
Jul  5 11:49:11.996: INFO: Pod "downwardapi-volume-88ad0363-beb5-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 11:49:11.999: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-88ad0363-beb5-11ea-9e48-0242ac110017 container client-container: 
STEP: delete the pod
Jul  5 11:49:12.079: INFO: Waiting for pod downwardapi-volume-88ad0363-beb5-11ea-9e48-0242ac110017 to disappear
Jul  5 11:49:12.092: INFO: Pod downwardapi-volume-88ad0363-beb5-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:49:12.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gv2qc" for this suite.
Jul  5 11:49:18.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:49:18.194: INFO: namespace: e2e-tests-projected-gv2qc, resource: bindings, ignored listing per whitelist
Jul  5 11:49:18.215: INFO: namespace e2e-tests-projected-gv2qc deletion completed in 6.119068869s

• [SLOW TEST:10.348 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:49:18.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  5 11:49:18.392: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Jul  5 11:49:18.397: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-xjzdv/daemonsets","resourceVersion":"230529"},"items":null}

Jul  5 11:49:18.399: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-xjzdv/pods","resourceVersion":"230529"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:49:18.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-xjzdv" for this suite.
Jul  5 11:49:24.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:49:24.480: INFO: namespace: e2e-tests-daemonsets-xjzdv, resource: bindings, ignored listing per whitelist
Jul  5 11:49:24.529: INFO: namespace e2e-tests-daemonsets-xjzdv deletion completed in 6.117502399s

S [SKIPPING] [6.314 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Jul  5 11:49:18.392: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:49:24.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  5 11:49:48.663: INFO: Container started at 2020-07-05 11:49:27 +0000 UTC, pod became ready at 2020-07-05 11:49:47 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:49:48.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-6gxkh" for this suite.
Jul  5 11:50:10.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:50:10.784: INFO: namespace: e2e-tests-container-probe-6gxkh, resource: bindings, ignored listing per whitelist
Jul  5 11:50:10.786: INFO: namespace e2e-tests-container-probe-6gxkh deletion completed in 22.095000978s

• [SLOW TEST:46.256 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
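The readiness test observes a ~20 s gap between container start (11:49:27) and Ready (11:49:47), driven by the probe's initial delay. A sketch of the probe stanza that produces this behavior (the check command is hypothetical):

```yaml
# Readiness probe with a long initial delay: the container starts immediately
# but the pod is not marked Ready until the delay elapses and the probe passes.
readinessProbe:
  exec:
    command: ["cat", "/tmp/ready"]  # hypothetical check
  initialDelaySeconds: 20
  periodSeconds: 5
```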
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:50:10.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-ae2bc6d7-beb5-11ea-9e48-0242ac110017
STEP: Creating a pod to test consume configMaps
Jul  5 11:50:10.918: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ae324d6d-beb5-11ea-9e48-0242ac110017" in namespace "e2e-tests-projected-fpl4r" to be "success or failure"
Jul  5 11:50:10.920: INFO: Pod "pod-projected-configmaps-ae324d6d-beb5-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.797455ms
Jul  5 11:50:12.925: INFO: Pod "pod-projected-configmaps-ae324d6d-beb5-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007141172s
Jul  5 11:50:14.928: INFO: Pod "pod-projected-configmaps-ae324d6d-beb5-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010626156s
STEP: Saw pod success
Jul  5 11:50:14.928: INFO: Pod "pod-projected-configmaps-ae324d6d-beb5-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 11:50:14.930: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-ae324d6d-beb5-11ea-9e48-0242ac110017 container projected-configmap-volume-test: 
STEP: delete the pod
Jul  5 11:50:14.944: INFO: Waiting for pod pod-projected-configmaps-ae324d6d-beb5-11ea-9e48-0242ac110017 to disappear
Jul  5 11:50:14.950: INFO: Pod pod-projected-configmaps-ae324d6d-beb5-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:50:14.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-fpl4r" for this suite.
Jul  5 11:50:20.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:50:21.018: INFO: namespace: e2e-tests-projected-fpl4r, resource: bindings, ignored listing per whitelist
Jul  5 11:50:21.043: INFO: namespace e2e-tests-projected-fpl4r deletion completed in 6.090291516s

• [SLOW TEST:10.257 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
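The defaultMode test projects a ConfigMap and asserts the permission bits on the projected files. In a projected volume, `defaultMode` sits on the projected source itself; a sketch with an assumed mode:

```yaml
# defaultMode controls the permission bits of projected keys; the test
# verifies the mounted file carries the requested mode.
volumes:
- name: config-vol
  projected:
    defaultMode: 0400  # illustrative; the test picks its own mode
    sources:
    - configMap:
        name: projected-configmap-test-volume  # illustrative name
```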
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:50:21.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Jul  5 11:50:21.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jul  5 11:50:23.537: INFO: stderr: ""
Jul  5 11:50:23.537: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32775\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32775/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:50:23.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5dtqj" for this suite.
Jul  5 11:50:29.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:50:29.587: INFO: namespace: e2e-tests-kubectl-5dtqj, resource: bindings, ignored listing per whitelist
Jul  5 11:50:29.638: INFO: namespace e2e-tests-kubectl-5dtqj deletion completed in 6.096042944s

• [SLOW TEST:8.595 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:50:29.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-92wgr
Jul  5 11:50:33.853: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-92wgr
STEP: checking the pod's current state and verifying that restartCount is present
Jul  5 11:50:33.856: INFO: Initial restart count of pod liveness-exec is 0
Jul  5 11:51:24.062: INFO: Restart count of pod e2e-tests-container-probe-92wgr/liveness-exec is now 1 (50.206075685s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:51:24.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-92wgr" for this suite.
Jul  5 11:51:30.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:51:30.243: INFO: namespace: e2e-tests-container-probe-92wgr, resource: bindings, ignored listing per whitelist
Jul  5 11:51:30.269: INFO: namespace e2e-tests-container-probe-92wgr deletion completed in 6.094160215s

• [SLOW TEST:60.631 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
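The restart counterpart makes the same exec probe start failing partway through, and the log confirms restartCount reaching 1 after ~50 s. A sketch of the failing variant, with assumed timings:

```yaml
# The probe succeeds while /tmp/health exists, then fails after the file is
# removed, so the kubelet restarts the container.
args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
livenessProbe:
  exec:
    command: ["cat", "/tmp/health"]
  initialDelaySeconds: 5
  periodSeconds: 5
```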
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:51:30.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul  5 11:51:30.378: INFO: Waiting up to 5m0s for pod "pod-dd8f0ca5-beb5-11ea-9e48-0242ac110017" in namespace "e2e-tests-emptydir-wxwnq" to be "success or failure"
Jul  5 11:51:30.690: INFO: Pod "pod-dd8f0ca5-beb5-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 312.166506ms
Jul  5 11:51:32.694: INFO: Pod "pod-dd8f0ca5-beb5-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316523529s
Jul  5 11:51:34.711: INFO: Pod "pod-dd8f0ca5-beb5-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.33374867s
STEP: Saw pod success
Jul  5 11:51:34.712: INFO: Pod "pod-dd8f0ca5-beb5-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 11:51:34.714: INFO: Trying to get logs from node hunter-worker2 pod pod-dd8f0ca5-beb5-11ea-9e48-0242ac110017 container test-container: 
STEP: delete the pod
Jul  5 11:51:34.811: INFO: Waiting for pod pod-dd8f0ca5-beb5-11ea-9e48-0242ac110017 to disappear
Jul  5 11:51:35.444: INFO: Pod pod-dd8f0ca5-beb5-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:51:35.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wxwnq" for this suite.
Jul  5 11:51:41.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:51:41.709: INFO: namespace: e2e-tests-emptydir-wxwnq, resource: bindings, ignored listing per whitelist
Jul  5 11:51:41.731: INFO: namespace e2e-tests-emptydir-wxwnq deletion completed in 6.283323557s

• [SLOW TEST:11.462 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
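The "(root,0666,tmpfs)" case writes a file as root with mode 0666 into a memory-backed emptyDir and verifies the mount type and permissions. The tmpfs backing comes from the volume's medium:

```yaml
# emptyDir backed by tmpfs; contents live in RAM and count against the
# container's memory usage.
volumes:
- name: test-volume
  emptyDir:
    medium: Memory
```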
SSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:51:41.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  5 11:51:42.228: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jul  5 11:51:42.543: INFO: Pod name sample-pod: Found 0 pods out of 1
Jul  5 11:51:47.548: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul  5 11:51:49.559: INFO: Creating deployment "test-rolling-update-deployment"
Jul  5 11:51:49.564: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jul  5 11:51:49.605: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jul  5 11:51:51.614: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jul  5 11:51:51.617: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729546709, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729546709, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729546709, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729546709, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  5 11:51:53.621: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jul  5 11:51:53.630: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-6vwng,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6vwng/deployments/test-rolling-update-deployment,UID:e8ff1351-beb5-11ea-a300-0242ac110004,ResourceVersion:230990,Generation:1,CreationTimestamp:2020-07-05 11:51:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-07-05 11:51:49 +0000 UTC 2020-07-05 11:51:49 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-07-05 11:51:52 +0000 UTC 2020-07-05 11:51:49 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jul  5 11:51:53.633: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-6vwng,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6vwng/replicasets/test-rolling-update-deployment-75db98fb4c,UID:e906c0c2-beb5-11ea-a300-0242ac110004,ResourceVersion:230981,Generation:1,CreationTimestamp:2020-07-05 11:51:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment e8ff1351-beb5-11ea-a300-0242ac110004 0xc002a3f997 0xc002a3f998}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jul  5 11:51:53.633: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jul  5 11:51:53.633: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-6vwng,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6vwng/replicasets/test-rolling-update-controller,UID:e4a0733b-beb5-11ea-a300-0242ac110004,ResourceVersion:230989,Generation:2,CreationTimestamp:2020-07-05 11:51:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment e8ff1351-beb5-11ea-a300-0242ac110004 0xc002a3f737 0xc002a3f738}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul  5 11:51:53.636: INFO: Pod "test-rolling-update-deployment-75db98fb4c-4nrxs" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-4nrxs,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-6vwng,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6vwng/pods/test-rolling-update-deployment-75db98fb4c-4nrxs,UID:e908cc28-beb5-11ea-a300-0242ac110004,ResourceVersion:230980,Generation:0,CreationTimestamp:2020-07-05 11:51:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c e906c0c2-beb5-11ea-a300-0242ac110004 0xc002316547 0xc002316548}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pwz55 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwz55,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-pwz55 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0023165c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023165f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 11:51:49 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 11:51:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 11:51:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 11:51:49 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.2,PodIP:10.244.1.135,StartTime:2020-07-05 11:51:49 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-07-05 11:51:52 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://16412d7dfc82e1169b20592fabc365d3a922cbea5c1629c634f5479ac4fa21a4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:51:53.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-6vwng" for this suite.
Jul  5 11:52:01.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:52:01.709: INFO: namespace: e2e-tests-deployment-6vwng, resource: bindings, ignored listing per whitelist
Jul  5 11:52:01.754: INFO: namespace e2e-tests-deployment-6vwng deletion completed in 8.115280195s

• [SLOW TEST:20.023 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
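Editor's note: the Deployment dumped above uses the default rolling-update strategy (maxSurge 25%, maxUnavailable 25%). A minimal sketch of how those percentages resolve to absolute pod counts — the rounding directions (surge rounds up, unavailable rounds down) follow documented Deployment behavior; the helper name itself is illustrative:

```python
import math

def rolling_update_bounds(replicas: int, max_surge_pct: int, max_unavailable_pct: int):
    """Resolve percentage-based rolling-update bounds to pod counts.

    maxSurge rounds up, maxUnavailable rounds down, which is why a
    1-replica Deployment like the one in this log can briefly run 2 pods
    (old + new) but is never allowed to have 0 ready pods.
    """
    surge = math.ceil(replicas * max_surge_pct / 100)
    unavailable = math.floor(replicas * max_unavailable_pct / 100)
    return surge, unavailable
```

With replicas=1 this yields (1, 0), matching the `max-replicas: 2` annotation on the ReplicaSets above.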
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:52:01.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  5 11:52:01.911: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"f057fa1b-beb5-11ea-a300-0242ac110004", Controller:(*bool)(0xc001597b12), BlockOwnerDeletion:(*bool)(0xc001597b13)}}
Jul  5 11:52:01.922: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"f056cd54-beb5-11ea-a300-0242ac110004", Controller:(*bool)(0xc0015ee64a), BlockOwnerDeletion:(*bool)(0xc0015ee64b)}}
Jul  5 11:52:02.051: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"f0574682-beb5-11ea-a300-0242ac110004", Controller:(*bool)(0xc0015ee7fa), BlockOwnerDeletion:(*bool)(0xc0015ee7fb)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:52:07.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-7w2hp" for this suite.
Jul  5 11:52:13.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:52:13.171: INFO: namespace: e2e-tests-gc-7w2hp, resource: bindings, ignored listing per whitelist
Jul  5 11:52:13.178: INFO: namespace e2e-tests-gc-7w2hp deletion completed in 6.090719728s

• [SLOW TEST:11.423 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
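Editor's note: the ownerReferences printed above form a circle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2). An illustrative sketch — not the actual garbage-collector implementation — of why such a circle cannot block collection: once any member is gone, every remaining member points at a missing owner and is swept in turn:

```python
def sweep_after_delete(owner_of: dict[str, str], deleted: str) -> set[str]:
    """Iteratively remove objects whose sole owner no longer exists.

    owner_of maps each object to its single owner, as in the log's
    ownerReferences. Returns the surviving objects. Model covers only
    the circle members themselves (hypothetical helper, not k8s code).
    """
    alive = set(owner_of) - {deleted}
    changed = True
    while changed:
        changed = False
        for obj in list(alive):
            if owner_of[obj] not in alive:
                alive.remove(obj)
                changed = True
    return alive
```

Deleting any one of the three pods drains the whole cycle, so the dependency circle never deadlocks collection.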
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:52:13.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-f7707ccb-beb5-11ea-9e48-0242ac110017
STEP: Creating configMap with name cm-test-opt-upd-f7707d18-beb5-11ea-9e48-0242ac110017
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-f7707ccb-beb5-11ea-9e48-0242ac110017
STEP: Updating configmap cm-test-opt-upd-f7707d18-beb5-11ea-9e48-0242ac110017
STEP: Creating configMap with name cm-test-opt-create-f7707d38-beb5-11ea-9e48-0242ac110017
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:53:47.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-stlct" for this suite.
Jul  5 11:54:09.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:54:09.147: INFO: namespace: e2e-tests-projected-stlct, resource: bindings, ignored listing per whitelist
Jul  5 11:54:09.181: INFO: namespace e2e-tests-projected-stlct deletion completed in 22.097518608s

• [SLOW TEST:116.003 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
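Editor's note: the test above deletes one optional configMap (`cm-test-opt-del-…`), updates another, and creates a third, then waits for the volume to reflect all three changes. A rough sketch of the optional-source semantics — a missing source marked optional is skipped rather than failing the mount (function and field names here are illustrative, not kubelet code):

```python
def project_sources(sources: list[dict], available: dict[str, dict]) -> dict:
    """Combine projected configMap sources into one file map.

    Each source is {"name": ..., "optional": bool}; `available` maps
    configMap name -> its data. An absent optional source contributes
    nothing; an absent required source is an error.
    """
    files: dict = {}
    for src in sources:
        cm = available.get(src["name"])
        if cm is None:
            if src.get("optional"):
                continue  # optional and missing: silently skipped
            raise KeyError(src["name"])
        files.update(cm)
    return files
```

This is why deleting the `opt-del` configMap empties its keys from the volume instead of breaking the pod.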
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:54:09.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-3c4e0697-beb6-11ea-9e48-0242ac110017
STEP: Creating a pod to test consume configMaps
Jul  5 11:54:09.442: INFO: Waiting up to 5m0s for pod "pod-configmaps-3c4f8a5f-beb6-11ea-9e48-0242ac110017" in namespace "e2e-tests-configmap-gjprd" to be "success or failure"
Jul  5 11:54:09.449: INFO: Pod "pod-configmaps-3c4f8a5f-beb6-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 7.014877ms
Jul  5 11:54:11.528: INFO: Pod "pod-configmaps-3c4f8a5f-beb6-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086457465s
Jul  5 11:54:13.533: INFO: Pod "pod-configmaps-3c4f8a5f-beb6-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091021228s
STEP: Saw pod success
Jul  5 11:54:13.533: INFO: Pod "pod-configmaps-3c4f8a5f-beb6-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 11:54:13.536: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-3c4f8a5f-beb6-11ea-9e48-0242ac110017 container configmap-volume-test: 
STEP: delete the pod
Jul  5 11:54:13.587: INFO: Waiting for pod pod-configmaps-3c4f8a5f-beb6-11ea-9e48-0242ac110017 to disappear
Jul  5 11:54:13.600: INFO: Pod pod-configmaps-3c4f8a5f-beb6-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:54:13.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-gjprd" for this suite.
Jul  5 11:54:19.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:54:19.658: INFO: namespace: e2e-tests-configmap-gjprd, resource: bindings, ignored listing per whitelist
Jul  5 11:54:19.766: INFO: namespace e2e-tests-configmap-gjprd deletion completed in 6.162287253s

• [SLOW TEST:10.584 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
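Editor's note: "mappings and Item mode set" means individual configMap keys are projected to chosen paths with an explicit file mode. A small sketch of that per-item projection (the 0o400 default and the helper name are assumptions for illustration, not the test's actual values):

```python
import os
import stat
import tempfile

def project_key(dir_path: str, path: str, data: str, mode: int = 0o400) -> str:
    """Write one configMap key to its mapped path with the given mode,
    roughly what the kubelet does for items[].path with mode set."""
    dest = os.path.join(dir_path, path)
    with open(dest, "w") as f:
        f.write(data)
    os.chmod(dest, mode)  # apply the item mode after writing
    return dest
```

The e2e pod then simply cats the mapped file and checks both content and mode.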
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:54:19.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-42a72485-beb6-11ea-9e48-0242ac110017
STEP: Creating a pod to test consume secrets
Jul  5 11:54:20.034: INFO: Waiting up to 5m0s for pod "pod-secrets-42ad53b6-beb6-11ea-9e48-0242ac110017" in namespace "e2e-tests-secrets-nv2q8" to be "success or failure"
Jul  5 11:54:20.127: INFO: Pod "pod-secrets-42ad53b6-beb6-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 93.011201ms
Jul  5 11:54:22.131: INFO: Pod "pod-secrets-42ad53b6-beb6-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096819948s
Jul  5 11:54:24.135: INFO: Pod "pod-secrets-42ad53b6-beb6-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101377393s
Jul  5 11:54:26.140: INFO: Pod "pod-secrets-42ad53b6-beb6-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.105820601s
STEP: Saw pod success
Jul  5 11:54:26.140: INFO: Pod "pod-secrets-42ad53b6-beb6-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 11:54:26.143: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-42ad53b6-beb6-11ea-9e48-0242ac110017 container secret-volume-test: 
STEP: delete the pod
Jul  5 11:54:26.254: INFO: Waiting for pod pod-secrets-42ad53b6-beb6-11ea-9e48-0242ac110017 to disappear
Jul  5 11:54:26.260: INFO: Pod pod-secrets-42ad53b6-beb6-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:54:26.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-nv2q8" for this suite.
Jul  5 11:54:32.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:54:32.401: INFO: namespace: e2e-tests-secrets-nv2q8, resource: bindings, ignored listing per whitelist
Jul  5 11:54:32.670: INFO: namespace e2e-tests-secrets-nv2q8 deletion completed in 6.406889017s

• [SLOW TEST:12.904 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:54:32.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-4a72b5b1-beb6-11ea-9e48-0242ac110017
STEP: Creating a pod to test consume configMaps
Jul  5 11:54:33.085: INFO: Waiting up to 5m0s for pod "pod-configmaps-4a734603-beb6-11ea-9e48-0242ac110017" in namespace "e2e-tests-configmap-rvjcn" to be "success or failure"
Jul  5 11:54:33.119: INFO: Pod "pod-configmaps-4a734603-beb6-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 33.927165ms
Jul  5 11:54:35.123: INFO: Pod "pod-configmaps-4a734603-beb6-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038039647s
Jul  5 11:54:37.127: INFO: Pod "pod-configmaps-4a734603-beb6-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042720505s
STEP: Saw pod success
Jul  5 11:54:37.127: INFO: Pod "pod-configmaps-4a734603-beb6-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 11:54:37.130: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-4a734603-beb6-11ea-9e48-0242ac110017 container configmap-volume-test: 
STEP: delete the pod
Jul  5 11:54:37.169: INFO: Waiting for pod pod-configmaps-4a734603-beb6-11ea-9e48-0242ac110017 to disappear
Jul  5 11:54:37.184: INFO: Pod pod-configmaps-4a734603-beb6-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:54:37.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-rvjcn" for this suite.
Jul  5 11:54:43.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:54:43.288: INFO: namespace: e2e-tests-configmap-rvjcn, resource: bindings, ignored listing per whitelist
Jul  5 11:54:43.290: INFO: namespace e2e-tests-configmap-rvjcn deletion completed in 6.102479674s

• [SLOW TEST:10.620 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:54:43.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-tpqsn A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-tpqsn;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-tpqsn A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-tpqsn;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-tpqsn.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-tpqsn.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-tpqsn.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-tpqsn.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-tpqsn.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-tpqsn.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-tpqsn.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-tpqsn.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-tpqsn.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 172.23.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.23.172_udp@PTR;check="$$(dig +tcp +noall +answer +search 172.23.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.23.172_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-tpqsn A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-tpqsn;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-tpqsn A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-tpqsn;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-tpqsn.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-tpqsn.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-tpqsn.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-tpqsn.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-tpqsn.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-tpqsn.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-tpqsn.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-tpqsn.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-tpqsn.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 172.23.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.23.172_udp@PTR;check="$$(dig +tcp +noall +answer +search 172.23.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.23.172_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul  5 11:54:51.635: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:54:51.638: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:54:51.676: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:54:51.679: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:54:51.681: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tpqsn from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:54:51.688: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tpqsn from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:54:51.848: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tpqsn.svc from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:54:51.852: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tpqsn.svc from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:54:51.856: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:54:51.859: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:54:51.876: INFO: Lookups using e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-tpqsn jessie_tcp@dns-test-service.e2e-tests-dns-tpqsn jessie_udp@dns-test-service.e2e-tests-dns-tpqsn.svc jessie_tcp@dns-test-service.e2e-tests-dns-tpqsn.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc]

Jul  5 11:54:56.881: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:54:56.884: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:54:56.921: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:54:56.924: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:54:56.926: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tpqsn from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:54:56.929: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tpqsn from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:54:56.932: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tpqsn.svc from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:54:56.935: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tpqsn.svc from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:54:56.938: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:54:56.940: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:54:56.954: INFO: Lookups using e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-tpqsn jessie_tcp@dns-test-service.e2e-tests-dns-tpqsn jessie_udp@dns-test-service.e2e-tests-dns-tpqsn.svc jessie_tcp@dns-test-service.e2e-tests-dns-tpqsn.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc]

Jul  5 11:55:01.881: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:01.883: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:01.920: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:01.922: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:01.925: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tpqsn from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:01.928: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tpqsn from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:01.931: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tpqsn.svc from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:01.934: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tpqsn.svc from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:01.937: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:01.940: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:01.959: INFO: Lookups using e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-tpqsn jessie_tcp@dns-test-service.e2e-tests-dns-tpqsn jessie_udp@dns-test-service.e2e-tests-dns-tpqsn.svc jessie_tcp@dns-test-service.e2e-tests-dns-tpqsn.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc]

Jul  5 11:55:06.961: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:06.965: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:07.004: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:07.007: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:07.009: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tpqsn from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:07.012: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tpqsn from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:07.019: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tpqsn.svc from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:07.023: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tpqsn.svc from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:07.026: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:07.028: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:07.040: INFO: Lookups using e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-tpqsn jessie_tcp@dns-test-service.e2e-tests-dns-tpqsn jessie_udp@dns-test-service.e2e-tests-dns-tpqsn.svc jessie_tcp@dns-test-service.e2e-tests-dns-tpqsn.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc]

Jul  5 11:55:11.882: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:11.885: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:11.979: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:11.982: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:11.985: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tpqsn from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:11.987: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tpqsn from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:11.990: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tpqsn.svc from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:11.992: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tpqsn.svc from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:11.995: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:11.998: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:12.047: INFO: Lookups using e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-tpqsn jessie_tcp@dns-test-service.e2e-tests-dns-tpqsn jessie_udp@dns-test-service.e2e-tests-dns-tpqsn.svc jessie_tcp@dns-test-service.e2e-tests-dns-tpqsn.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc]

Jul  5 11:55:16.881: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:16.886: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:17.031: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:17.033: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:17.035: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tpqsn from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:17.038: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tpqsn from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:17.041: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tpqsn.svc from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:17.043: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tpqsn.svc from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:17.046: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:17.049: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc from pod e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017: the server could not find the requested resource (get pods dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017)
Jul  5 11:55:17.068: INFO: Lookups using e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-tpqsn jessie_tcp@dns-test-service.e2e-tests-dns-tpqsn jessie_udp@dns-test-service.e2e-tests-dns-tpqsn.svc jessie_tcp@dns-test-service.e2e-tests-dns-tpqsn.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tpqsn.svc]

Jul  5 11:55:21.966: INFO: DNS probes using e2e-tests-dns-tpqsn/dns-test-50a94bbd-beb6-11ea-9e48-0242ac110017 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:55:22.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-tpqsn" for this suite.
Jul  5 11:55:28.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:55:28.477: INFO: namespace: e2e-tests-dns-tpqsn, resource: bindings, ignored listing per whitelist
Jul  5 11:55:28.481: INFO: namespace e2e-tests-dns-tpqsn deletion completed in 6.111143414s

• [SLOW TEST:45.191 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
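The wheezy/jessie probe loops logged above all reduce to one pattern: run `dig` over UDP and then TCP, and write an `OK` marker file only when the answer section is non-empty. A minimal sketch of that marker logic, with the `dig` output stubbed by literal strings (the real probes depend on in-cluster DNS, so this sketch runs anywhere; file names are illustrative):

```shell
#!/bin/sh
# Marker-file logic from the probe loops above. In the e2e test the
# first argument is the output of `dig +noall +answer ...`; here it is
# stubbed so the sketch runs without a resolver.
record_ok() {
  check="$1"; out="$2"
  test -n "$check" && echo OK > "$out"
}

record_ok "10.98.23.172" /tmp/probe_answered            # non-empty answer -> marker written
record_ok ""             /tmp/probe_unanswered || true  # empty answer -> no marker file
cat /tmp/probe_answered                                 # prints OK
```

The test runner then reads the `/results/*` marker files back from the probe pod; a lookup is counted as failed until its marker appears, which is why the same names repeat in the retry blocks above until the final "DNS probes ... succeeded" line.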
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:55:28.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-6b8cc186-beb6-11ea-9e48-0242ac110017
STEP: Creating a pod to test consume configMaps
Jul  5 11:55:28.620: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6b8e99b0-beb6-11ea-9e48-0242ac110017" in namespace "e2e-tests-projected-9q9zw" to be "success or failure"
Jul  5 11:55:28.740: INFO: Pod "pod-projected-configmaps-6b8e99b0-beb6-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 120.413514ms
Jul  5 11:55:30.744: INFO: Pod "pod-projected-configmaps-6b8e99b0-beb6-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124389494s
Jul  5 11:55:32.748: INFO: Pod "pod-projected-configmaps-6b8e99b0-beb6-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.128393221s
STEP: Saw pod success
Jul  5 11:55:32.748: INFO: Pod "pod-projected-configmaps-6b8e99b0-beb6-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 11:55:32.750: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-6b8e99b0-beb6-11ea-9e48-0242ac110017 container projected-configmap-volume-test: 
STEP: delete the pod
Jul  5 11:55:32.910: INFO: Waiting for pod pod-projected-configmaps-6b8e99b0-beb6-11ea-9e48-0242ac110017 to disappear
Jul  5 11:55:32.947: INFO: Pod pod-projected-configmaps-6b8e99b0-beb6-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:55:32.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9q9zw" for this suite.
Jul  5 11:55:38.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:55:39.020: INFO: namespace: e2e-tests-projected-9q9zw, resource: bindings, ignored listing per whitelist
Jul  5 11:55:39.252: INFO: namespace e2e-tests-projected-9q9zw deletion completed in 6.300730249s

• [SLOW TEST:10.770 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:55:39.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-720c6eec-beb6-11ea-9e48-0242ac110017
STEP: Creating a pod to test consume configMaps
Jul  5 11:55:39.551: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7210cde0-beb6-11ea-9e48-0242ac110017" in namespace "e2e-tests-projected-rkpqb" to be "success or failure"
Jul  5 11:55:39.776: INFO: Pod "pod-projected-configmaps-7210cde0-beb6-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 225.682605ms
Jul  5 11:55:41.787: INFO: Pod "pod-projected-configmaps-7210cde0-beb6-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235983842s
Jul  5 11:55:43.791: INFO: Pod "pod-projected-configmaps-7210cde0-beb6-11ea-9e48-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.240269126s
Jul  5 11:55:45.795: INFO: Pod "pod-projected-configmaps-7210cde0-beb6-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.24440528s
STEP: Saw pod success
Jul  5 11:55:45.795: INFO: Pod "pod-projected-configmaps-7210cde0-beb6-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 11:55:45.798: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-7210cde0-beb6-11ea-9e48-0242ac110017 container projected-configmap-volume-test: 
STEP: delete the pod
Jul  5 11:55:45.831: INFO: Waiting for pod pod-projected-configmaps-7210cde0-beb6-11ea-9e48-0242ac110017 to disappear
Jul  5 11:55:45.836: INFO: Pod pod-projected-configmaps-7210cde0-beb6-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:55:45.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rkpqb" for this suite.
Jul  5 11:55:51.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:55:51.959: INFO: namespace: e2e-tests-projected-rkpqb, resource: bindings, ignored listing per whitelist
Jul  5 11:55:52.021: INFO: namespace e2e-tests-projected-rkpqb deletion completed in 6.183082376s

• [SLOW TEST:12.769 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:55:52.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jul  5 11:55:52.132: INFO: Waiting up to 5m0s for pod "downward-api-799160d9-beb6-11ea-9e48-0242ac110017" in namespace "e2e-tests-downward-api-pql8t" to be "success or failure"
Jul  5 11:55:52.164: INFO: Pod "downward-api-799160d9-beb6-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 31.815371ms
Jul  5 11:55:54.204: INFO: Pod "downward-api-799160d9-beb6-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071732707s
Jul  5 11:55:56.208: INFO: Pod "downward-api-799160d9-beb6-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075734219s
STEP: Saw pod success
Jul  5 11:55:56.208: INFO: Pod "downward-api-799160d9-beb6-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 11:55:56.230: INFO: Trying to get logs from node hunter-worker2 pod downward-api-799160d9-beb6-11ea-9e48-0242ac110017 container dapi-container: 
STEP: delete the pod
Jul  5 11:55:56.263: INFO: Waiting for pod downward-api-799160d9-beb6-11ea-9e48-0242ac110017 to disappear
Jul  5 11:55:56.276: INFO: Pod downward-api-799160d9-beb6-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:55:56.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-pql8t" for this suite.
Jul  5 11:56:02.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:56:02.316: INFO: namespace: e2e-tests-downward-api-pql8t, resource: bindings, ignored listing per whitelist
Jul  5 11:56:02.364: INFO: namespace e2e-tests-downward-api-pql8t deletion completed in 6.084645515s

• [SLOW TEST:10.343 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
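The `dapi-container` in the test above only echoes environment variables that the kubelet populates from `resourceFieldRef` entries in the pod spec. A stand-alone sketch of that contract, with hand-set hypothetical values (in the real pod they come from `limits.cpu`, `limits.memory`, `requests.cpu`, and `requests.memory`):

```shell
#!/bin/sh
# Simulates the env-var surface the downward API test inspects.
# In-cluster these variables are injected via resourceFieldRef;
# here they are assigned by hand so the echo step can run anywhere.
CPU_LIMIT=1
MEMORY_LIMIT=536870912
CPU_REQUEST=250m
MEMORY_REQUEST=33554432
echo "CPU_LIMIT=$CPU_LIMIT"
echo "MEMORY_LIMIT=$MEMORY_LIMIT"
echo "CPU_REQUEST=$CPU_REQUEST"
echo "MEMORY_REQUEST=$MEMORY_REQUEST"
```

The framework asserts on exactly this kind of container log output, which is why the test fetches logs from the `dapi-container` after the pod reaches `Succeeded`.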
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:56:02.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-7fbb9fdb-beb6-11ea-9e48-0242ac110017
STEP: Creating a pod to test consume secrets
Jul  5 11:56:02.476: INFO: Waiting up to 5m0s for pod "pod-secrets-7fbd9fd4-beb6-11ea-9e48-0242ac110017" in namespace "e2e-tests-secrets-w2w6b" to be "success or failure"
Jul  5 11:56:02.486: INFO: Pod "pod-secrets-7fbd9fd4-beb6-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 10.413452ms
Jul  5 11:56:04.490: INFO: Pod "pod-secrets-7fbd9fd4-beb6-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013813698s
Jul  5 11:56:06.492: INFO: Pod "pod-secrets-7fbd9fd4-beb6-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016611536s
STEP: Saw pod success
Jul  5 11:56:06.492: INFO: Pod "pod-secrets-7fbd9fd4-beb6-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 11:56:06.495: INFO: Trying to get logs from node hunter-worker pod pod-secrets-7fbd9fd4-beb6-11ea-9e48-0242ac110017 container secret-volume-test: 
STEP: delete the pod
Jul  5 11:56:06.586: INFO: Waiting for pod pod-secrets-7fbd9fd4-beb6-11ea-9e48-0242ac110017 to disappear
Jul  5 11:56:06.634: INFO: Pod pod-secrets-7fbd9fd4-beb6-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:56:06.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-w2w6b" for this suite.
Jul  5 11:56:12.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:56:12.831: INFO: namespace: e2e-tests-secrets-w2w6b, resource: bindings, ignored listing per whitelist
Jul  5 11:56:12.833: INFO: namespace e2e-tests-secrets-w2w6b deletion completed in 6.196474084s

• [SLOW TEST:10.469 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:56:12.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  5 11:56:17.337: INFO: Waiting up to 5m0s for pod "client-envvars-8892a7b8-beb6-11ea-9e48-0242ac110017" in namespace "e2e-tests-pods-5t46f" to be "success or failure"
Jul  5 11:56:17.445: INFO: Pod "client-envvars-8892a7b8-beb6-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 107.936187ms
Jul  5 11:56:19.542: INFO: Pod "client-envvars-8892a7b8-beb6-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204619963s
Jul  5 11:56:21.614: INFO: Pod "client-envvars-8892a7b8-beb6-11ea-9e48-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.277269939s
Jul  5 11:56:23.619: INFO: Pod "client-envvars-8892a7b8-beb6-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.281415635s
STEP: Saw pod success
Jul  5 11:56:23.619: INFO: Pod "client-envvars-8892a7b8-beb6-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 11:56:23.622: INFO: Trying to get logs from node hunter-worker pod client-envvars-8892a7b8-beb6-11ea-9e48-0242ac110017 container env3cont: 
STEP: delete the pod
Jul  5 11:56:23.656: INFO: Waiting for pod client-envvars-8892a7b8-beb6-11ea-9e48-0242ac110017 to disappear
Jul  5 11:56:23.669: INFO: Pod client-envvars-8892a7b8-beb6-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:56:23.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-5t46f" for this suite.
Jul  5 11:57:03.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:57:03.746: INFO: namespace: e2e-tests-pods-5t46f, resource: bindings, ignored listing per whitelist
Jul  5 11:57:03.772: INFO: namespace e2e-tests-pods-5t46f deletion completed in 40.098946674s

• [SLOW TEST:50.939 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:57:03.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul  5 11:57:03.875: INFO: Waiting up to 5m0s for pod "pod-a454fd2e-beb6-11ea-9e48-0242ac110017" in namespace "e2e-tests-emptydir-4sns7" to be "success or failure"
Jul  5 11:57:03.879: INFO: Pod "pod-a454fd2e-beb6-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07355ms
Jul  5 11:57:05.883: INFO: Pod "pod-a454fd2e-beb6-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007844174s
Jul  5 11:57:07.887: INFO: Pod "pod-a454fd2e-beb6-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011450889s
STEP: Saw pod success
Jul  5 11:57:07.887: INFO: Pod "pod-a454fd2e-beb6-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 11:57:07.943: INFO: Trying to get logs from node hunter-worker2 pod pod-a454fd2e-beb6-11ea-9e48-0242ac110017 container test-container: 
STEP: delete the pod
Jul  5 11:57:08.286: INFO: Waiting for pod pod-a454fd2e-beb6-11ea-9e48-0242ac110017 to disappear
Jul  5 11:57:08.652: INFO: Pod pod-a454fd2e-beb6-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:57:08.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-4sns7" for this suite.
Jul  5 11:57:14.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:57:14.910: INFO: namespace: e2e-tests-emptydir-4sns7, resource: bindings, ignored listing per whitelist
Jul  5 11:57:14.944: INFO: namespace e2e-tests-emptydir-4sns7 deletion completed in 6.286322059s

• [SLOW TEST:11.172 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:57:14.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:57:19.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-7qt99" for this suite.
Jul  5 11:58:05.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:58:05.118: INFO: namespace: e2e-tests-kubelet-test-7qt99, resource: bindings, ignored listing per whitelist
Jul  5 11:58:05.184: INFO: namespace e2e-tests-kubelet-test-7qt99 deletion completed in 46.088946153s

• [SLOW TEST:50.240 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:58:05.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul  5 11:58:05.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-4swj5'
Jul  5 11:58:05.390: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul  5 11:58:05.390: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Jul  5 11:58:09.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-4swj5'
Jul  5 11:58:09.536: INFO: stderr: ""
Jul  5 11:58:09.537: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:58:09.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4swj5" for this suite.
Jul  5 11:58:31.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:58:31.563: INFO: namespace: e2e-tests-kubectl-4swj5, resource: bindings, ignored listing per whitelist
Jul  5 11:58:31.632: INFO: namespace e2e-tests-kubectl-4swj5 deletion completed in 22.092351102s

• [SLOW TEST:26.448 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:58:31.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
Jul  5 11:58:43.226: INFO: 5 pods remaining
Jul  5 11:58:43.226: INFO: 5 pods has nil DeletionTimestamp
Jul  5 11:58:43.226: INFO: 
STEP: Gathering metrics
W0705 11:58:47.718430       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  5 11:58:47.718: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:58:47.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-98bg4" for this suite.
Jul  5 11:58:56.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:58:56.287: INFO: namespace: e2e-tests-gc-98bg4, resource: bindings, ignored listing per whitelist
Jul  5 11:58:56.290: INFO: namespace e2e-tests-gc-98bg4 deletion completed in 8.567620935s

• [SLOW TEST:24.657 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:58:56.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-e792dee0-beb6-11ea-9e48-0242ac110017
STEP: Creating a pod to test consume configMaps
Jul  5 11:58:56.863: INFO: Waiting up to 5m0s for pod "pod-configmaps-e79895dc-beb6-11ea-9e48-0242ac110017" in namespace "e2e-tests-configmap-ldgk8" to be "success or failure"
Jul  5 11:58:56.876: INFO: Pod "pod-configmaps-e79895dc-beb6-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 13.222957ms
Jul  5 11:58:58.881: INFO: Pod "pod-configmaps-e79895dc-beb6-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017825228s
Jul  5 11:59:00.884: INFO: Pod "pod-configmaps-e79895dc-beb6-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021379932s
STEP: Saw pod success
Jul  5 11:59:00.884: INFO: Pod "pod-configmaps-e79895dc-beb6-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 11:59:00.887: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-e79895dc-beb6-11ea-9e48-0242ac110017 container configmap-volume-test: 
STEP: delete the pod
Jul  5 11:59:01.001: INFO: Waiting for pod pod-configmaps-e79895dc-beb6-11ea-9e48-0242ac110017 to disappear
Jul  5 11:59:01.007: INFO: Pod pod-configmaps-e79895dc-beb6-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 11:59:01.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-ldgk8" for this suite.
Jul  5 11:59:07.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 11:59:07.100: INFO: namespace: e2e-tests-configmap-ldgk8, resource: bindings, ignored listing per whitelist
Jul  5 11:59:07.127: INFO: namespace e2e-tests-configmap-ldgk8 deletion completed in 6.116534917s

• [SLOW TEST:10.837 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 11:59:07.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jul  5 11:59:07.285: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-48nvx,SelfLink:/api/v1/namespaces/e2e-tests-watch-48nvx/configmaps/e2e-watch-test-configmap-a,UID:ede34f35-beb6-11ea-a300-0242ac110004,ResourceVersion:232514,Generation:0,CreationTimestamp:2020-07-05 11:59:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul  5 11:59:07.285: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-48nvx,SelfLink:/api/v1/namespaces/e2e-tests-watch-48nvx/configmaps/e2e-watch-test-configmap-a,UID:ede34f35-beb6-11ea-a300-0242ac110004,ResourceVersion:232514,Generation:0,CreationTimestamp:2020-07-05 11:59:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jul  5 11:59:17.294: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-48nvx,SelfLink:/api/v1/namespaces/e2e-tests-watch-48nvx/configmaps/e2e-watch-test-configmap-a,UID:ede34f35-beb6-11ea-a300-0242ac110004,ResourceVersion:232534,Generation:0,CreationTimestamp:2020-07-05 11:59:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jul  5 11:59:17.294: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-48nvx,SelfLink:/api/v1/namespaces/e2e-tests-watch-48nvx/configmaps/e2e-watch-test-configmap-a,UID:ede34f35-beb6-11ea-a300-0242ac110004,ResourceVersion:232534,Generation:0,CreationTimestamp:2020-07-05 11:59:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jul  5 11:59:27.302: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-48nvx,SelfLink:/api/v1/namespaces/e2e-tests-watch-48nvx/configmaps/e2e-watch-test-configmap-a,UID:ede34f35-beb6-11ea-a300-0242ac110004,ResourceVersion:232554,Generation:0,CreationTimestamp:2020-07-05 11:59:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul  5 11:59:27.302: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-48nvx,SelfLink:/api/v1/namespaces/e2e-tests-watch-48nvx/configmaps/e2e-watch-test-configmap-a,UID:ede34f35-beb6-11ea-a300-0242ac110004,ResourceVersion:232554,Generation:0,CreationTimestamp:2020-07-05 11:59:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jul  5 11:59:37.309: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-48nvx,SelfLink:/api/v1/namespaces/e2e-tests-watch-48nvx/configmaps/e2e-watch-test-configmap-a,UID:ede34f35-beb6-11ea-a300-0242ac110004,ResourceVersion:232574,Generation:0,CreationTimestamp:2020-07-05 11:59:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul  5 11:59:37.309: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-48nvx,SelfLink:/api/v1/namespaces/e2e-tests-watch-48nvx/configmaps/e2e-watch-test-configmap-a,UID:ede34f35-beb6-11ea-a300-0242ac110004,ResourceVersion:232574,Generation:0,CreationTimestamp:2020-07-05 11:59:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jul  5 11:59:47.318: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-48nvx,SelfLink:/api/v1/namespaces/e2e-tests-watch-48nvx/configmaps/e2e-watch-test-configmap-b,UID:05c20338-beb7-11ea-a300-0242ac110004,ResourceVersion:232594,Generation:0,CreationTimestamp:2020-07-05 11:59:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul  5 11:59:47.318: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-48nvx,SelfLink:/api/v1/namespaces/e2e-tests-watch-48nvx/configmaps/e2e-watch-test-configmap-b,UID:05c20338-beb7-11ea-a300-0242ac110004,ResourceVersion:232594,Generation:0,CreationTimestamp:2020-07-05 11:59:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jul  5 11:59:57.325: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-48nvx,SelfLink:/api/v1/namespaces/e2e-tests-watch-48nvx/configmaps/e2e-watch-test-configmap-b,UID:05c20338-beb7-11ea-a300-0242ac110004,ResourceVersion:232614,Generation:0,CreationTimestamp:2020-07-05 11:59:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul  5 11:59:57.325: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-48nvx,SelfLink:/api/v1/namespaces/e2e-tests-watch-48nvx/configmaps/e2e-watch-test-configmap-b,UID:05c20338-beb7-11ea-a300-0242ac110004,ResourceVersion:232614,Generation:0,CreationTimestamp:2020-07-05 11:59:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:00:07.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-48nvx" for this suite.
Jul  5 12:00:13.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:00:13.397: INFO: namespace: e2e-tests-watch-48nvx, resource: bindings, ignored listing per whitelist
Jul  5 12:00:13.430: INFO: namespace e2e-tests-watch-48nvx deletion completed in 6.100753173s

• [SLOW TEST:66.303 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:00:13.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-15dfed73-beb7-11ea-9e48-0242ac110017
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-15dfed73-beb7-11ea-9e48-0242ac110017
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:01:34.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pwg7n" for this suite.
Jul  5 12:01:57.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:01:57.034: INFO: namespace: e2e-tests-projected-pwg7n, resource: bindings, ignored listing per whitelist
Jul  5 12:01:57.074: INFO: namespace e2e-tests-projected-pwg7n deletion completed in 22.086335872s

• [SLOW TEST:103.643 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
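The "waiting to observe update in volume" step above exists because the kubelet re-syncs projected volumes periodically, so a ConfigMap update is reflected in the mounted file eventually rather than instantly. A minimal sketch of that polling pattern (`read_file` is a hypothetical stand-in for reading the mounted file):

```python
import time

def wait_for_update(read_file, expected, timeout=90.0, interval=2.0):
    """Poll a file from a projected ConfigMap volume until the updated
    value appears. read_file is any zero-arg callable returning the
    current contents; returns True once the expected value is observed,
    False if the timeout elapses first."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if read_file() == expected:
            return True
        time.sleep(interval)
    return False
```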
SSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:01:57.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jul  5 12:01:57.868: INFO: Pod name wrapped-volume-race-538af5a5-beb7-11ea-9e48-0242ac110017: Found 0 pods out of 5
Jul  5 12:02:02.875: INFO: Pod name wrapped-volume-race-538af5a5-beb7-11ea-9e48-0242ac110017: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-538af5a5-beb7-11ea-9e48-0242ac110017 in namespace e2e-tests-emptydir-wrapper-xtctk, will wait for the garbage collector to delete the pods
Jul  5 12:04:14.960: INFO: Deleting ReplicationController wrapped-volume-race-538af5a5-beb7-11ea-9e48-0242ac110017 took: 6.638906ms
Jul  5 12:04:15.161: INFO: Terminating ReplicationController wrapped-volume-race-538af5a5-beb7-11ea-9e48-0242ac110017 pods took: 200.601574ms
STEP: Creating RC which spawns configmap-volume pods
Jul  5 12:04:53.999: INFO: Pod name wrapped-volume-race-bc89243f-beb7-11ea-9e48-0242ac110017: Found 0 pods out of 5
Jul  5 12:04:59.008: INFO: Pod name wrapped-volume-race-bc89243f-beb7-11ea-9e48-0242ac110017: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-bc89243f-beb7-11ea-9e48-0242ac110017 in namespace e2e-tests-emptydir-wrapper-xtctk, will wait for the garbage collector to delete the pods
Jul  5 12:06:51.098: INFO: Deleting ReplicationController wrapped-volume-race-bc89243f-beb7-11ea-9e48-0242ac110017 took: 7.462304ms
Jul  5 12:06:51.198: INFO: Terminating ReplicationController wrapped-volume-race-bc89243f-beb7-11ea-9e48-0242ac110017 pods took: 100.227456ms
STEP: Creating RC which spawns configmap-volume pods
Jul  5 12:07:34.839: INFO: Pod name wrapped-volume-race-1c66edff-beb8-11ea-9e48-0242ac110017: Found 0 pods out of 5
Jul  5 12:07:39.847: INFO: Pod name wrapped-volume-race-1c66edff-beb8-11ea-9e48-0242ac110017: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-1c66edff-beb8-11ea-9e48-0242ac110017 in namespace e2e-tests-emptydir-wrapper-xtctk, will wait for the garbage collector to delete the pods
Jul  5 12:10:13.970: INFO: Deleting ReplicationController wrapped-volume-race-1c66edff-beb8-11ea-9e48-0242ac110017 took: 7.26343ms
Jul  5 12:10:14.070: INFO: Terminating ReplicationController wrapped-volume-race-1c66edff-beb8-11ea-9e48-0242ac110017 pods took: 100.287479ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:10:55.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-xtctk" for this suite.
Jul  5 12:11:07.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:11:07.123: INFO: namespace: e2e-tests-emptydir-wrapper-xtctk, resource: bindings, ignored listing per whitelist
Jul  5 12:11:07.125: INFO: namespace e2e-tests-emptydir-wrapper-xtctk deletion completed in 12.108049463s

• [SLOW TEST:550.051 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
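The wrapper-volumes race test above repeatedly spawns pods that each mount many ConfigMap volumes (50 ConfigMaps across 5 pods, three times over). A sketch of the pod shape being stressed, built as a plain dict following the core/v1 Pod schema (container name, image, and mount paths are illustrative):

```python
def configmap_volume_pod(name, configmap_names, image="busybox"):
    """Build a minimal core/v1 Pod manifest (as a dict) that mounts one
    volume per ConfigMap -- the many-ConfigMap-volumes shape this test
    stresses for races in the emptyDir wrapper."""
    volumes = [{"name": "cm-%d" % i, "configMap": {"name": cm}}
               for i, cm in enumerate(configmap_names)]
    mounts = [{"name": v["name"], "mountPath": "/etc/configmaps/%d" % i}
              for i, v in enumerate(volumes)]
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{"name": "test-container", "image": image,
                            "volumeMounts": mounts}],
            "volumes": volumes,
        },
    }
```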
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:11:07.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jul  5 12:11:13.805: INFO: Successfully updated pod "labelsupdate9b014e45-beb8-11ea-9e48-0242ac110017"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:11:16.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jmpb4" for this suite.
Jul  5 12:11:40.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:11:40.246: INFO: namespace: e2e-tests-projected-jmpb4, resource: bindings, ignored listing per whitelist
Jul  5 12:11:40.275: INFO: namespace e2e-tests-projected-jmpb4 deletion completed in 24.119552867s

• [SLOW TEST:33.149 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
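The labels-update test above asserts that after the pod's labels are modified, the downward API file in the projected volume is rewritten by the kubelet. For reference, a sketch of the file format the downward API uses for labels, assuming the usual one `key="value"` pair per line with keys sorted:

```python
def format_labels(labels):
    """Render pod labels the way the downward API volume writes its
    labels file: one key="value" pair per line, sorted by key."""
    return "\n".join('%s="%s"' % (k, v) for k, v in sorted(labels.items()))
```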
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:11:40.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jul  5 12:11:40.397: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix220593150/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:11:40.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9kr4f" for this suite.
Jul  5 12:11:46.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:11:46.572: INFO: namespace: e2e-tests-kubectl-9kr4f, resource: bindings, ignored listing per whitelist
Jul  5 12:11:46.572: INFO: namespace e2e-tests-kubectl-9kr4f deletion completed in 6.096125683s

• [SLOW TEST:6.297 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
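The "retrieving proxy /api/ output" step above issues an HTTP GET over the unix-domain socket that `kubectl proxy --unix-socket=<path>` listens on. A minimal sketch of such a request using only the standard library (HTTP/1.0, no TLS, raw response returned; AF_UNIX requires a Unix-like OS):

```python
import socket

def get_over_unix_socket(socket_path, url="/api/"):
    """Issue a plain HTTP GET over a unix-domain socket and return the
    raw response (status line, headers, and body) as a string. HTTP/1.0
    is used so the server closes the connection when done."""
    request = "GET %s HTTP/1.0\r\nHost: localhost\r\n\r\n" % url
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(socket_path)
        s.sendall(request.encode("ascii"))
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")
```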
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:11:46.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jul  5 12:11:52.945: INFO: Successfully updated pod "annotationupdateb2b0ea8a-beb8-11ea-9e48-0242ac110017"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:11:55.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-wqdqp" for this suite.
Jul  5 12:12:17.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:12:17.184: INFO: namespace: e2e-tests-downward-api-wqdqp, resource: bindings, ignored listing per whitelist
Jul  5 12:12:17.255: INFO: namespace e2e-tests-downward-api-wqdqp deletion completed in 22.14259924s

• [SLOW TEST:30.683 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
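The annotations-update test above uses a pod with a downward API volume exposing `metadata.annotations`; the kubelet refreshes the mounted file when annotations change. A sketch of that pod shape as a plain dict following the core/v1 Pod schema (container name, image, annotation values, and paths are illustrative):

```python
def downward_api_annotations_pod(name, image="busybox"):
    """Minimal core/v1 Pod manifest (as a dict) with a downward API
    volume that projects metadata.annotations to a file under
    /etc/podinfo, the kind of pod this test creates and then updates."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name, "annotations": {"build": "one"}},
        "spec": {
            "containers": [{
                "name": "client-container",
                "image": image,
                "volumeMounts": [{"name": "podinfo",
                                  "mountPath": "/etc/podinfo"}],
            }],
            "volumes": [{
                "name": "podinfo",
                "downwardAPI": {"items": [{
                    "path": "annotations",
                    "fieldRef": {"fieldPath": "metadata.annotations"},
                }]},
            }],
        },
    }
```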
SS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:12:17.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  5 12:12:17.421: INFO: Creating deployment "nginx-deployment"
Jul  5 12:12:17.425: INFO: Waiting for observed generation 1
Jul  5 12:12:21.655: INFO: Waiting for all required pods to come up
Jul  5 12:12:21.839: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jul  5 12:12:32.025: INFO: Waiting for deployment "nginx-deployment" to complete
Jul  5 12:12:32.030: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jul  5 12:12:32.035: INFO: Updating deployment nginx-deployment
Jul  5 12:12:32.035: INFO: Waiting for observed generation 2
Jul  5 12:12:34.074: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jul  5 12:12:34.077: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jul  5 12:12:34.079: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jul  5 12:12:34.085: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jul  5 12:12:34.085: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jul  5 12:12:34.087: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jul  5 12:12:34.092: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jul  5 12:12:34.092: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jul  5 12:12:34.096: INFO: Updating deployment nginx-deployment
Jul  5 12:12:34.096: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jul  5 12:12:34.581: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jul  5 12:12:34.832: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
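The 20/13 split verified above is the proportional scaling under test: at the scale-up, the first ReplicaSet has 8 replicas and the second 5 (13 total, within 10 + maxSurge 3), and scaling to 30 allows 33, which is divided in proportion to the current sizes. A simplified sketch of that arithmetic, assuming round-half-away-from-zero; the real controller's GetProportion logic additionally caps shares and assigns leftovers:

```python
import math

def proportional_scale(rs_sizes, new_total, max_surge):
    """When a deployment is scaled mid-rollout, each coexisting
    ReplicaSet gets a share of (new_total + max_surge) proportional to
    its current size. Simplified sketch: rounds each share half away
    from zero and ignores leftover distribution."""
    allowed = new_total + max_surge
    current = sum(rs_sizes)
    return [int(math.floor(size * allowed / current + 0.5))
            for size in rs_sizes]
```

With the log's numbers: 8 * 33 / 13 rounds to 20 and 5 * 33 / 13 rounds to 13, matching the `.spec.replicas` values checked above.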
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jul  5 12:12:38.590: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-7p72t,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7p72t/deployments/nginx-deployment,UID:c4dbf0f9-beb8-11ea-a300-0242ac110004,ResourceVersion:234808,Generation:3,CreationTimestamp:2020-07-05 12:12:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-07-05 12:12:34 +0000 UTC 2020-07-05 12:12:34 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-07-05 12:12:35 +0000 UTC 2020-07-05 12:12:17 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Jul  5 12:12:39.258: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-7p72t,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7p72t/replicasets/nginx-deployment-5c98f8fb5,UID:cd91f051-beb8-11ea-a300-0242ac110004,ResourceVersion:234793,Generation:3,CreationTimestamp:2020-07-05 12:12:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment c4dbf0f9-beb8-11ea-a300-0242ac110004 0xc0019783f7 0xc0019783f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul  5 12:12:39.258: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jul  5 12:12:39.258: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-7p72t,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7p72t/replicasets/nginx-deployment-85ddf47c5d,UID:c4e1bb32-beb8-11ea-a300-0242ac110004,ResourceVersion:234804,Generation:3,CreationTimestamp:2020-07-05 12:12:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment c4dbf0f9-beb8-11ea-a300-0242ac110004 0xc0019784b7 0xc0019784b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jul  5 12:12:39.777: INFO: Pod "nginx-deployment-5c98f8fb5-27dxq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-27dxq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-5c98f8fb5-27dxq,UID:cd93c030-beb8-11ea-a300-0242ac110004,ResourceVersion:234703,Generation:0,CreationTimestamp:2020-07-05 12:12:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 cd91f051-beb8-11ea-a300-0242ac110004 0xc0025bf177 0xc0025bf178}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025bf1f0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0025bf280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:32 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.2,PodIP:,StartTime:2020-07-05 12:12:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.777: INFO: Pod "nginx-deployment-5c98f8fb5-77vj5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-77vj5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-5c98f8fb5-77vj5,UID:cef94160-beb8-11ea-a300-0242ac110004,ResourceVersion:234802,Generation:0,CreationTimestamp:2020-07-05 12:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 cd91f051-beb8-11ea-a300-0242ac110004 0xc0025bf340 0xc0025bf341}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025bf3c0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0025bf3e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:34 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.2,PodIP:,StartTime:2020-07-05 12:12:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.777: INFO: Pod "nginx-deployment-5c98f8fb5-9nn89" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9nn89,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-5c98f8fb5-9nn89,UID:cf169690-beb8-11ea-a300-0242ac110004,ResourceVersion:234835,Generation:0,CreationTimestamp:2020-07-05 12:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 cd91f051-beb8-11ea-a300-0242ac110004 0xc0025bf4b0 0xc0025bf4b1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025bf530} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0025bf550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:34 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.2,PodIP:,StartTime:2020-07-05 12:12:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.778: INFO: Pod "nginx-deployment-5c98f8fb5-cp2mx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-cp2mx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-5c98f8fb5-cp2mx,UID:cf3d498f-beb8-11ea-a300-0242ac110004,ResourceVersion:234790,Generation:0,CreationTimestamp:2020-07-05 12:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 cd91f051-beb8-11ea-a300-0242ac110004 0xc0025bf610 0xc0025bf611}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025bf690} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0025bf6b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:35 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.778: INFO: Pod "nginx-deployment-5c98f8fb5-dv8nj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dv8nj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-5c98f8fb5-dv8nj,UID:cdad335f-beb8-11ea-a300-0242ac110004,ResourceVersion:234730,Generation:0,CreationTimestamp:2020-07-05 12:12:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 cd91f051-beb8-11ea-a300-0242ac110004 0xc0025bf720 0xc0025bf721}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025bf7a0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0025bf7c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:32 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.2,PodIP:,StartTime:2020-07-05 12:12:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.778: INFO: Pod "nginx-deployment-5c98f8fb5-k96q2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-k96q2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-5c98f8fb5-k96q2,UID:cf1743cb-beb8-11ea-a300-0242ac110004,ResourceVersion:234839,Generation:0,CreationTimestamp:2020-07-05 12:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 cd91f051-beb8-11ea-a300-0242ac110004 0xc0025bf880 0xc0025bf881}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025bf900} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0025bf920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:34 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-07-05 12:12:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.778: INFO: Pod "nginx-deployment-5c98f8fb5-kbjvn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-kbjvn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-5c98f8fb5-kbjvn,UID:cf389fbc-beb8-11ea-a300-0242ac110004,ResourceVersion:234782,Generation:0,CreationTimestamp:2020-07-05 12:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 cd91f051-beb8-11ea-a300-0242ac110004 0xc0025bf9e0 0xc0025bf9e1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025bfa70} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0025bfa90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:34 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.778: INFO: Pod "nginx-deployment-5c98f8fb5-q85zv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-q85zv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-5c98f8fb5-q85zv,UID:cf38b245-beb8-11ea-a300-0242ac110004,ResourceVersion:234780,Generation:0,CreationTimestamp:2020-07-05 12:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 cd91f051-beb8-11ea-a300-0242ac110004 0xc0025bfb00 0xc0025bfb01}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025bfb80} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0025bfba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:34 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.778: INFO: Pod "nginx-deployment-5c98f8fb5-rg7jl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rg7jl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-5c98f8fb5-rg7jl,UID:cd92b6f0-beb8-11ea-a300-0242ac110004,ResourceVersion:234699,Generation:0,CreationTimestamp:2020-07-05 12:12:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 cd91f051-beb8-11ea-a300-0242ac110004 0xc0025bfc10 0xc0025bfc11}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025bfcf0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0025bfd10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:32 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-07-05 12:12:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.778: INFO: Pod "nginx-deployment-5c98f8fb5-rvn5c" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rvn5c,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-5c98f8fb5-rvn5c,UID:cdba48fd-beb8-11ea-a300-0242ac110004,ResourceVersion:234728,Generation:0,CreationTimestamp:2020-07-05 12:12:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 cd91f051-beb8-11ea-a300-0242ac110004 0xc0025bfdf0 0xc0025bfdf1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025bfe70} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0025bfe90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:32 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-07-05 12:12:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.779: INFO: Pod "nginx-deployment-5c98f8fb5-tjppf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-tjppf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-5c98f8fb5-tjppf,UID:cf38b328-beb8-11ea-a300-0242ac110004,ResourceVersion:234779,Generation:0,CreationTimestamp:2020-07-05 12:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 cd91f051-beb8-11ea-a300-0242ac110004 0xc0025bfff0 0xc0025bfff1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020aa070} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0020aa090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:34 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.779: INFO: Pod "nginx-deployment-5c98f8fb5-z6888" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-z6888,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-5c98f8fb5-z6888,UID:cd93c415-beb8-11ea-a300-0242ac110004,ResourceVersion:234707,Generation:0,CreationTimestamp:2020-07-05 12:12:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 cd91f051-beb8-11ea-a300-0242ac110004 0xc0020aa100 0xc0020aa101}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020aa1f0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0020aa210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:32 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-07-05 12:12:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.779: INFO: Pod "nginx-deployment-5c98f8fb5-z9mm2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-z9mm2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-5c98f8fb5-z9mm2,UID:cf38b8ca-beb8-11ea-a300-0242ac110004,ResourceVersion:234777,Generation:0,CreationTimestamp:2020-07-05 12:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 cd91f051-beb8-11ea-a300-0242ac110004 0xc0020aa540 0xc0020aa541}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020aa5c0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0020aa5e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:34 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.779: INFO: Pod "nginx-deployment-85ddf47c5d-2qcnz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2qcnz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-85ddf47c5d-2qcnz,UID:cf387314-beb8-11ea-a300-0242ac110004,ResourceVersion:234785,Generation:0,CreationTimestamp:2020-07-05 12:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c4e1bb32-beb8-11ea-a300-0242ac110004 0xc0020aa650 0xc0020aa651}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0020aa6c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020aaf60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:34 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.779: INFO: Pod "nginx-deployment-85ddf47c5d-7qrnv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7qrnv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-85ddf47c5d-7qrnv,UID:c4fc2b11-beb8-11ea-a300-0242ac110004,ResourceVersion:234654,Generation:0,CreationTimestamp:2020-07-05 12:12:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c4e1bb32-beb8-11ea-a300-0242ac110004 0xc0020aafd0 0xc0020aafd1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0020ab050} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020ab270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:17 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:17 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.2.224,StartTime:2020-07-05 12:12:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-05 12:12:29 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9bbd39afef16a6450a84e125b8beda429636ea415c06191f1ada64ff3e519f17}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.779: INFO: Pod "nginx-deployment-85ddf47c5d-8kbbb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8kbbb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-85ddf47c5d-8kbbb,UID:cf3871c5-beb8-11ea-a300-0242ac110004,ResourceVersion:234783,Generation:0,CreationTimestamp:2020-07-05 12:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c4e1bb32-beb8-11ea-a300-0242ac110004 0xc0020ab550 0xc0020ab551}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0020ab8d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020ab8f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:34 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.780: INFO: Pod "nginx-deployment-85ddf47c5d-9tk7d" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9tk7d,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-85ddf47c5d-9tk7d,UID:cf168a73-beb8-11ea-a300-0242ac110004,ResourceVersion:234817,Generation:0,CreationTimestamp:2020-07-05 12:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c4e1bb32-beb8-11ea-a300-0242ac110004 0xc0020ab960 0xc0020ab961}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0020aba10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020aba30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:34 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-07-05 12:12:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.780: INFO: Pod "nginx-deployment-85ddf47c5d-bfhzv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bfhzv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-85ddf47c5d-bfhzv,UID:cf16875e-beb8-11ea-a300-0242ac110004,ResourceVersion:234813,Generation:0,CreationTimestamp:2020-07-05 12:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c4e1bb32-beb8-11ea-a300-0242ac110004 0xc0020abb20 0xc0020abb21}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0020abbc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020abbe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:34 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.2,PodIP:,StartTime:2020-07-05 12:12:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.780: INFO: Pod "nginx-deployment-85ddf47c5d-fp8l9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fp8l9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-85ddf47c5d-fp8l9,UID:cf385f45-beb8-11ea-a300-0242ac110004,ResourceVersion:234776,Generation:0,CreationTimestamp:2020-07-05 12:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c4e1bb32-beb8-11ea-a300-0242ac110004 0xc0020abf00 0xc0020abf01}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0020abfa0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020abfc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:34 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.780: INFO: Pod "nginx-deployment-85ddf47c5d-g5jrw" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-g5jrw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-85ddf47c5d-g5jrw,UID:c4fc31d0-beb8-11ea-a300-0242ac110004,ResourceVersion:234656,Generation:0,CreationTimestamp:2020-07-05 12:12:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c4e1bb32-beb8-11ea-a300-0242ac110004 0xc0027b0030 0xc0027b0031}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0027b00a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027b00c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:17 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:17 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.2,PodIP:10.244.1.155,StartTime:2020-07-05 12:12:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-05 12:12:29 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://1ffd17a53938fb482cce0325a8c9724a67ca58bc30384d0f65e302c405c20143}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.780: INFO: Pod "nginx-deployment-85ddf47c5d-jv8q7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jv8q7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-85ddf47c5d-jv8q7,UID:cf1690ea-beb8-11ea-a300-0242ac110004,ResourceVersion:234825,Generation:0,CreationTimestamp:2020-07-05 12:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c4e1bb32-beb8-11ea-a300-0242ac110004 0xc0027b02d0 0xc0027b02d1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0027b0340} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027b0360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:34 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.2,PodIP:,StartTime:2020-07-05 12:12:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.780: INFO: Pod "nginx-deployment-85ddf47c5d-kvddg" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kvddg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-85ddf47c5d-kvddg,UID:c4f2ef34-beb8-11ea-a300-0242ac110004,ResourceVersion:234660,Generation:0,CreationTimestamp:2020-07-05 12:12:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c4e1bb32-beb8-11ea-a300-0242ac110004 0xc0027b0460 0xc0027b0461}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0027b04e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027b0500}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:17 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:17 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.2.223,StartTime:2020-07-05 12:12:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-05 12:12:29 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a632570fcd0ea950ddb1cf006851d352b555e7f792f3c4c1b07af5555c093f53}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.780: INFO: Pod "nginx-deployment-85ddf47c5d-mxl4t" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mxl4t,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-85ddf47c5d-mxl4t,UID:cf386a5a-beb8-11ea-a300-0242ac110004,ResourceVersion:234784,Generation:0,CreationTimestamp:2020-07-05 12:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c4e1bb32-beb8-11ea-a300-0242ac110004 0xc0027b05c0 0xc0027b05c1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0027b0670} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027b0690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:34 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.780: INFO: Pod "nginx-deployment-85ddf47c5d-prh8h" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-prh8h,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-85ddf47c5d-prh8h,UID:cf384b8c-beb8-11ea-a300-0242ac110004,ResourceVersion:234778,Generation:0,CreationTimestamp:2020-07-05 12:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c4e1bb32-beb8-11ea-a300-0242ac110004 0xc0027b0700 0xc0027b0701}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0027b0770} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027b0790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:34 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.781: INFO: Pod "nginx-deployment-85ddf47c5d-qdx44" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qdx44,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-85ddf47c5d-qdx44,UID:c4f1af78-beb8-11ea-a300-0242ac110004,ResourceVersion:234621,Generation:0,CreationTimestamp:2020-07-05 12:12:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c4e1bb32-beb8-11ea-a300-0242ac110004 0xc0027b0800 0xc0027b0801}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0027b08e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027b0900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:17 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:26 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:26 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:17 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.2.220,StartTime:2020-07-05 12:12:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-05 12:12:25 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://67947000e6626dabbcee983b3e5295bfac8cd07d68d6bb3ed1cfec13f4888d52}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.781: INFO: Pod "nginx-deployment-85ddf47c5d-qhqvg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qhqvg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-85ddf47c5d-qhqvg,UID:cf169467-beb8-11ea-a300-0242ac110004,ResourceVersion:234827,Generation:0,CreationTimestamp:2020-07-05 12:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c4e1bb32-beb8-11ea-a300-0242ac110004 0xc0027b09c0 0xc0027b09c1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0027b0ae0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027b0b00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:34 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-07-05 12:12:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.781: INFO: Pod "nginx-deployment-85ddf47c5d-qrcsx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qrcsx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-85ddf47c5d-qrcsx,UID:cef94746-beb8-11ea-a300-0242ac110004,ResourceVersion:234809,Generation:0,CreationTimestamp:2020-07-05 12:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c4e1bb32-beb8-11ea-a300-0242ac110004 0xc0027b0bb0 0xc0027b0bb1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0027b0c90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027b0cb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:34 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-07-05 12:12:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.781: INFO: Pod "nginx-deployment-85ddf47c5d-qtg5n" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qtg5n,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-85ddf47c5d-qtg5n,UID:c4f5df19-beb8-11ea-a300-0242ac110004,ResourceVersion:234648,Generation:0,CreationTimestamp:2020-07-05 12:12:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c4e1bb32-beb8-11ea-a300-0242ac110004 0xc0027b0d60 0xc0027b0d61}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0027b0dd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027b0e70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:17 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:17 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.2,PodIP:10.244.1.152,StartTime:2020-07-05 12:12:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-05 12:12:29 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://7e6b6277e2065bd5e5650c373e0dc702172515671a419982bdf5849e9a3c5bd5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.781: INFO: Pod "nginx-deployment-85ddf47c5d-r2qbz" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-r2qbz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-85ddf47c5d-r2qbz,UID:c4fc1e27-beb8-11ea-a300-0242ac110004,ResourceVersion:234646,Generation:0,CreationTimestamp:2020-07-05 12:12:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c4e1bb32-beb8-11ea-a300-0242ac110004 0xc0027b0f30 0xc0027b0f31}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0027b0fa0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027b0fc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:17 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:17 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.2,PodIP:10.244.1.153,StartTime:2020-07-05 12:12:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-05 12:12:29 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://16731b9d741cdcae80e21a510e1bd3ad2bdc79c3d326b93e5d1b33894a9dd2bd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.781: INFO: Pod "nginx-deployment-85ddf47c5d-tk8lx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tk8lx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-85ddf47c5d-tk8lx,UID:cef4745c-beb8-11ea-a300-0242ac110004,ResourceVersion:234789,Generation:0,CreationTimestamp:2020-07-05 12:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c4e1bb32-beb8-11ea-a300-0242ac110004 0xc0027b11c0 0xc0027b11c1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0027b1230} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027b1250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:34 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.2,PodIP:,StartTime:2020-07-05 12:12:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.782: INFO: Pod "nginx-deployment-85ddf47c5d-wnn78" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wnn78,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-85ddf47c5d-wnn78,UID:c4f5f26a-beb8-11ea-a300-0242ac110004,ResourceVersion:234667,Generation:0,CreationTimestamp:2020-07-05 12:12:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c4e1bb32-beb8-11ea-a300-0242ac110004 0xc0027b1300 0xc0027b1301}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0027b1370} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027b1390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:17 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:17 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.2,PodIP:10.244.1.154,StartTime:2020-07-05 12:12:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-05 12:12:29 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9b0604ee9f2965125ba8b9c31bed127b331da34cd983a81ee7cafebd8d6c3796}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.782: INFO: Pod "nginx-deployment-85ddf47c5d-wxntw" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wxntw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-85ddf47c5d-wxntw,UID:c4f5f8e2-beb8-11ea-a300-0242ac110004,ResourceVersion:234661,Generation:0,CreationTimestamp:2020-07-05 12:12:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c4e1bb32-beb8-11ea-a300-0242ac110004 0xc0027b1450 0xc0027b1451}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0027b14c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027b1620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:17 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:17 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.2.222,StartTime:2020-07-05 12:12:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-05 12:12:29 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://4fbd0ee5bce007fe8b21e3be3feed9c01f0a147d502dfa512b12524a85e108cb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 12:12:39.782: INFO: Pod "nginx-deployment-85ddf47c5d-z4sdl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-z4sdl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7p72t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7p72t/pods/nginx-deployment-85ddf47c5d-z4sdl,UID:cef90f5a-beb8-11ea-a300-0242ac110004,ResourceVersion:234798,Generation:0,CreationTimestamp:2020-07-05 12:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c4e1bb32-beb8-11ea-a300-0242ac110004 0xc0027b16e0 0xc0027b16e1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dd5sq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dd5sq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dd5sq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0027b1750} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027b1770}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:12:34 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-07-05 12:12:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:12:39.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-7p72t" for this suite.
Jul  5 12:13:24.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:13:24.142: INFO: namespace: e2e-tests-deployment-7p72t, resource: bindings, ignored listing per whitelist
Jul  5 12:13:24.429: INFO: namespace e2e-tests-deployment-7p72t deletion completed in 44.562219765s

• [SLOW TEST:67.174 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
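(Editor's note, not test output.) The "proportional scaling" test above scales a Deployment while a rollout is in flight; new replicas are then split across the old and new ReplicaSets roughly in proportion to their current sizes. The real controller logic lives in the deployment controller and also honors maxSurge/maxUnavailable, so the following is only a minimal sketch of the proportional idea, with made-up replica counts:

```shell
# Hypothetical mid-rollout state: old RS has 8 replicas, new RS has 5,
# and the Deployment is scaled to 30. A floor-then-remainder split
# approximates the proportional distribution (NOT the exact controller
# algorithm, which also accounts for surge capacity).
old=8; new=5; target=30
total=$((old + new))
old_share=$(( target * old / total ))   # floor(30 * 8 / 13) = 18
new_share=$(( target - old_share ))     # remainder goes to the newer RS
echo "old=$old_share new=$new_share"
```

With these numbers the older, larger ReplicaSet receives the larger share (18 of 30), which is the behavior the conformance test asserts on.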
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:13:24.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  5 12:13:26.082: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ed681d41-beb8-11ea-9e48-0242ac110017" in namespace "e2e-tests-projected-lgbz5" to be "success or failure"
Jul  5 12:13:26.700: INFO: Pod "downwardapi-volume-ed681d41-beb8-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 617.64141ms
Jul  5 12:13:29.252: INFO: Pod "downwardapi-volume-ed681d41-beb8-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.169744845s
Jul  5 12:13:31.255: INFO: Pod "downwardapi-volume-ed681d41-beb8-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 5.17336991s
Jul  5 12:13:33.260: INFO: Pod "downwardapi-volume-ed681d41-beb8-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.177519396s
STEP: Saw pod success
Jul  5 12:13:33.260: INFO: Pod "downwardapi-volume-ed681d41-beb8-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 12:13:33.262: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-ed681d41-beb8-11ea-9e48-0242ac110017 container client-container: 
STEP: delete the pod
Jul  5 12:13:33.354: INFO: Waiting for pod downwardapi-volume-ed681d41-beb8-11ea-9e48-0242ac110017 to disappear
Jul  5 12:13:33.364: INFO: Pod downwardapi-volume-ed681d41-beb8-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:13:33.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lgbz5" for this suite.
Jul  5 12:13:41.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:13:41.436: INFO: namespace: e2e-tests-projected-lgbz5, resource: bindings, ignored listing per whitelist
Jul  5 12:13:41.487: INFO: namespace e2e-tests-projected-lgbz5 deletion completed in 8.092751727s

• [SLOW TEST:17.057 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
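(Editor's note, not test output.) The `DefaultMode:*420` values printed in the Pod dumps above, and the DefaultMode this projected downwardAPI test sets on files, are serialized in decimal: 420 decimal is 0644 octal, i.e. the usual rw-r--r-- file mode. A quick sanity check of that conversion:

```shell
# DefaultMode appears in decimal in these API dumps; 420 == 0644 octal.
printf '%o\n' 420    # octal rendering of decimal 420
# And the reverse: octal 0644 interpreted back to decimal.
echo $((0644))
```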
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:13:41.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-n2hxq
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul  5 12:13:41.926: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul  5 12:14:14.807: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.238:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-n2hxq PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  5 12:14:14.808: INFO: >>> kubeConfig: /root/.kube/config
I0705 12:14:14.843501       6 log.go:172] (0xc0009e7ce0) (0xc0019c2780) Create stream
I0705 12:14:14.843533       6 log.go:172] (0xc0009e7ce0) (0xc0019c2780) Stream added, broadcasting: 1
I0705 12:14:14.848403       6 log.go:172] (0xc0009e7ce0) Reply frame received for 1
I0705 12:14:14.848458       6 log.go:172] (0xc0009e7ce0) (0xc0029e4c80) Create stream
I0705 12:14:14.848472       6 log.go:172] (0xc0009e7ce0) (0xc0029e4c80) Stream added, broadcasting: 3
I0705 12:14:14.849686       6 log.go:172] (0xc0009e7ce0) Reply frame received for 3
I0705 12:14:14.849732       6 log.go:172] (0xc0009e7ce0) (0xc0019c2820) Create stream
I0705 12:14:14.849752       6 log.go:172] (0xc0009e7ce0) (0xc0019c2820) Stream added, broadcasting: 5
I0705 12:14:14.850603       6 log.go:172] (0xc0009e7ce0) Reply frame received for 5
I0705 12:14:14.908389       6 log.go:172] (0xc0009e7ce0) Data frame received for 3
I0705 12:14:14.908443       6 log.go:172] (0xc0029e4c80) (3) Data frame handling
I0705 12:14:14.908495       6 log.go:172] (0xc0029e4c80) (3) Data frame sent
I0705 12:14:14.908564       6 log.go:172] (0xc0009e7ce0) Data frame received for 5
I0705 12:14:14.908597       6 log.go:172] (0xc0019c2820) (5) Data frame handling
I0705 12:14:14.908740       6 log.go:172] (0xc0009e7ce0) Data frame received for 3
I0705 12:14:14.908762       6 log.go:172] (0xc0029e4c80) (3) Data frame handling
I0705 12:14:14.910627       6 log.go:172] (0xc0009e7ce0) Data frame received for 1
I0705 12:14:14.910665       6 log.go:172] (0xc0019c2780) (1) Data frame handling
I0705 12:14:14.910701       6 log.go:172] (0xc0019c2780) (1) Data frame sent
I0705 12:14:14.910719       6 log.go:172] (0xc0009e7ce0) (0xc0019c2780) Stream removed, broadcasting: 1
I0705 12:14:14.910737       6 log.go:172] (0xc0009e7ce0) Go away received
I0705 12:14:14.910893       6 log.go:172] (0xc0009e7ce0) (0xc0019c2780) Stream removed, broadcasting: 1
I0705 12:14:14.910929       6 log.go:172] (0xc0009e7ce0) (0xc0029e4c80) Stream removed, broadcasting: 3
I0705 12:14:14.911002       6 log.go:172] (0xc0009e7ce0) (0xc0019c2820) Stream removed, broadcasting: 5
Jul  5 12:14:14.911: INFO: Found all expected endpoints: [netserver-0]
Jul  5 12:14:14.914: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.169:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-n2hxq PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  5 12:14:14.914: INFO: >>> kubeConfig: /root/.kube/config
I0705 12:14:14.945867       6 log.go:172] (0xc0009b7550) (0xc000399360) Create stream
I0705 12:14:14.945895       6 log.go:172] (0xc0009b7550) (0xc000399360) Stream added, broadcasting: 1
I0705 12:14:14.947658       6 log.go:172] (0xc0009b7550) Reply frame received for 1
I0705 12:14:14.947690       6 log.go:172] (0xc0009b7550) (0xc001fec1e0) Create stream
I0705 12:14:14.947699       6 log.go:172] (0xc0009b7550) (0xc001fec1e0) Stream added, broadcasting: 3
I0705 12:14:14.948638       6 log.go:172] (0xc0009b7550) Reply frame received for 3
I0705 12:14:14.948689       6 log.go:172] (0xc0009b7550) (0xc000399400) Create stream
I0705 12:14:14.948705       6 log.go:172] (0xc0009b7550) (0xc000399400) Stream added, broadcasting: 5
I0705 12:14:14.949726       6 log.go:172] (0xc0009b7550) Reply frame received for 5
I0705 12:14:15.029951       6 log.go:172] (0xc0009b7550) Data frame received for 3
I0705 12:14:15.029984       6 log.go:172] (0xc001fec1e0) (3) Data frame handling
I0705 12:14:15.030002       6 log.go:172] (0xc001fec1e0) (3) Data frame sent
I0705 12:14:15.030014       6 log.go:172] (0xc0009b7550) Data frame received for 3
I0705 12:14:15.030024       6 log.go:172] (0xc001fec1e0) (3) Data frame handling
I0705 12:14:15.030113       6 log.go:172] (0xc0009b7550) Data frame received for 5
I0705 12:14:15.030142       6 log.go:172] (0xc000399400) (5) Data frame handling
I0705 12:14:15.031801       6 log.go:172] (0xc0009b7550) Data frame received for 1
I0705 12:14:15.031830       6 log.go:172] (0xc000399360) (1) Data frame handling
I0705 12:14:15.031863       6 log.go:172] (0xc000399360) (1) Data frame sent
I0705 12:14:15.031885       6 log.go:172] (0xc0009b7550) (0xc000399360) Stream removed, broadcasting: 1
I0705 12:14:15.031908       6 log.go:172] (0xc0009b7550) Go away received
I0705 12:14:15.032054       6 log.go:172] (0xc0009b7550) (0xc000399360) Stream removed, broadcasting: 1
I0705 12:14:15.032083       6 log.go:172] (0xc0009b7550) (0xc001fec1e0) Stream removed, broadcasting: 3
I0705 12:14:15.032102       6 log.go:172] (0xc0009b7550) (0xc000399400) Stream removed, broadcasting: 5
Jul  5 12:14:15.032: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:14:15.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-n2hxq" for this suite.
Jul  5 12:14:37.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:14:37.180: INFO: namespace: e2e-tests-pod-network-test-n2hxq, resource: bindings, ignored listing per whitelist
Jul  5 12:14:37.187: INFO: namespace e2e-tests-pod-network-test-n2hxq deletion completed in 22.150818782s

• [SLOW TEST:55.700 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:14:37.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jul  5 12:14:37.425: INFO: Waiting up to 5m0s for pod "downward-api-1842f55f-beb9-11ea-9e48-0242ac110017" in namespace "e2e-tests-downward-api-4ggzx" to be "success or failure"
Jul  5 12:14:37.474: INFO: Pod "downward-api-1842f55f-beb9-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 48.844219ms
Jul  5 12:14:39.565: INFO: Pod "downward-api-1842f55f-beb9-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13992418s
Jul  5 12:14:41.594: INFO: Pod "downward-api-1842f55f-beb9-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168556673s
Jul  5 12:14:43.598: INFO: Pod "downward-api-1842f55f-beb9-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.172626595s
STEP: Saw pod success
Jul  5 12:14:43.598: INFO: Pod "downward-api-1842f55f-beb9-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 12:14:43.601: INFO: Trying to get logs from node hunter-worker2 pod downward-api-1842f55f-beb9-11ea-9e48-0242ac110017 container dapi-container: 
STEP: delete the pod
Jul  5 12:14:43.619: INFO: Waiting for pod downward-api-1842f55f-beb9-11ea-9e48-0242ac110017 to disappear
Jul  5 12:14:43.624: INFO: Pod downward-api-1842f55f-beb9-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:14:43.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4ggzx" for this suite.
Jul  5 12:14:51.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:14:51.677: INFO: namespace: e2e-tests-downward-api-4ggzx, resource: bindings, ignored listing per whitelist
Jul  5 12:14:51.707: INFO: namespace e2e-tests-downward-api-4ggzx deletion completed in 8.079518012s

• [SLOW TEST:14.520 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
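Editor's note: the Downward API case above injects the node's IP into a container through a `fieldRef` env var. A minimal sketch of the kind of pod manifest that test creates — pod, container, and image names here are illustrative, not the test's actual generated values:

```python
import json

# Pod that exposes the host IP via the downward API, mirroring the
# "should provide host IP as an env var" test. Names are illustrative.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downward-api-demo"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "dapi-container",
            "image": "busybox",
            "command": ["sh", "-c", "echo $HOST_IP"],
            "env": [{
                "name": "HOST_IP",
                # fieldRef is resolved against the pod's own status at runtime
                "valueFrom": {"fieldRef": {"fieldPath": "status.hostIP"}},
            }],
        }],
    },
}
print(json.dumps(pod, indent=2))
```

The test then waits for the pod to succeed and greps its logs for the expected IP, which is why the log shows the "success or failure" polling loop.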
SSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:14:51.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-gfsdm in namespace e2e-tests-proxy-rjngx
I0705 12:14:51.919345       6 runners.go:184] Created replication controller with name: proxy-service-gfsdm, namespace: e2e-tests-proxy-rjngx, replica count: 1
I0705 12:14:52.969896       6 runners.go:184] proxy-service-gfsdm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0705 12:14:53.970111       6 runners.go:184] proxy-service-gfsdm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0705 12:14:54.970327       6 runners.go:184] proxy-service-gfsdm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0705 12:14:55.970566       6 runners.go:184] proxy-service-gfsdm Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul  5 12:14:56.026: INFO: setup took 4.226553553s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jul  5 12:14:56.108: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-rjngx/pods/proxy-service-gfsdm-n6d6r:1080/proxy/: ...
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul  5 12:15:14.506: INFO: Successfully updated pod "pod-update-activedeadlineseconds-2ba786fd-beb9-11ea-9e48-0242ac110017"
Jul  5 12:15:14.506: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-2ba786fd-beb9-11ea-9e48-0242ac110017" in namespace "e2e-tests-pods-xrzf6" to be "terminated due to deadline exceeded"
Jul  5 12:15:14.564: INFO: Pod "pod-update-activedeadlineseconds-2ba786fd-beb9-11ea-9e48-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 57.145336ms
Jul  5 12:15:16.570: INFO: Pod "pod-update-activedeadlineseconds-2ba786fd-beb9-11ea-9e48-0242ac110017": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.063405312s
Jul  5 12:15:16.570: INFO: Pod "pod-update-activedeadlineseconds-2ba786fd-beb9-11ea-9e48-0242ac110017" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:15:16.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-xrzf6" for this suite.
Jul  5 12:15:22.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:15:22.691: INFO: namespace: e2e-tests-pods-xrzf6, resource: bindings, ignored listing per whitelist
Jul  5 12:15:22.724: INFO: namespace e2e-tests-pods-xrzf6 deletion completed in 6.149987986s

• [SLOW TEST:12.937 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
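Editor's note: the activeDeadlineSeconds case above updates a running pod so the kubelet terminates it with `Phase="Failed", Reason="DeadlineExceeded"`. The update the test effectively applies amounts to a small spec patch; a hedged sketch of that patch body (the deadline value is illustrative):

```python
import json

# Spec patch that gives a running pod a short active deadline; once it
# takes effect the kubelet kills the pod and it fails with DeadlineExceeded.
patch = {"spec": {"activeDeadlineSeconds": 5}}
body = json.dumps(patch)
print(body)
```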
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:15:22.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul  5 12:15:23.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-7pc4j'
Jul  5 12:15:27.426: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul  5 12:15:27.426: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jul  5 12:15:27.478: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-rtpw6]
Jul  5 12:15:27.478: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-rtpw6" in namespace "e2e-tests-kubectl-7pc4j" to be "running and ready"
Jul  5 12:15:27.480: INFO: Pod "e2e-test-nginx-rc-rtpw6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.60075ms
Jul  5 12:15:29.664: INFO: Pod "e2e-test-nginx-rc-rtpw6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.186159463s
Jul  5 12:15:31.668: INFO: Pod "e2e-test-nginx-rc-rtpw6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190728314s
Jul  5 12:15:33.672: INFO: Pod "e2e-test-nginx-rc-rtpw6": Phase="Running", Reason="", readiness=true. Elapsed: 6.19404475s
Jul  5 12:15:33.672: INFO: Pod "e2e-test-nginx-rc-rtpw6" satisfied condition "running and ready"
Jul  5 12:15:33.672: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-rtpw6]
Jul  5 12:15:33.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-7pc4j'
Jul  5 12:15:33.859: INFO: stderr: ""
Jul  5 12:15:33.859: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Jul  5 12:15:33.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-7pc4j'
Jul  5 12:15:33.971: INFO: stderr: ""
Jul  5 12:15:33.972: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:15:33.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7pc4j" for this suite.
Jul  5 12:15:56.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:15:56.107: INFO: namespace: e2e-tests-kubectl-7pc4j, resource: bindings, ignored listing per whitelist
Jul  5 12:15:56.136: INFO: namespace e2e-tests-kubectl-7pc4j deletion completed in 22.1603668s

• [SLOW TEST:33.411 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:15:56.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jul  5 12:16:03.801: INFO: 0 pods remaining
Jul  5 12:16:03.801: INFO: 0 pods has nil DeletionTimestamp
Jul  5 12:16:03.801: INFO: 
Jul  5 12:16:05.103: INFO: 0 pods remaining
Jul  5 12:16:05.103: INFO: 0 pods has nil DeletionTimestamp
Jul  5 12:16:05.103: INFO: 
STEP: Gathering metrics
W0705 12:16:05.729658       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  5 12:16:05.729: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:16:05.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-2hg2r" for this suite.
Jul  5 12:16:12.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:16:12.133: INFO: namespace: e2e-tests-gc-2hg2r, resource: bindings, ignored listing per whitelist
Jul  5 12:16:12.181: INFO: namespace e2e-tests-gc-2hg2r deletion completed in 6.449275395s

• [SLOW TEST:16.045 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
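Editor's note: "keep the rc around until all its pods are deleted if the deleteOptions says so" is the foreground-cascading-delete behavior: the owner object is retained (with a deletion finalizer) until the garbage collector has removed its dependents. Assuming the v1 `DeleteOptions` form, a sketch of such a request body:

```python
import json

# DeleteOptions for a foreground cascading delete: the API server keeps the
# ReplicationController until the garbage collector has deleted all its pods.
delete_options = {
    "apiVersion": "v1",
    "kind": "DeleteOptions",
    "propagationPolicy": "Foreground",
}
print(json.dumps(delete_options))
```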
SSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:16:12.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jul  5 12:16:12.378: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-qb8vs" to be "success or failure"
Jul  5 12:16:12.510: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 132.187909ms
Jul  5 12:16:14.986: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.608109107s
Jul  5 12:16:17.281: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.90295269s
Jul  5 12:16:19.285: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.906557032s
Jul  5 12:16:21.289: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.910674494s
Jul  5 12:16:23.595: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.217063528s
Jul  5 12:16:25.787: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.408938435s
STEP: Saw pod success
Jul  5 12:16:25.787: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jul  5 12:16:25.831: INFO: Trying to get logs from node hunter-worker pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jul  5 12:16:26.039: INFO: Waiting for pod pod-host-path-test to disappear
Jul  5 12:16:26.070: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:16:26.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-qb8vs" for this suite.
Jul  5 12:16:40.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:16:40.185: INFO: namespace: e2e-tests-hostpath-qb8vs, resource: bindings, ignored listing per whitelist
Jul  5 12:16:40.190: INFO: namespace e2e-tests-hostpath-qb8vs deletion completed in 14.115299811s

• [SLOW TEST:28.009 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:16:40.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-61f484c4-beb9-11ea-9e48-0242ac110017
STEP: Creating a pod to test consume secrets
Jul  5 12:16:41.182: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6201a04b-beb9-11ea-9e48-0242ac110017" in namespace "e2e-tests-projected-kfzp4" to be "success or failure"
Jul  5 12:16:41.215: INFO: Pod "pod-projected-secrets-6201a04b-beb9-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 32.779318ms
Jul  5 12:16:43.421: INFO: Pod "pod-projected-secrets-6201a04b-beb9-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.239115999s
Jul  5 12:16:45.547: INFO: Pod "pod-projected-secrets-6201a04b-beb9-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.364748935s
Jul  5 12:16:47.654: INFO: Pod "pod-projected-secrets-6201a04b-beb9-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 6.47219981s
Jul  5 12:16:50.152: INFO: Pod "pod-projected-secrets-6201a04b-beb9-11ea-9e48-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 8.96951462s
Jul  5 12:16:52.272: INFO: Pod "pod-projected-secrets-6201a04b-beb9-11ea-9e48-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 11.0898201s
Jul  5 12:16:54.395: INFO: Pod "pod-projected-secrets-6201a04b-beb9-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.213355022s
STEP: Saw pod success
Jul  5 12:16:54.396: INFO: Pod "pod-projected-secrets-6201a04b-beb9-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 12:16:54.399: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-6201a04b-beb9-11ea-9e48-0242ac110017 container secret-volume-test: 
STEP: delete the pod
Jul  5 12:16:55.240: INFO: Waiting for pod pod-projected-secrets-6201a04b-beb9-11ea-9e48-0242ac110017 to disappear
Jul  5 12:16:55.300: INFO: Pod pod-projected-secrets-6201a04b-beb9-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:16:55.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kfzp4" for this suite.
Jul  5 12:17:01.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:17:01.464: INFO: namespace: e2e-tests-projected-kfzp4, resource: bindings, ignored listing per whitelist
Jul  5 12:17:01.503: INFO: namespace e2e-tests-projected-kfzp4 deletion completed in 6.195583144s

• [SLOW TEST:21.313 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
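Editor's note: the projected-secret case above mounts the same secret at more than one mount point in a single pod. A sketch of the projected volume involved — the volume and secret names are hypothetical stand-ins for the test's generated ones:

```python
import json

# Projected volume sourcing a secret; the test mounts a volume like this
# at two paths in the same pod. Names are hypothetical.
volume = {
    "name": "projected-secret-volume",
    "projected": {
        "sources": [
            {"secret": {"name": "projected-secret-test"}},
        ],
    },
}
print(json.dumps(volume))
```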
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:17:01.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-fd64
STEP: Creating a pod to test atomic-volume-subpath
Jul  5 12:17:01.703: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-fd64" in namespace "e2e-tests-subpath-nf4cc" to be "success or failure"
Jul  5 12:17:01.729: INFO: Pod "pod-subpath-test-projected-fd64": Phase="Pending", Reason="", readiness=false. Elapsed: 25.954169ms
Jul  5 12:17:05.720: INFO: Pod "pod-subpath-test-projected-fd64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017481695s
Jul  5 12:17:07.725: INFO: Pod "pod-subpath-test-projected-fd64": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021668242s
Jul  5 12:17:09.729: INFO: Pod "pod-subpath-test-projected-fd64": Phase="Pending", Reason="", readiness=false. Elapsed: 8.026003734s
Jul  5 12:17:12.042: INFO: Pod "pod-subpath-test-projected-fd64": Phase="Pending", Reason="", readiness=false. Elapsed: 10.339430387s
Jul  5 12:17:14.164: INFO: Pod "pod-subpath-test-projected-fd64": Phase="Pending", Reason="", readiness=false. Elapsed: 12.460858572s
Jul  5 12:17:16.167: INFO: Pod "pod-subpath-test-projected-fd64": Phase="Pending", Reason="", readiness=false. Elapsed: 14.464267236s
Jul  5 12:17:18.171: INFO: Pod "pod-subpath-test-projected-fd64": Phase="Pending", Reason="", readiness=false. Elapsed: 16.46791061s
Jul  5 12:17:20.175: INFO: Pod "pod-subpath-test-projected-fd64": Phase="Running", Reason="", readiness=false. Elapsed: 18.471643145s
Jul  5 12:17:22.179: INFO: Pod "pod-subpath-test-projected-fd64": Phase="Running", Reason="", readiness=false. Elapsed: 20.475716274s
Jul  5 12:17:24.184: INFO: Pod "pod-subpath-test-projected-fd64": Phase="Running", Reason="", readiness=false. Elapsed: 22.480966088s
Jul  5 12:17:26.188: INFO: Pod "pod-subpath-test-projected-fd64": Phase="Running", Reason="", readiness=false. Elapsed: 24.485213066s
Jul  5 12:17:28.194: INFO: Pod "pod-subpath-test-projected-fd64": Phase="Running", Reason="", readiness=false. Elapsed: 26.490649207s
Jul  5 12:17:30.197: INFO: Pod "pod-subpath-test-projected-fd64": Phase="Running", Reason="", readiness=false. Elapsed: 28.494225303s
Jul  5 12:17:32.200: INFO: Pod "pod-subpath-test-projected-fd64": Phase="Running", Reason="", readiness=false. Elapsed: 30.496752831s
Jul  5 12:17:34.202: INFO: Pod "pod-subpath-test-projected-fd64": Phase="Running", Reason="", readiness=false. Elapsed: 32.499510158s
Jul  5 12:17:36.210: INFO: Pod "pod-subpath-test-projected-fd64": Phase="Running", Reason="", readiness=false. Elapsed: 34.50751909s
Jul  5 12:17:38.213: INFO: Pod "pod-subpath-test-projected-fd64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.510386292s
STEP: Saw pod success
Jul  5 12:17:38.213: INFO: Pod "pod-subpath-test-projected-fd64" satisfied condition "success or failure"
Jul  5 12:17:38.215: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-projected-fd64 container test-container-subpath-projected-fd64: 
STEP: delete the pod
Jul  5 12:17:38.265: INFO: Waiting for pod pod-subpath-test-projected-fd64 to disappear
Jul  5 12:17:38.362: INFO: Pod pod-subpath-test-projected-fd64 no longer exists
STEP: Deleting pod pod-subpath-test-projected-fd64
Jul  5 12:17:38.362: INFO: Deleting pod "pod-subpath-test-projected-fd64" in namespace "e2e-tests-subpath-nf4cc"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:17:38.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-nf4cc" for this suite.
Jul  5 12:17:44.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:17:44.507: INFO: namespace: e2e-tests-subpath-nf4cc, resource: bindings, ignored listing per whitelist
Jul  5 12:17:44.537: INFO: namespace e2e-tests-subpath-nf4cc deletion completed in 6.170077309s

• [SLOW TEST:43.035 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
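Editor's note: the atomic-writer subpath case above exercises `subPath` mounts of a projected volume, where the container sees a single entry of the volume rather than the whole thing. A sketch of the relevant container fragment, with illustrative volume and key names:

```python
import json

# Container mounting only one path within a projected volume via subPath,
# as the atomic-writer subpath test does. Names are illustrative.
container = {
    "name": "test-container-subpath",
    "image": "busybox",
    "volumeMounts": [{
        "name": "projected-volume",
        "mountPath": "/test-volume",
        # Only this entry of the volume is visible at the mount point.
        "subPath": "projected-configmap-key",
    }],
}
print(json.dumps(container, indent=2))
```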
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:17:44.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-cpsdg
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cpsdg to expose endpoints map[]
Jul  5 12:17:44.766: INFO: Get endpoints failed (2.82163ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jul  5 12:17:45.769: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cpsdg exposes endpoints map[] (1.006335829s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-cpsdg
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cpsdg to expose endpoints map[pod1:[100]]
Jul  5 12:17:49.927: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cpsdg exposes endpoints map[pod1:[100]] (4.152530263s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-cpsdg
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cpsdg to expose endpoints map[pod1:[100] pod2:[101]]
Jul  5 12:17:54.882: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cpsdg exposes endpoints map[pod1:[100] pod2:[101]] (4.950771362s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-cpsdg
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cpsdg to expose endpoints map[pod2:[101]]
Jul  5 12:17:56.007: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cpsdg exposes endpoints map[pod2:[101]] (1.119737283s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-cpsdg
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cpsdg to expose endpoints map[]
Jul  5 12:17:56.022: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cpsdg exposes endpoints map[] (9.600946ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:17:56.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-cpsdg" for this suite.
Jul  5 12:18:02.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:18:02.145: INFO: namespace: e2e-tests-services-cpsdg, resource: bindings, ignored listing per whitelist
Jul  5 12:18:02.169: INFO: namespace e2e-tests-services-cpsdg deletion completed in 6.126073911s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:17.632 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:18:02.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-92646ecc-beb9-11ea-9e48-0242ac110017
STEP: Creating a pod to test consume secrets
Jul  5 12:18:02.260: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9264f824-beb9-11ea-9e48-0242ac110017" in namespace "e2e-tests-projected-lfd48" to be "success or failure"
Jul  5 12:18:02.274: INFO: Pod "pod-projected-secrets-9264f824-beb9-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 13.895989ms
Jul  5 12:18:04.568: INFO: Pod "pod-projected-secrets-9264f824-beb9-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.307161727s
Jul  5 12:18:06.572: INFO: Pod "pod-projected-secrets-9264f824-beb9-11ea-9e48-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.31146223s
Jul  5 12:18:08.577: INFO: Pod "pod-projected-secrets-9264f824-beb9-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.316538156s
STEP: Saw pod success
Jul  5 12:18:08.577: INFO: Pod "pod-projected-secrets-9264f824-beb9-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 12:18:08.580: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-9264f824-beb9-11ea-9e48-0242ac110017 container projected-secret-volume-test: 
STEP: delete the pod
Jul  5 12:18:08.668: INFO: Waiting for pod pod-projected-secrets-9264f824-beb9-11ea-9e48-0242ac110017 to disappear
Jul  5 12:18:08.678: INFO: Pod pod-projected-secrets-9264f824-beb9-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:18:08.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lfd48" for this suite.
Jul  5 12:18:14.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:18:14.728: INFO: namespace: e2e-tests-projected-lfd48, resource: bindings, ignored listing per whitelist
Jul  5 12:18:14.784: INFO: namespace e2e-tests-projected-lfd48 deletion completed in 6.101636999s

• [SLOW TEST:12.615 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:18:14.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-9a1bef0e-beb9-11ea-9e48-0242ac110017
STEP: Creating a pod to test consume secrets
Jul  5 12:18:15.231: INFO: Waiting up to 5m0s for pod "pod-secrets-9a202236-beb9-11ea-9e48-0242ac110017" in namespace "e2e-tests-secrets-znhj4" to be "success or failure"
Jul  5 12:18:15.235: INFO: Pod "pod-secrets-9a202236-beb9-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.441722ms
Jul  5 12:18:17.238: INFO: Pod "pod-secrets-9a202236-beb9-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006143685s
Jul  5 12:18:19.241: INFO: Pod "pod-secrets-9a202236-beb9-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009837273s
STEP: Saw pod success
Jul  5 12:18:19.241: INFO: Pod "pod-secrets-9a202236-beb9-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 12:18:19.245: INFO: Trying to get logs from node hunter-worker pod pod-secrets-9a202236-beb9-11ea-9e48-0242ac110017 container secret-env-test: 
STEP: delete the pod
Jul  5 12:18:19.284: INFO: Waiting for pod pod-secrets-9a202236-beb9-11ea-9e48-0242ac110017 to disappear
Jul  5 12:18:19.289: INFO: Pod pod-secrets-9a202236-beb9-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:18:19.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-znhj4" for this suite.
Jul  5 12:18:25.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:18:25.356: INFO: namespace: e2e-tests-secrets-znhj4, resource: bindings, ignored listing per whitelist
Jul  5 12:18:25.429: INFO: namespace e2e-tests-secrets-znhj4 deletion completed in 6.108579639s

• [SLOW TEST:10.644 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:18:25.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-a048936b-beb9-11ea-9e48-0242ac110017
STEP: Creating secret with name secret-projected-all-test-volume-a0489343-beb9-11ea-9e48-0242ac110017
STEP: Creating a pod to test Check all projections for projected volume plugin
Jul  5 12:18:25.572: INFO: Waiting up to 5m0s for pod "projected-volume-a04892cb-beb9-11ea-9e48-0242ac110017" in namespace "e2e-tests-projected-58vsw" to be "success or failure"
Jul  5 12:18:25.593: INFO: Pod "projected-volume-a04892cb-beb9-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 21.571448ms
Jul  5 12:18:27.643: INFO: Pod "projected-volume-a04892cb-beb9-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071027051s
Jul  5 12:18:29.646: INFO: Pod "projected-volume-a04892cb-beb9-11ea-9e48-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.074407514s
Jul  5 12:18:31.650: INFO: Pod "projected-volume-a04892cb-beb9-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.078366118s
STEP: Saw pod success
Jul  5 12:18:31.650: INFO: Pod "projected-volume-a04892cb-beb9-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 12:18:31.653: INFO: Trying to get logs from node hunter-worker2 pod projected-volume-a04892cb-beb9-11ea-9e48-0242ac110017 container projected-all-volume-test: 
STEP: delete the pod
Jul  5 12:18:31.687: INFO: Waiting for pod projected-volume-a04892cb-beb9-11ea-9e48-0242ac110017 to disappear
Jul  5 12:18:31.721: INFO: Pod projected-volume-a04892cb-beb9-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:18:31.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-58vsw" for this suite.
Jul  5 12:18:37.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:18:37.814: INFO: namespace: e2e-tests-projected-58vsw, resource: bindings, ignored listing per whitelist
Jul  5 12:18:37.818: INFO: namespace e2e-tests-projected-58vsw deletion completed in 6.092866764s

• [SLOW TEST:12.389 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:18:37.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  5 12:18:37.895: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:18:39.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-npgqh" for this suite.
Jul  5 12:18:45.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:18:45.390: INFO: namespace: e2e-tests-custom-resource-definition-npgqh, resource: bindings, ignored listing per whitelist
Jul  5 12:18:45.401: INFO: namespace e2e-tests-custom-resource-definition-npgqh deletion completed in 6.09024349s

• [SLOW TEST:7.583 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:18:45.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Jul  5 12:18:45.531: INFO: Waiting up to 5m0s for pod "pod-ac2fb93d-beb9-11ea-9e48-0242ac110017" in namespace "e2e-tests-emptydir-9k794" to be "success or failure"
Jul  5 12:18:45.535: INFO: Pod "pod-ac2fb93d-beb9-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.5803ms
Jul  5 12:18:47.728: INFO: Pod "pod-ac2fb93d-beb9-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197475781s
Jul  5 12:18:49.746: INFO: Pod "pod-ac2fb93d-beb9-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.215552253s
STEP: Saw pod success
Jul  5 12:18:49.746: INFO: Pod "pod-ac2fb93d-beb9-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 12:18:49.750: INFO: Trying to get logs from node hunter-worker pod pod-ac2fb93d-beb9-11ea-9e48-0242ac110017 container test-container: 
STEP: delete the pod
Jul  5 12:18:49.819: INFO: Waiting for pod pod-ac2fb93d-beb9-11ea-9e48-0242ac110017 to disappear
Jul  5 12:18:49.829: INFO: Pod pod-ac2fb93d-beb9-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:18:49.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-9k794" for this suite.
Jul  5 12:18:57.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:18:57.458: INFO: namespace: e2e-tests-emptydir-9k794, resource: bindings, ignored listing per whitelist
Jul  5 12:18:57.506: INFO: namespace e2e-tests-emptydir-9k794 deletion completed in 7.671220943s

• [SLOW TEST:12.104 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:18:57.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Jul  5 12:18:57.686: INFO: Waiting up to 5m0s for pod "client-containers-b3638cf7-beb9-11ea-9e48-0242ac110017" in namespace "e2e-tests-containers-xk4p9" to be "success or failure"
Jul  5 12:18:57.689: INFO: Pod "client-containers-b3638cf7-beb9-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.277275ms
Jul  5 12:18:59.692: INFO: Pod "client-containers-b3638cf7-beb9-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00597414s
Jul  5 12:19:01.695: INFO: Pod "client-containers-b3638cf7-beb9-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00912026s
STEP: Saw pod success
Jul  5 12:19:01.695: INFO: Pod "client-containers-b3638cf7-beb9-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 12:19:01.698: INFO: Trying to get logs from node hunter-worker2 pod client-containers-b3638cf7-beb9-11ea-9e48-0242ac110017 container test-container: 
STEP: delete the pod
Jul  5 12:19:02.052: INFO: Waiting for pod client-containers-b3638cf7-beb9-11ea-9e48-0242ac110017 to disappear
Jul  5 12:19:02.062: INFO: Pod client-containers-b3638cf7-beb9-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:19:02.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-xk4p9" for this suite.
Jul  5 12:19:10.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:19:10.100: INFO: namespace: e2e-tests-containers-xk4p9, resource: bindings, ignored listing per whitelist
Jul  5 12:19:10.160: INFO: namespace e2e-tests-containers-xk4p9 deletion completed in 8.093341747s

• [SLOW TEST:12.654 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:19:10.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Jul  5 12:19:10.281: INFO: Waiting up to 5m0s for pod "client-containers-baed4ff2-beb9-11ea-9e48-0242ac110017" in namespace "e2e-tests-containers-k62bt" to be "success or failure"
Jul  5 12:19:10.314: INFO: Pod "client-containers-baed4ff2-beb9-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 33.627181ms
Jul  5 12:19:12.318: INFO: Pod "client-containers-baed4ff2-beb9-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037338086s
Jul  5 12:19:14.323: INFO: Pod "client-containers-baed4ff2-beb9-11ea-9e48-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.042192086s
Jul  5 12:19:16.327: INFO: Pod "client-containers-baed4ff2-beb9-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.046840106s
STEP: Saw pod success
Jul  5 12:19:16.327: INFO: Pod "client-containers-baed4ff2-beb9-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 12:19:16.331: INFO: Trying to get logs from node hunter-worker pod client-containers-baed4ff2-beb9-11ea-9e48-0242ac110017 container test-container: 
STEP: delete the pod
Jul  5 12:19:16.358: INFO: Waiting for pod client-containers-baed4ff2-beb9-11ea-9e48-0242ac110017 to disappear
Jul  5 12:19:16.416: INFO: Pod client-containers-baed4ff2-beb9-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:19:16.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-k62bt" for this suite.
Jul  5 12:19:22.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:19:22.456: INFO: namespace: e2e-tests-containers-k62bt, resource: bindings, ignored listing per whitelist
Jul  5 12:19:22.510: INFO: namespace e2e-tests-containers-k62bt deletion completed in 6.089702363s

• [SLOW TEST:12.350 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:19:22.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jul  5 12:19:22.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mzfsd'
Jul  5 12:19:22.901: INFO: stderr: ""
Jul  5 12:19:22.901: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  5 12:19:22.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mzfsd'
Jul  5 12:19:23.043: INFO: stderr: ""
Jul  5 12:19:23.043: INFO: stdout: "update-demo-nautilus-6qwvx update-demo-nautilus-7qsll "
Jul  5 12:19:23.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6qwvx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mzfsd'
Jul  5 12:19:23.171: INFO: stderr: ""
Jul  5 12:19:23.172: INFO: stdout: ""
Jul  5 12:19:23.172: INFO: update-demo-nautilus-6qwvx is created but not running
Jul  5 12:19:28.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mzfsd'
Jul  5 12:19:28.270: INFO: stderr: ""
Jul  5 12:19:28.270: INFO: stdout: "update-demo-nautilus-6qwvx update-demo-nautilus-7qsll "
Jul  5 12:19:28.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6qwvx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mzfsd'
Jul  5 12:19:28.365: INFO: stderr: ""
Jul  5 12:19:28.365: INFO: stdout: "true"
Jul  5 12:19:28.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6qwvx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mzfsd'
Jul  5 12:19:28.460: INFO: stderr: ""
Jul  5 12:19:28.460: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  5 12:19:28.460: INFO: validating pod update-demo-nautilus-6qwvx
Jul  5 12:19:28.464: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  5 12:19:28.464: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  5 12:19:28.464: INFO: update-demo-nautilus-6qwvx is verified up and running
Jul  5 12:19:28.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7qsll -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mzfsd'
Jul  5 12:19:28.564: INFO: stderr: ""
Jul  5 12:19:28.564: INFO: stdout: "true"
Jul  5 12:19:28.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7qsll -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mzfsd'
Jul  5 12:19:28.662: INFO: stderr: ""
Jul  5 12:19:28.662: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  5 12:19:28.662: INFO: validating pod update-demo-nautilus-7qsll
Jul  5 12:19:28.665: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  5 12:19:28.665: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  5 12:19:28.666: INFO: update-demo-nautilus-7qsll is verified up and running
STEP: scaling down the replication controller
Jul  5 12:19:28.667: INFO: scanned /root for discovery docs: 
Jul  5 12:19:28.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-mzfsd'
Jul  5 12:19:29.828: INFO: stderr: ""
Jul  5 12:19:29.828: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  5 12:19:29.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mzfsd'
Jul  5 12:19:29.930: INFO: stderr: ""
Jul  5 12:19:29.930: INFO: stdout: "update-demo-nautilus-6qwvx update-demo-nautilus-7qsll "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jul  5 12:19:34.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mzfsd'
Jul  5 12:19:35.015: INFO: stderr: ""
Jul  5 12:19:35.015: INFO: stdout: "update-demo-nautilus-6qwvx update-demo-nautilus-7qsll "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jul  5 12:19:40.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mzfsd'
Jul  5 12:19:40.148: INFO: stderr: ""
Jul  5 12:19:40.148: INFO: stdout: "update-demo-nautilus-6qwvx update-demo-nautilus-7qsll "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jul  5 12:19:45.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mzfsd'
Jul  5 12:19:45.251: INFO: stderr: ""
Jul  5 12:19:45.251: INFO: stdout: "update-demo-nautilus-6qwvx "
Jul  5 12:19:45.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6qwvx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mzfsd'
Jul  5 12:19:45.351: INFO: stderr: ""
Jul  5 12:19:45.351: INFO: stdout: "true"
Jul  5 12:19:45.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6qwvx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mzfsd'
Jul  5 12:19:45.456: INFO: stderr: ""
Jul  5 12:19:45.457: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  5 12:19:45.457: INFO: validating pod update-demo-nautilus-6qwvx
Jul  5 12:19:45.460: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  5 12:19:45.460: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  5 12:19:45.460: INFO: update-demo-nautilus-6qwvx is verified up and running
STEP: scaling up the replication controller
Jul  5 12:19:45.462: INFO: scanned /root for discovery docs: 
Jul  5 12:19:45.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-mzfsd'
Jul  5 12:19:46.609: INFO: stderr: ""
Jul  5 12:19:46.609: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  5 12:19:46.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mzfsd'
Jul  5 12:19:46.709: INFO: stderr: ""
Jul  5 12:19:46.709: INFO: stdout: "update-demo-nautilus-6qwvx update-demo-nautilus-rbt9s "
Jul  5 12:19:46.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6qwvx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mzfsd'
Jul  5 12:19:46.809: INFO: stderr: ""
Jul  5 12:19:46.809: INFO: stdout: "true"
Jul  5 12:19:46.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6qwvx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mzfsd'
Jul  5 12:19:46.906: INFO: stderr: ""
Jul  5 12:19:46.907: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  5 12:19:46.907: INFO: validating pod update-demo-nautilus-6qwvx
Jul  5 12:19:46.915: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  5 12:19:46.915: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  5 12:19:46.915: INFO: update-demo-nautilus-6qwvx is verified up and running
Jul  5 12:19:46.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rbt9s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mzfsd'
Jul  5 12:19:47.012: INFO: stderr: ""
Jul  5 12:19:47.012: INFO: stdout: ""
Jul  5 12:19:47.012: INFO: update-demo-nautilus-rbt9s is created but not running
Jul  5 12:19:52.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mzfsd'
Jul  5 12:19:52.111: INFO: stderr: ""
Jul  5 12:19:52.111: INFO: stdout: "update-demo-nautilus-6qwvx update-demo-nautilus-rbt9s "
Jul  5 12:19:52.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6qwvx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mzfsd'
Jul  5 12:19:52.211: INFO: stderr: ""
Jul  5 12:19:52.211: INFO: stdout: "true"
Jul  5 12:19:52.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6qwvx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mzfsd'
Jul  5 12:19:52.318: INFO: stderr: ""
Jul  5 12:19:52.318: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  5 12:19:52.318: INFO: validating pod update-demo-nautilus-6qwvx
Jul  5 12:19:52.322: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  5 12:19:52.322: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  5 12:19:52.322: INFO: update-demo-nautilus-6qwvx is verified up and running
Jul  5 12:19:52.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rbt9s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mzfsd'
Jul  5 12:19:52.434: INFO: stderr: ""
Jul  5 12:19:52.434: INFO: stdout: "true"
Jul  5 12:19:52.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rbt9s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mzfsd'
Jul  5 12:19:52.523: INFO: stderr: ""
Jul  5 12:19:52.523: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  5 12:19:52.523: INFO: validating pod update-demo-nautilus-rbt9s
Jul  5 12:19:52.527: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  5 12:19:52.527: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  5 12:19:52.527: INFO: update-demo-nautilus-rbt9s is verified up and running
STEP: using delete to clean up resources
Jul  5 12:19:52.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-mzfsd'
Jul  5 12:19:52.630: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  5 12:19:52.630: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jul  5 12:19:52.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-mzfsd'
Jul  5 12:19:52.735: INFO: stderr: "No resources found.\n"
Jul  5 12:19:52.735: INFO: stdout: ""
Jul  5 12:19:52.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-mzfsd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul  5 12:19:52.860: INFO: stderr: ""
Jul  5 12:19:52.860: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:19:52.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mzfsd" for this suite.
Jul  5 12:20:15.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:20:15.167: INFO: namespace: e2e-tests-kubectl-mzfsd, resource: bindings, ignored listing per whitelist
Jul  5 12:20:15.209: INFO: namespace e2e-tests-kubectl-mzfsd deletion completed in 22.345036849s

• [SLOW TEST:52.700 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
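The scale check above polls `kubectl get pods` with a go-template that prints one space-separated pod name per item, then compares the name count against the desired replicas (the repeated `Replicas for name=update-demo: expected=1 actual=2` lines). A minimal Python sketch of that comparison; the helper names are illustrative, not part of the e2e framework:

```python
def parse_pod_names(stdout: str) -> list[str]:
    # The template `{{range.items}}{{.metadata.name}} {{end}}` emits
    # space-separated names with a trailing space, e.g.
    # "update-demo-nautilus-6qwvx update-demo-nautilus-7qsll "
    return stdout.split()

def replicas_match(stdout: str, expected: int) -> bool:
    # Mirrors the polling condition behind
    # "Replicas for name=update-demo: expected=1 actual=2"
    return len(parse_pod_names(stdout)) == expected

# Sample stdout taken from the log above
out = "update-demo-nautilus-6qwvx update-demo-nautilus-7qsll "
print(replicas_match(out, 1))  # the test keeps polling until this becomes True
```

Once the counts match, the test validates each pod individually (running state, image, and served content) before declaring it "verified up and running".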
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:20:15.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jul  5 12:20:15.348: INFO: Waiting up to 5m0s for pod "client-containers-e1b87e35-beb9-11ea-9e48-0242ac110017" in namespace "e2e-tests-containers-8ssx8" to be "success or failure"
Jul  5 12:20:15.352: INFO: Pod "client-containers-e1b87e35-beb9-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.895153ms
Jul  5 12:20:17.763: INFO: Pod "client-containers-e1b87e35-beb9-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.414931s
Jul  5 12:20:19.767: INFO: Pod "client-containers-e1b87e35-beb9-11ea-9e48-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.419134412s
Jul  5 12:20:21.771: INFO: Pod "client-containers-e1b87e35-beb9-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.422879159s
STEP: Saw pod success
Jul  5 12:20:21.771: INFO: Pod "client-containers-e1b87e35-beb9-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 12:20:21.773: INFO: Trying to get logs from node hunter-worker pod client-containers-e1b87e35-beb9-11ea-9e48-0242ac110017 container test-container: 
STEP: delete the pod
Jul  5 12:20:21.791: INFO: Waiting for pod client-containers-e1b87e35-beb9-11ea-9e48-0242ac110017 to disappear
Jul  5 12:20:21.795: INFO: Pod client-containers-e1b87e35-beb9-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:20:21.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-8ssx8" for this suite.
Jul  5 12:20:27.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:20:27.862: INFO: namespace: e2e-tests-containers-8ssx8, resource: bindings, ignored listing per whitelist
Jul  5 12:20:27.920: INFO: namespace e2e-tests-containers-8ssx8 deletion completed in 6.095434667s

• [SLOW TEST:12.710 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
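The "override all" pod in the Docker Containers test sets both `command` and `args` on the container, which replaces the image's ENTRYPOINT and CMD respectively. A hypothetical manifest of the same shape (the pod name, image, and arguments here are illustrative, not the exact spec the test submits):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                   # illustrative image
    command: ["/bin/echo"]           # overrides the image's ENTRYPOINT
    args: ["override", "all"]        # overrides the image's CMD
```

The test then waits for the pod to reach `Succeeded` and reads the container log to confirm the overridden command actually ran.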
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:20:27.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-e94758e4-beb9-11ea-9e48-0242ac110017
STEP: Creating a pod to test consume configMaps
Jul  5 12:20:28.081: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e94f9e1c-beb9-11ea-9e48-0242ac110017" in namespace "e2e-tests-projected-g2z88" to be "success or failure"
Jul  5 12:20:28.117: INFO: Pod "pod-projected-configmaps-e94f9e1c-beb9-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 35.727018ms
Jul  5 12:20:30.122: INFO: Pod "pod-projected-configmaps-e94f9e1c-beb9-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040233712s
Jul  5 12:20:32.125: INFO: Pod "pod-projected-configmaps-e94f9e1c-beb9-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043736718s
STEP: Saw pod success
Jul  5 12:20:32.125: INFO: Pod "pod-projected-configmaps-e94f9e1c-beb9-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 12:20:32.128: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-e94f9e1c-beb9-11ea-9e48-0242ac110017 container projected-configmap-volume-test: 
STEP: delete the pod
Jul  5 12:20:32.183: INFO: Waiting for pod pod-projected-configmaps-e94f9e1c-beb9-11ea-9e48-0242ac110017 to disappear
Jul  5 12:20:32.190: INFO: Pod pod-projected-configmaps-e94f9e1c-beb9-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:20:32.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-g2z88" for this suite.
Jul  5 12:20:38.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:20:38.235: INFO: namespace: e2e-tests-projected-g2z88, resource: bindings, ignored listing per whitelist
Jul  5 12:20:38.286: INFO: namespace e2e-tests-projected-g2z88 deletion completed in 6.09407507s

• [SLOW TEST:10.366 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
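"With mappings" in the projected configMap test means a key is remapped to a different file path inside the volume via `items`. An illustrative manifest under assumed names (the configMap name, key, and paths are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map   # illustrative
          items:
          - key: data-2
            path: path/to/data-2   # the "mapping": key exposed under this path
```

The pod runs to completion and the test verifies the file's contents from the container log, as in the `Saw pod success` steps above.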
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:20:38.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Jul  5 12:20:38.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8gxqp'
Jul  5 12:20:38.728: INFO: stderr: ""
Jul  5 12:20:38.728: INFO: stdout: "pod/pause created\n"
Jul  5 12:20:38.728: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jul  5 12:20:38.728: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-8gxqp" to be "running and ready"
Jul  5 12:20:38.742: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 14.008944ms
Jul  5 12:20:40.745: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017052315s
Jul  5 12:20:42.749: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.021364033s
Jul  5 12:20:42.749: INFO: Pod "pause" satisfied condition "running and ready"
Jul  5 12:20:42.749: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Jul  5 12:20:42.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-8gxqp'
Jul  5 12:20:42.853: INFO: stderr: ""
Jul  5 12:20:42.853: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jul  5 12:20:42.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-8gxqp'
Jul  5 12:20:42.961: INFO: stderr: ""
Jul  5 12:20:42.961: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jul  5 12:20:42.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-8gxqp'
Jul  5 12:20:43.065: INFO: stderr: ""
Jul  5 12:20:43.065: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jul  5 12:20:43.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-8gxqp'
Jul  5 12:20:43.163: INFO: stderr: ""
Jul  5 12:20:43.163: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          5s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Jul  5 12:20:43.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8gxqp'
Jul  5 12:20:43.332: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  5 12:20:43.332: INFO: stdout: "pod \"pause\" force deleted\n"
Jul  5 12:20:43.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-8gxqp'
Jul  5 12:20:43.441: INFO: stderr: "No resources found.\n"
Jul  5 12:20:43.441: INFO: stdout: ""
Jul  5 12:20:43.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-8gxqp -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul  5 12:20:43.543: INFO: stderr: ""
Jul  5 12:20:43.543: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:20:43.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8gxqp" for this suite.
Jul  5 12:20:49.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:20:49.679: INFO: namespace: e2e-tests-kubectl-8gxqp, resource: bindings, ignored listing per whitelist
Jul  5 12:20:49.679: INFO: namespace e2e-tests-kubectl-8gxqp deletion completed in 6.131321547s

• [SLOW TEST:11.392 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
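The label test exercises the two `kubectl label` argument forms seen above: `testing-label=testing-label-value` adds or updates a label, while a trailing dash (`testing-label-`) removes it. A small Python sketch of those argument semantics applied to a pod's label map (purely illustrative, not the kubectl implementation):

```python
def apply_label_arg(labels: dict, arg: str) -> dict:
    """Apply one kubectl-style label argument to a copy of a label map."""
    labels = dict(labels)
    if arg.endswith("-") and "=" not in arg:
        labels.pop(arg[:-1], None)      # "testing-label-" removes the key
    else:
        key, value = arg.split("=", 1)  # "testing-label=testing-label-value"
        labels[key] = value
    return labels

labels = apply_label_arg({}, "testing-label=testing-label-value")
print(labels)  # {'testing-label': 'testing-label-value'}
labels = apply_label_arg(labels, "testing-label-")
print(labels)  # {}
```

The test confirms each step through `kubectl get pod pause -L testing-label`, whose `TESTING-LABEL` column shows the value after labeling and is empty after removal.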
SSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:20:49.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-zxxq9
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul  5 12:20:49.761: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul  5 12:21:15.946: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.253 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-zxxq9 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  5 12:21:15.946: INFO: >>> kubeConfig: /root/.kube/config
I0705 12:21:15.982799       6 log.go:172] (0xc0000ead10) (0xc001b414a0) Create stream
I0705 12:21:15.982825       6 log.go:172] (0xc0000ead10) (0xc001b414a0) Stream added, broadcasting: 1
I0705 12:21:15.984827       6 log.go:172] (0xc0000ead10) Reply frame received for 1
I0705 12:21:15.984887       6 log.go:172] (0xc0000ead10) (0xc001f1a640) Create stream
I0705 12:21:15.984903       6 log.go:172] (0xc0000ead10) (0xc001f1a640) Stream added, broadcasting: 3
I0705 12:21:15.986050       6 log.go:172] (0xc0000ead10) Reply frame received for 3
I0705 12:21:15.986079       6 log.go:172] (0xc0000ead10) (0xc001b41540) Create stream
I0705 12:21:15.986089       6 log.go:172] (0xc0000ead10) (0xc001b41540) Stream added, broadcasting: 5
I0705 12:21:15.986998       6 log.go:172] (0xc0000ead10) Reply frame received for 5
I0705 12:21:17.052669       6 log.go:172] (0xc0000ead10) Data frame received for 3
I0705 12:21:17.052710       6 log.go:172] (0xc001f1a640) (3) Data frame handling
I0705 12:21:17.052748       6 log.go:172] (0xc001f1a640) (3) Data frame sent
I0705 12:21:17.052759       6 log.go:172] (0xc0000ead10) Data frame received for 3
I0705 12:21:17.052768       6 log.go:172] (0xc001f1a640) (3) Data frame handling
I0705 12:21:17.052927       6 log.go:172] (0xc0000ead10) Data frame received for 5
I0705 12:21:17.052948       6 log.go:172] (0xc001b41540) (5) Data frame handling
I0705 12:21:17.055141       6 log.go:172] (0xc0000ead10) Data frame received for 1
I0705 12:21:17.055162       6 log.go:172] (0xc001b414a0) (1) Data frame handling
I0705 12:21:17.055172       6 log.go:172] (0xc001b414a0) (1) Data frame sent
I0705 12:21:17.055182       6 log.go:172] (0xc0000ead10) (0xc001b414a0) Stream removed, broadcasting: 1
I0705 12:21:17.055237       6 log.go:172] (0xc0000ead10) Go away received
I0705 12:21:17.055261       6 log.go:172] (0xc0000ead10) (0xc001b414a0) Stream removed, broadcasting: 1
I0705 12:21:17.055285       6 log.go:172] (0xc0000ead10) (0xc001f1a640) Stream removed, broadcasting: 3
I0705 12:21:17.055296       6 log.go:172] (0xc0000ead10) (0xc001b41540) Stream removed, broadcasting: 5
Jul  5 12:21:17.055: INFO: Found all expected endpoints: [netserver-0]
Jul  5 12:21:17.058: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.188 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-zxxq9 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  5 12:21:17.059: INFO: >>> kubeConfig: /root/.kube/config
I0705 12:21:17.092156       6 log.go:172] (0xc000ece000) (0xc001f1a820) Create stream
I0705 12:21:17.092194       6 log.go:172] (0xc000ece000) (0xc001f1a820) Stream added, broadcasting: 1
I0705 12:21:17.099266       6 log.go:172] (0xc000ece000) Reply frame received for 1
I0705 12:21:17.099310       6 log.go:172] (0xc000ece000) (0xc00198ef00) Create stream
I0705 12:21:17.099319       6 log.go:172] (0xc000ece000) (0xc00198ef00) Stream added, broadcasting: 3
I0705 12:21:17.100311       6 log.go:172] (0xc000ece000) Reply frame received for 3
I0705 12:21:17.100391       6 log.go:172] (0xc000ece000) (0xc0021e17c0) Create stream
I0705 12:21:17.100420       6 log.go:172] (0xc000ece000) (0xc0021e17c0) Stream added, broadcasting: 5
I0705 12:21:17.101584       6 log.go:172] (0xc000ece000) Reply frame received for 5
I0705 12:21:18.183269       6 log.go:172] (0xc000ece000) Data frame received for 3
I0705 12:21:18.183376       6 log.go:172] (0xc00198ef00) (3) Data frame handling
I0705 12:21:18.183435       6 log.go:172] (0xc00198ef00) (3) Data frame sent
I0705 12:21:18.183468       6 log.go:172] (0xc000ece000) Data frame received for 3
I0705 12:21:18.183491       6 log.go:172] (0xc00198ef00) (3) Data frame handling
I0705 12:21:18.183790       6 log.go:172] (0xc000ece000) Data frame received for 5
I0705 12:21:18.183822       6 log.go:172] (0xc0021e17c0) (5) Data frame handling
I0705 12:21:18.186020       6 log.go:172] (0xc000ece000) Data frame received for 1
I0705 12:21:18.186053       6 log.go:172] (0xc001f1a820) (1) Data frame handling
I0705 12:21:18.186076       6 log.go:172] (0xc001f1a820) (1) Data frame sent
I0705 12:21:18.186098       6 log.go:172] (0xc000ece000) (0xc001f1a820) Stream removed, broadcasting: 1
I0705 12:21:18.186122       6 log.go:172] (0xc000ece000) Go away received
I0705 12:21:18.186293       6 log.go:172] (0xc000ece000) (0xc001f1a820) Stream removed, broadcasting: 1
I0705 12:21:18.186320       6 log.go:172] (0xc000ece000) (0xc00198ef00) Stream removed, broadcasting: 3
I0705 12:21:18.186333       6 log.go:172] (0xc000ece000) (0xc0021e17c0) Stream removed, broadcasting: 5
Jul  5 12:21:18.186: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:21:18.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-zxxq9" for this suite.
Jul  5 12:21:42.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:21:42.356: INFO: namespace: e2e-tests-pod-network-test-zxxq9, resource: bindings, ignored listing per whitelist
Jul  5 12:21:42.426: INFO: namespace e2e-tests-pod-network-test-zxxq9 deletion completed in 24.234929629s

• [SLOW TEST:52.747 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
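Each granular networking check above execs `echo 'hostName' | nc -w 1 -u <pod-ip> 8081` inside a helper pod and expects the target netserver pod to answer with its hostname (`netserver-0`, `netserver-1`). A self-contained sketch of that UDP request/response over localhost; the port choice and reply string are illustrative, and the real netserver lives in the e2e test images:

```python
import socket
import threading

def udp_netserver(sock: socket.socket, reply: bytes) -> None:
    # Answer one "hostName" probe with this server's name, like the e2e netserver.
    data, addr = sock.recvfrom(1024)
    if data.strip() == b"hostName":
        sock.sendto(reply, addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # ephemeral port instead of 8081
port = server.getsockname()[1]
threading.Thread(target=udp_netserver, args=(server, b"netserver-0"),
                 daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(1)                   # mirrors nc's -w 1 timeout
client.sendto(b"hostName", ("127.0.0.1", port))
answer, _ = client.recvfrom(1024)
print(answer.decode())  # netserver-0
```

The surrounding `log.go:172` frames are the SPDY exec transport (streams 1/3/5 are the error, stdout, and stderr channels) carrying exactly this exchange.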
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:21:42.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Jul  5 12:21:42.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jul  5 12:21:42.727: INFO: stderr: ""
Jul  5 12:21:42.727: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:21:42.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bmd5l" for this suite.
Jul  5 12:21:48.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:21:48.811: INFO: namespace: e2e-tests-kubectl-bmd5l, resource: bindings, ignored listing per whitelist
Jul  5 12:21:48.832: INFO: namespace e2e-tests-kubectl-bmd5l deletion completed in 6.100768613s

• [SLOW TEST:6.405 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
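The api-versions check amounts to looking for the core group `v1` in `kubectl api-versions` output, which is one group/version per line. Parsing an abridged copy of the stdout captured in the log above:

```python
# stdout from `kubectl api-versions`, abridged from the log above
stdout = (
    "admissionregistration.k8s.io/v1beta1\n"
    "apps/v1\n"
    "batch/v1\n"
    "networking.k8s.io/v1\n"
    "v1\n"
)
versions = stdout.splitlines()
assert "v1" in versions  # the conformance condition: the core v1 API is served
print(len(versions))     # 5
```

In the full output at 12:21:42 the core group appears as the final `v1` line after all the named API groups.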
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:21:48.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-nrgf
STEP: Creating a pod to test atomic-volume-subpath
Jul  5 12:21:48.992: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-nrgf" in namespace "e2e-tests-subpath-sl4bg" to be "success or failure"
Jul  5 12:21:48.995: INFO: Pod "pod-subpath-test-configmap-nrgf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.206733ms
Jul  5 12:21:51.023: INFO: Pod "pod-subpath-test-configmap-nrgf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031085116s
Jul  5 12:21:53.034: INFO: Pod "pod-subpath-test-configmap-nrgf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042775258s
Jul  5 12:21:55.038: INFO: Pod "pod-subpath-test-configmap-nrgf": Phase="Running", Reason="", readiness=true. Elapsed: 6.046449763s
Jul  5 12:21:57.043: INFO: Pod "pod-subpath-test-configmap-nrgf": Phase="Running", Reason="", readiness=false. Elapsed: 8.050902282s
Jul  5 12:21:59.047: INFO: Pod "pod-subpath-test-configmap-nrgf": Phase="Running", Reason="", readiness=false. Elapsed: 10.05572588s
Jul  5 12:22:01.052: INFO: Pod "pod-subpath-test-configmap-nrgf": Phase="Running", Reason="", readiness=false. Elapsed: 12.060284232s
Jul  5 12:22:03.055: INFO: Pod "pod-subpath-test-configmap-nrgf": Phase="Running", Reason="", readiness=false. Elapsed: 14.06379419s
Jul  5 12:22:05.060: INFO: Pod "pod-subpath-test-configmap-nrgf": Phase="Running", Reason="", readiness=false. Elapsed: 16.068206693s
Jul  5 12:22:08.664: INFO: Pod "pod-subpath-test-configmap-nrgf": Phase="Running", Reason="", readiness=false. Elapsed: 19.672338804s
Jul  5 12:22:10.669: INFO: Pod "pod-subpath-test-configmap-nrgf": Phase="Running", Reason="", readiness=false. Elapsed: 21.676927824s
Jul  5 12:22:12.673: INFO: Pod "pod-subpath-test-configmap-nrgf": Phase="Running", Reason="", readiness=false. Elapsed: 23.681538273s
Jul  5 12:22:14.678: INFO: Pod "pod-subpath-test-configmap-nrgf": Phase="Running", Reason="", readiness=false. Elapsed: 25.686169673s
Jul  5 12:22:16.687: INFO: Pod "pod-subpath-test-configmap-nrgf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.695830591s
STEP: Saw pod success
Jul  5 12:22:16.688: INFO: Pod "pod-subpath-test-configmap-nrgf" satisfied condition "success or failure"
Jul  5 12:22:16.691: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-nrgf container test-container-subpath-configmap-nrgf: 
STEP: delete the pod
Jul  5 12:22:16.733: INFO: Waiting for pod pod-subpath-test-configmap-nrgf to disappear
Jul  5 12:22:16.755: INFO: Pod pod-subpath-test-configmap-nrgf no longer exists
STEP: Deleting pod pod-subpath-test-configmap-nrgf
Jul  5 12:22:16.755: INFO: Deleting pod "pod-subpath-test-configmap-nrgf" in namespace "e2e-tests-subpath-sl4bg"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:22:16.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-sl4bg" for this suite.
Jul  5 12:22:22.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:22:22.835: INFO: namespace: e2e-tests-subpath-sl4bg, resource: bindings, ignored listing per whitelist
Jul  5 12:22:22.928: INFO: namespace e2e-tests-subpath-sl4bg deletion completed in 6.165645942s

• [SLOW TEST:34.096 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:22:22.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-2ddc7b75-beba-11ea-9e48-0242ac110017
STEP: Creating a pod to test consume secrets
Jul  5 12:22:23.154: INFO: Waiting up to 5m0s for pod "pod-secrets-2de65c91-beba-11ea-9e48-0242ac110017" in namespace "e2e-tests-secrets-x57m6" to be "success or failure"
Jul  5 12:22:23.201: INFO: Pod "pod-secrets-2de65c91-beba-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 47.239433ms
Jul  5 12:22:25.205: INFO: Pod "pod-secrets-2de65c91-beba-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051427061s
Jul  5 12:22:27.209: INFO: Pod "pod-secrets-2de65c91-beba-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055243645s
STEP: Saw pod success
Jul  5 12:22:27.209: INFO: Pod "pod-secrets-2de65c91-beba-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 12:22:27.214: INFO: Trying to get logs from node hunter-worker pod pod-secrets-2de65c91-beba-11ea-9e48-0242ac110017 container secret-volume-test: 
STEP: delete the pod
Jul  5 12:22:27.231: INFO: Waiting for pod pod-secrets-2de65c91-beba-11ea-9e48-0242ac110017 to disappear
Jul  5 12:22:27.235: INFO: Pod pod-secrets-2de65c91-beba-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:22:27.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-x57m6" for this suite.
Jul  5 12:22:33.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:22:33.271: INFO: namespace: e2e-tests-secrets-x57m6, resource: bindings, ignored listing per whitelist
Jul  5 12:22:33.333: INFO: namespace e2e-tests-secrets-x57m6 deletion completed in 6.095527711s

• [SLOW TEST:10.406 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:22:33.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Jul  5 12:22:33.439: INFO: Waiting up to 5m0s for pod "var-expansion-3407b430-beba-11ea-9e48-0242ac110017" in namespace "e2e-tests-var-expansion-wpr99" to be "success or failure"
Jul  5 12:22:33.502: INFO: Pod "var-expansion-3407b430-beba-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 62.701232ms
Jul  5 12:22:35.640: INFO: Pod "var-expansion-3407b430-beba-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200316276s
Jul  5 12:22:37.644: INFO: Pod "var-expansion-3407b430-beba-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.204478656s
STEP: Saw pod success
Jul  5 12:22:37.644: INFO: Pod "var-expansion-3407b430-beba-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 12:22:37.647: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-3407b430-beba-11ea-9e48-0242ac110017 container dapi-container: 
STEP: delete the pod
Jul  5 12:22:37.743: INFO: Waiting for pod var-expansion-3407b430-beba-11ea-9e48-0242ac110017 to disappear
Jul  5 12:22:37.755: INFO: Pod var-expansion-3407b430-beba-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:22:37.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-wpr99" for this suite.
Jul  5 12:22:43.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:22:43.841: INFO: namespace: e2e-tests-var-expansion-wpr99, resource: bindings, ignored listing per whitelist
Jul  5 12:22:43.865: INFO: namespace e2e-tests-var-expansion-wpr99 deletion completed in 6.10602876s

• [SLOW TEST:10.531 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:22:43.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0705 12:22:45.210420       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  5 12:22:45.210: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:22:45.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-s4bfv" for this suite.
Jul  5 12:22:51.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:22:51.318: INFO: namespace: e2e-tests-gc-s4bfv, resource: bindings, ignored listing per whitelist
Jul  5 12:22:51.348: INFO: namespace e2e-tests-gc-s4bfv deletion completed in 6.134395837s

• [SLOW TEST:7.483 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:22:51.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:22:55.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-db6gl" for this suite.
Jul  5 12:23:47.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:23:47.566: INFO: namespace: e2e-tests-kubelet-test-db6gl, resource: bindings, ignored listing per whitelist
Jul  5 12:23:47.577: INFO: namespace e2e-tests-kubelet-test-db6gl deletion completed in 52.093188924s

• [SLOW TEST:56.228 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:23:47.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:23:53.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-rk24x" for this suite.
Jul  5 12:23:59.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:23:59.937: INFO: namespace: e2e-tests-namespaces-rk24x, resource: bindings, ignored listing per whitelist
Jul  5 12:23:59.984: INFO: namespace e2e-tests-namespaces-rk24x deletion completed in 6.091419402s
STEP: Destroying namespace "e2e-tests-nsdeletetest-rdpsc" for this suite.
Jul  5 12:23:59.986: INFO: Namespace e2e-tests-nsdeletetest-rdpsc was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-4r6bd" for this suite.
Jul  5 12:24:06.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:24:06.062: INFO: namespace: e2e-tests-nsdeletetest-4r6bd, resource: bindings, ignored listing per whitelist
Jul  5 12:24:06.111: INFO: namespace e2e-tests-nsdeletetest-4r6bd deletion completed in 6.124752809s

• [SLOW TEST:18.534 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:24:06.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jul  5 12:24:06.305: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-7j6fg,SelfLink:/api/v1/namespaces/e2e-tests-watch-7j6fg/configmaps/e2e-watch-test-watch-closed,UID:6b60bd0b-beba-11ea-a300-0242ac110004,ResourceVersion:237441,Generation:0,CreationTimestamp:2020-07-05 12:24:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul  5 12:24:06.305: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-7j6fg,SelfLink:/api/v1/namespaces/e2e-tests-watch-7j6fg/configmaps/e2e-watch-test-watch-closed,UID:6b60bd0b-beba-11ea-a300-0242ac110004,ResourceVersion:237442,Generation:0,CreationTimestamp:2020-07-05 12:24:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jul  5 12:24:06.315: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-7j6fg,SelfLink:/api/v1/namespaces/e2e-tests-watch-7j6fg/configmaps/e2e-watch-test-watch-closed,UID:6b60bd0b-beba-11ea-a300-0242ac110004,ResourceVersion:237443,Generation:0,CreationTimestamp:2020-07-05 12:24:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul  5 12:24:06.315: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-7j6fg,SelfLink:/api/v1/namespaces/e2e-tests-watch-7j6fg/configmaps/e2e-watch-test-watch-closed,UID:6b60bd0b-beba-11ea-a300-0242ac110004,ResourceVersion:237444,Generation:0,CreationTimestamp:2020-07-05 12:24:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:24:06.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-7j6fg" for this suite.
Jul  5 12:24:12.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:24:12.408: INFO: namespace: e2e-tests-watch-7j6fg, resource: bindings, ignored listing per whitelist
Jul  5 12:24:12.415: INFO: namespace e2e-tests-watch-7j6fg deletion completed in 6.090304583s

• [SLOW TEST:6.304 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:24:12.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-6f140ad4-beba-11ea-9e48-0242ac110017
STEP: Creating a pod to test consume configMaps
Jul  5 12:24:12.545: INFO: Waiting up to 5m0s for pod "pod-configmaps-6f1a2196-beba-11ea-9e48-0242ac110017" in namespace "e2e-tests-configmap-fzghx" to be "success or failure"
Jul  5 12:24:12.562: INFO: Pod "pod-configmaps-6f1a2196-beba-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 16.471538ms
Jul  5 12:24:14.628: INFO: Pod "pod-configmaps-6f1a2196-beba-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083235849s
Jul  5 12:24:16.676: INFO: Pod "pod-configmaps-6f1a2196-beba-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.131174713s
STEP: Saw pod success
Jul  5 12:24:16.676: INFO: Pod "pod-configmaps-6f1a2196-beba-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 12:24:16.699: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-6f1a2196-beba-11ea-9e48-0242ac110017 container configmap-volume-test: 
STEP: delete the pod
Jul  5 12:24:16.712: INFO: Waiting for pod pod-configmaps-6f1a2196-beba-11ea-9e48-0242ac110017 to disappear
Jul  5 12:24:16.717: INFO: Pod pod-configmaps-6f1a2196-beba-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:24:16.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-fzghx" for this suite.
Jul  5 12:24:22.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:24:22.828: INFO: namespace: e2e-tests-configmap-fzghx, resource: bindings, ignored listing per whitelist
Jul  5 12:24:22.866: INFO: namespace e2e-tests-configmap-fzghx deletion completed in 6.145979128s

• [SLOW TEST:10.451 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:24:22.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-f7ldz
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-f7ldz
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-f7ldz
Jul  5 12:24:23.008: INFO: Found 0 stateful pods, waiting for 1
Jul  5 12:24:33.014: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jul  5 12:24:33.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f7ldz ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul  5 12:24:33.293: INFO: stderr: "I0705 12:24:33.173627    2782 log.go:172] (0xc000162fd0) (0xc000573860) Create stream\nI0705 12:24:33.173699    2782 log.go:172] (0xc000162fd0) (0xc000573860) Stream added, broadcasting: 1\nI0705 12:24:33.178214    2782 log.go:172] (0xc000162fd0) Reply frame received for 1\nI0705 12:24:33.178281    2782 log.go:172] (0xc000162fd0) (0xc000572be0) Create stream\nI0705 12:24:33.178306    2782 log.go:172] (0xc000162fd0) (0xc000572be0) Stream added, broadcasting: 3\nI0705 12:24:33.179179    2782 log.go:172] (0xc000162fd0) Reply frame received for 3\nI0705 12:24:33.179221    2782 log.go:172] (0xc000162fd0) (0xc00059c000) Create stream\nI0705 12:24:33.179232    2782 log.go:172] (0xc000162fd0) (0xc00059c000) Stream added, broadcasting: 5\nI0705 12:24:33.180186    2782 log.go:172] (0xc000162fd0) Reply frame received for 5\nI0705 12:24:33.286128    2782 log.go:172] (0xc000162fd0) Data frame received for 3\nI0705 12:24:33.286185    2782 log.go:172] (0xc000572be0) (3) Data frame handling\nI0705 12:24:33.286210    2782 log.go:172] (0xc000572be0) (3) Data frame sent\nI0705 12:24:33.286234    2782 log.go:172] (0xc000162fd0) Data frame received for 3\nI0705 12:24:33.286251    2782 log.go:172] (0xc000572be0) (3) Data frame handling\nI0705 12:24:33.286285    2782 log.go:172] (0xc000162fd0) Data frame received for 5\nI0705 12:24:33.286323    2782 log.go:172] (0xc00059c000) (5) Data frame handling\nI0705 12:24:33.288688    2782 log.go:172] (0xc000162fd0) Data frame received for 1\nI0705 12:24:33.288727    2782 log.go:172] (0xc000573860) (1) Data frame handling\nI0705 12:24:33.288757    2782 log.go:172] (0xc000573860) (1) Data frame sent\nI0705 12:24:33.288784    2782 log.go:172] (0xc000162fd0) (0xc000573860) Stream removed, broadcasting: 1\nI0705 12:24:33.288937    2782 log.go:172] (0xc000162fd0) Go away received\nI0705 12:24:33.288995    2782 log.go:172] (0xc000162fd0) (0xc000573860) Stream removed, broadcasting: 1\nI0705 12:24:33.289018    2782 log.go:172] (0xc000162fd0) (0xc000572be0) Stream removed, broadcasting: 3\nI0705 12:24:33.289031    2782 log.go:172] (0xc000162fd0) (0xc00059c000) Stream removed, broadcasting: 5\n"
Jul  5 12:24:33.293: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul  5 12:24:33.293: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul  5 12:24:33.297: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jul  5 12:24:43.302: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul  5 12:24:43.302: INFO: Waiting for statefulset status.replicas updated to 0
Jul  5 12:24:43.317: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jul  5 12:24:43.317: INFO: ss-0  hunter-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:23 +0000 UTC  }]
Jul  5 12:24:43.317: INFO: 
Jul  5 12:24:43.317: INFO: StatefulSet ss has not reached scale 3, at 1
Jul  5 12:24:44.348: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995517327s
Jul  5 12:24:45.353: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.964878189s
Jul  5 12:24:46.375: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.959892206s
Jul  5 12:24:47.379: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.937674051s
Jul  5 12:24:48.384: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.933481526s
Jul  5 12:24:49.390: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.928673469s
Jul  5 12:24:50.394: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.923207824s
Jul  5 12:24:51.399: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.919136939s
Jul  5 12:24:52.405: INFO: Verifying statefulset ss doesn't scale past 3 for another 913.523061ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-f7ldz
Jul  5 12:24:53.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f7ldz ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  5 12:24:53.672: INFO: stderr: "I0705 12:24:53.555746    2804 log.go:172] (0xc00083a2c0) (0xc00074e640) Create stream\nI0705 12:24:53.555801    2804 log.go:172] (0xc00083a2c0) (0xc00074e640) Stream added, broadcasting: 1\nI0705 12:24:53.558253    2804 log.go:172] (0xc00083a2c0) Reply frame received for 1\nI0705 12:24:53.558308    2804 log.go:172] (0xc00083a2c0) (0xc00074e6e0) Create stream\nI0705 12:24:53.558322    2804 log.go:172] (0xc00083a2c0) (0xc00074e6e0) Stream added, broadcasting: 3\nI0705 12:24:53.559275    2804 log.go:172] (0xc00083a2c0) Reply frame received for 3\nI0705 12:24:53.559311    2804 log.go:172] (0xc00083a2c0) (0xc0005dedc0) Create stream\nI0705 12:24:53.559322    2804 log.go:172] (0xc00083a2c0) (0xc0005dedc0) Stream added, broadcasting: 5\nI0705 12:24:53.560203    2804 log.go:172] (0xc00083a2c0) Reply frame received for 5\nI0705 12:24:53.665554    2804 log.go:172] (0xc00083a2c0) Data frame received for 5\nI0705 12:24:53.665610    2804 log.go:172] (0xc0005dedc0) (5) Data frame handling\nI0705 12:24:53.665651    2804 log.go:172] (0xc00083a2c0) Data frame received for 3\nI0705 12:24:53.665670    2804 log.go:172] (0xc00074e6e0) (3) Data frame handling\nI0705 12:24:53.665697    2804 log.go:172] (0xc00074e6e0) (3) Data frame sent\nI0705 12:24:53.665713    2804 log.go:172] (0xc00083a2c0) Data frame received for 3\nI0705 12:24:53.665730    2804 log.go:172] (0xc00074e6e0) (3) Data frame handling\nI0705 12:24:53.667580    2804 log.go:172] (0xc00083a2c0) Data frame received for 1\nI0705 12:24:53.667619    2804 log.go:172] (0xc00074e640) (1) Data frame handling\nI0705 12:24:53.667637    2804 log.go:172] (0xc00074e640) (1) Data frame sent\nI0705 12:24:53.667660    2804 log.go:172] (0xc00083a2c0) (0xc00074e640) Stream removed, broadcasting: 1\nI0705 12:24:53.667741    2804 log.go:172] (0xc00083a2c0) Go away received\nI0705 12:24:53.667861    2804 log.go:172] (0xc00083a2c0) (0xc00074e640) Stream removed, broadcasting: 1\nI0705 12:24:53.667879    2804 
log.go:172] (0xc00083a2c0) (0xc00074e6e0) Stream removed, broadcasting: 3\nI0705 12:24:53.667890    2804 log.go:172] (0xc00083a2c0) (0xc0005dedc0) Stream removed, broadcasting: 5\n"
Jul  5 12:24:53.672: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul  5 12:24:53.672: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul  5 12:24:53.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f7ldz ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  5 12:24:53.869: INFO: stderr: "I0705 12:24:53.797569    2826 log.go:172] (0xc000798160) (0xc0005da640) Create stream\nI0705 12:24:53.797669    2826 log.go:172] (0xc000798160) (0xc0005da640) Stream added, broadcasting: 1\nI0705 12:24:53.800570    2826 log.go:172] (0xc000798160) Reply frame received for 1\nI0705 12:24:53.800615    2826 log.go:172] (0xc000798160) (0xc0005fa000) Create stream\nI0705 12:24:53.800629    2826 log.go:172] (0xc000798160) (0xc0005fa000) Stream added, broadcasting: 3\nI0705 12:24:53.802057    2826 log.go:172] (0xc000798160) Reply frame received for 3\nI0705 12:24:53.802093    2826 log.go:172] (0xc000798160) (0xc000120dc0) Create stream\nI0705 12:24:53.802117    2826 log.go:172] (0xc000798160) (0xc000120dc0) Stream added, broadcasting: 5\nI0705 12:24:53.802938    2826 log.go:172] (0xc000798160) Reply frame received for 5\nI0705 12:24:53.862439    2826 log.go:172] (0xc000798160) Data frame received for 5\nI0705 12:24:53.862638    2826 log.go:172] (0xc000120dc0) (5) Data frame handling\nI0705 12:24:53.862664    2826 log.go:172] (0xc000120dc0) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0705 12:24:53.862686    2826 log.go:172] (0xc000798160) Data frame received for 3\nI0705 12:24:53.862702    2826 log.go:172] (0xc0005fa000) (3) Data frame handling\nI0705 12:24:53.862733    2826 log.go:172] (0xc000798160) Data frame received for 5\nI0705 12:24:53.862789    2826 log.go:172] (0xc000120dc0) (5) Data frame handling\nI0705 12:24:53.862826    2826 log.go:172] (0xc0005fa000) (3) Data frame sent\nI0705 12:24:53.862843    2826 log.go:172] (0xc000798160) Data frame received for 3\nI0705 12:24:53.862859    2826 log.go:172] (0xc0005fa000) (3) Data frame handling\nI0705 12:24:53.864383    2826 log.go:172] (0xc000798160) Data frame received for 1\nI0705 12:24:53.864397    2826 log.go:172] (0xc0005da640) (1) Data frame handling\nI0705 12:24:53.864424    2826 log.go:172] (0xc0005da640) (1) Data frame sent\nI0705 
12:24:53.864435    2826 log.go:172] (0xc000798160) (0xc0005da640) Stream removed, broadcasting: 1\nI0705 12:24:53.864567    2826 log.go:172] (0xc000798160) (0xc0005da640) Stream removed, broadcasting: 1\nI0705 12:24:53.864583    2826 log.go:172] (0xc000798160) (0xc0005fa000) Stream removed, broadcasting: 3\nI0705 12:24:53.864651    2826 log.go:172] (0xc000798160) Go away received\nI0705 12:24:53.864700    2826 log.go:172] (0xc000798160) (0xc000120dc0) Stream removed, broadcasting: 5\n"
Jul  5 12:24:53.869: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul  5 12:24:53.869: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul  5 12:24:53.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f7ldz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  5 12:24:54.073: INFO: stderr: "I0705 12:24:53.999031    2848 log.go:172] (0xc000138580) (0xc0005fe640) Create stream\nI0705 12:24:53.999109    2848 log.go:172] (0xc000138580) (0xc0005fe640) Stream added, broadcasting: 1\nI0705 12:24:54.001927    2848 log.go:172] (0xc000138580) Reply frame received for 1\nI0705 12:24:54.001978    2848 log.go:172] (0xc000138580) (0xc0005fe6e0) Create stream\nI0705 12:24:54.001993    2848 log.go:172] (0xc000138580) (0xc0005fe6e0) Stream added, broadcasting: 3\nI0705 12:24:54.002976    2848 log.go:172] (0xc000138580) Reply frame received for 3\nI0705 12:24:54.003015    2848 log.go:172] (0xc000138580) (0xc0005fe780) Create stream\nI0705 12:24:54.003028    2848 log.go:172] (0xc000138580) (0xc0005fe780) Stream added, broadcasting: 5\nI0705 12:24:54.004096    2848 log.go:172] (0xc000138580) Reply frame received for 5\nI0705 12:24:54.068815    2848 log.go:172] (0xc000138580) Data frame received for 3\nI0705 12:24:54.068855    2848 log.go:172] (0xc0005fe6e0) (3) Data frame handling\nI0705 12:24:54.068884    2848 log.go:172] (0xc000138580) Data frame received for 5\nI0705 12:24:54.068919    2848 log.go:172] (0xc0005fe780) (5) Data frame handling\nI0705 12:24:54.068927    2848 log.go:172] (0xc0005fe780) (5) Data frame sent\nI0705 12:24:54.068933    2848 log.go:172] (0xc000138580) Data frame received for 5\nI0705 12:24:54.068936    2848 log.go:172] (0xc0005fe780) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0705 12:24:54.068954    2848 log.go:172] (0xc0005fe6e0) (3) Data frame sent\nI0705 12:24:54.068959    2848 log.go:172] (0xc000138580) Data frame received for 3\nI0705 12:24:54.068963    2848 log.go:172] (0xc0005fe6e0) (3) Data frame handling\nI0705 12:24:54.070333    2848 log.go:172] (0xc000138580) Data frame received for 1\nI0705 12:24:54.070351    2848 log.go:172] (0xc0005fe640) (1) Data frame handling\nI0705 12:24:54.070368    2848 log.go:172] (0xc0005fe640) (1) Data frame sent\nI0705 
12:24:54.070394    2848 log.go:172] (0xc000138580) (0xc0005fe640) Stream removed, broadcasting: 1\nI0705 12:24:54.070410    2848 log.go:172] (0xc000138580) Go away received\nI0705 12:24:54.070545    2848 log.go:172] (0xc000138580) (0xc0005fe640) Stream removed, broadcasting: 1\nI0705 12:24:54.070558    2848 log.go:172] (0xc000138580) (0xc0005fe6e0) Stream removed, broadcasting: 3\nI0705 12:24:54.070565    2848 log.go:172] (0xc000138580) (0xc0005fe780) Stream removed, broadcasting: 5\n"
Jul  5 12:24:54.073: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul  5 12:24:54.073: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

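The exec command uses `mv -v ... || true` so it is safe to run against every replica: on pods where `/tmp/index.html` has already been moved (ss-1 and ss-2 above print `mv: can't rename '/tmp/index.html': No such file or directory` on stderr), `mv` fails but `|| true` forces a zero exit status, so `kubectl exec` does not report an error. A cluster-free sketch of the same pattern (paths are illustrative):

```shell
# Demonstrate the idempotent `mv ... || true` pattern locally (no cluster needed).
demo=$(mktemp -d)
mkdir -p "$demo/html"
echo hello > "$demo/index.html"

mv -v "$demo/index.html" "$demo/html/" || true   # first run: file moves
mv -v "$demo/index.html" "$demo/html/" || true   # second run: mv fails, exit stays 0
echo "exit=$?"                                   # prints exit=0
test -f "$demo/html/index.html" && echo "moved"
rm -rf "$demo"
```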
Jul  5 12:24:54.077: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Jul  5 12:25:04.082: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  5 12:25:04.082: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  5 12:25:04.082: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jul  5 12:25:04.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f7ldz ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul  5 12:25:04.299: INFO: stderr: "I0705 12:25:04.214920    2871 log.go:172] (0xc000138840) (0xc0006212c0) Create stream\nI0705 12:25:04.214971    2871 log.go:172] (0xc000138840) (0xc0006212c0) Stream added, broadcasting: 1\nI0705 12:25:04.216744    2871 log.go:172] (0xc000138840) Reply frame received for 1\nI0705 12:25:04.216783    2871 log.go:172] (0xc000138840) (0xc000714000) Create stream\nI0705 12:25:04.216794    2871 log.go:172] (0xc000138840) (0xc000714000) Stream added, broadcasting: 3\nI0705 12:25:04.218108    2871 log.go:172] (0xc000138840) Reply frame received for 3\nI0705 12:25:04.218172    2871 log.go:172] (0xc000138840) (0xc0003fc000) Create stream\nI0705 12:25:04.218206    2871 log.go:172] (0xc000138840) (0xc0003fc000) Stream added, broadcasting: 5\nI0705 12:25:04.219079    2871 log.go:172] (0xc000138840) Reply frame received for 5\nI0705 12:25:04.292467    2871 log.go:172] (0xc000138840) Data frame received for 5\nI0705 12:25:04.292515    2871 log.go:172] (0xc0003fc000) (5) Data frame handling\nI0705 12:25:04.292554    2871 log.go:172] (0xc000138840) Data frame received for 3\nI0705 12:25:04.292580    2871 log.go:172] (0xc000714000) (3) Data frame handling\nI0705 12:25:04.292611    2871 log.go:172] (0xc000714000) (3) Data frame sent\nI0705 12:25:04.292636    2871 log.go:172] (0xc000138840) Data frame received for 3\nI0705 12:25:04.292648    2871 log.go:172] (0xc000714000) (3) Data frame handling\nI0705 12:25:04.294558    2871 log.go:172] (0xc000138840) Data frame received for 1\nI0705 12:25:04.294607    2871 log.go:172] (0xc0006212c0) (1) Data frame handling\nI0705 12:25:04.294627    2871 log.go:172] (0xc0006212c0) (1) Data frame sent\nI0705 12:25:04.294654    2871 log.go:172] (0xc000138840) (0xc0006212c0) Stream removed, broadcasting: 1\nI0705 12:25:04.294687    2871 log.go:172] (0xc000138840) Go away received\nI0705 12:25:04.294889    2871 log.go:172] (0xc000138840) (0xc0006212c0) Stream removed, broadcasting: 1\nI0705 12:25:04.294919    2871 
log.go:172] (0xc000138840) (0xc000714000) Stream removed, broadcasting: 3\nI0705 12:25:04.294933    2871 log.go:172] (0xc000138840) (0xc0003fc000) Stream removed, broadcasting: 5\n"
Jul  5 12:25:04.299: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul  5 12:25:04.299: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul  5 12:25:04.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f7ldz ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul  5 12:25:04.575: INFO: stderr: "I0705 12:25:04.423107    2893 log.go:172] (0xc000138790) (0xc0007a94a0) Create stream\nI0705 12:25:04.423155    2893 log.go:172] (0xc000138790) (0xc0007a94a0) Stream added, broadcasting: 1\nI0705 12:25:04.425668    2893 log.go:172] (0xc000138790) Reply frame received for 1\nI0705 12:25:04.425832    2893 log.go:172] (0xc000138790) (0xc000278000) Create stream\nI0705 12:25:04.425850    2893 log.go:172] (0xc000138790) (0xc000278000) Stream added, broadcasting: 3\nI0705 12:25:04.426831    2893 log.go:172] (0xc000138790) Reply frame received for 3\nI0705 12:25:04.426868    2893 log.go:172] (0xc000138790) (0xc0002c6000) Create stream\nI0705 12:25:04.426878    2893 log.go:172] (0xc000138790) (0xc0002c6000) Stream added, broadcasting: 5\nI0705 12:25:04.427877    2893 log.go:172] (0xc000138790) Reply frame received for 5\nI0705 12:25:04.569754    2893 log.go:172] (0xc000138790) Data frame received for 3\nI0705 12:25:04.569798    2893 log.go:172] (0xc000278000) (3) Data frame handling\nI0705 12:25:04.569834    2893 log.go:172] (0xc000278000) (3) Data frame sent\nI0705 12:25:04.569853    2893 log.go:172] (0xc000138790) Data frame received for 3\nI0705 12:25:04.569870    2893 log.go:172] (0xc000278000) (3) Data frame handling\nI0705 12:25:04.569934    2893 log.go:172] (0xc000138790) Data frame received for 5\nI0705 12:25:04.569956    2893 log.go:172] (0xc0002c6000) (5) Data frame handling\nI0705 12:25:04.571617    2893 log.go:172] (0xc000138790) Data frame received for 1\nI0705 12:25:04.571629    2893 log.go:172] (0xc0007a94a0) (1) Data frame handling\nI0705 12:25:04.571636    2893 log.go:172] (0xc0007a94a0) (1) Data frame sent\nI0705 12:25:04.571643    2893 log.go:172] (0xc000138790) (0xc0007a94a0) Stream removed, broadcasting: 1\nI0705 12:25:04.571710    2893 log.go:172] (0xc000138790) Go away received\nI0705 12:25:04.571791    2893 log.go:172] (0xc000138790) (0xc0007a94a0) Stream removed, broadcasting: 1\nI0705 12:25:04.571802    2893 
log.go:172] (0xc000138790) (0xc000278000) Stream removed, broadcasting: 3\nI0705 12:25:04.571808    2893 log.go:172] (0xc000138790) (0xc0002c6000) Stream removed, broadcasting: 5\n"
Jul  5 12:25:04.576: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul  5 12:25:04.576: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul  5 12:25:04.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f7ldz ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul  5 12:25:04.822: INFO: stderr: "I0705 12:25:04.706583    2915 log.go:172] (0xc000138840) (0xc000774640) Create stream\nI0705 12:25:04.706641    2915 log.go:172] (0xc000138840) (0xc000774640) Stream added, broadcasting: 1\nI0705 12:25:04.708626    2915 log.go:172] (0xc000138840) Reply frame received for 1\nI0705 12:25:04.708683    2915 log.go:172] (0xc000138840) (0xc0000eac80) Create stream\nI0705 12:25:04.708700    2915 log.go:172] (0xc000138840) (0xc0000eac80) Stream added, broadcasting: 3\nI0705 12:25:04.709721    2915 log.go:172] (0xc000138840) Reply frame received for 3\nI0705 12:25:04.709761    2915 log.go:172] (0xc000138840) (0xc0007746e0) Create stream\nI0705 12:25:04.709777    2915 log.go:172] (0xc000138840) (0xc0007746e0) Stream added, broadcasting: 5\nI0705 12:25:04.710588    2915 log.go:172] (0xc000138840) Reply frame received for 5\nI0705 12:25:04.815561    2915 log.go:172] (0xc000138840) Data frame received for 3\nI0705 12:25:04.815598    2915 log.go:172] (0xc0000eac80) (3) Data frame handling\nI0705 12:25:04.815618    2915 log.go:172] (0xc0000eac80) (3) Data frame sent\nI0705 12:25:04.815626    2915 log.go:172] (0xc000138840) Data frame received for 3\nI0705 12:25:04.815638    2915 log.go:172] (0xc0000eac80) (3) Data frame handling\nI0705 12:25:04.815660    2915 log.go:172] (0xc000138840) Data frame received for 5\nI0705 12:25:04.815682    2915 log.go:172] (0xc0007746e0) (5) Data frame handling\nI0705 12:25:04.818021    2915 log.go:172] (0xc000138840) Data frame received for 1\nI0705 12:25:04.818032    2915 log.go:172] (0xc000774640) (1) Data frame handling\nI0705 12:25:04.818038    2915 log.go:172] (0xc000774640) (1) Data frame sent\nI0705 12:25:04.818088    2915 log.go:172] (0xc000138840) (0xc000774640) Stream removed, broadcasting: 1\nI0705 12:25:04.818122    2915 log.go:172] (0xc000138840) Go away received\nI0705 12:25:04.818209    2915 log.go:172] (0xc000138840) (0xc000774640) Stream removed, broadcasting: 1\nI0705 12:25:04.818219    2915 
log.go:172] (0xc000138840) (0xc0000eac80) Stream removed, broadcasting: 3\nI0705 12:25:04.818225    2915 log.go:172] (0xc000138840) (0xc0007746e0) Stream removed, broadcasting: 5\n"
Jul  5 12:25:04.822: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul  5 12:25:04.822: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

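Each "Waiting for pod ... Ready=..." line below is driven by the pod's `Ready` condition in `status.conditions`. Outside the e2e framework, the same check can be approximated with `kubectl get pod ss-0 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'` (assumed invocation). A cluster-free sketch of the extraction, using a canned conditions object:

```shell
# Extract the Ready condition status from pod JSON, as `kubectl get pod -o json`
# would emit it. The JSON here is canned so the sketch runs without a cluster.
pod_json='{"status":{"conditions":[
  {"type":"Initialized","status":"True"},
  {"type":"Ready","status":"False","reason":"ContainersNotReady"}]}}'

ready=$(printf '%s' "$pod_json" \
  | python3 -c 'import json, sys
conds = json.load(sys.stdin)["status"]["conditions"]
print(next(c["status"] for c in conds if c["type"] == "Ready"))')
echo "Ready=$ready"   # prints Ready=False for this canned status
```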
Jul  5 12:25:04.822: INFO: Waiting for statefulset status.replicas updated to 0
Jul  5 12:25:04.825: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Jul  5 12:25:14.833: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul  5 12:25:14.833: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jul  5 12:25:14.833: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jul  5 12:25:14.846: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul  5 12:25:14.846: INFO: ss-0  hunter-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:23 +0000 UTC  }]
Jul  5 12:25:14.846: INFO: ss-1  hunter-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  }]
Jul  5 12:25:14.846: INFO: ss-2  hunter-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  }]
Jul  5 12:25:14.846: INFO: 
Jul  5 12:25:14.846: INFO: StatefulSet ss has not reached scale 0, at 3
Jul  5 12:25:15.857: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul  5 12:25:15.857: INFO: ss-0  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:23 +0000 UTC  }]
Jul  5 12:25:15.857: INFO: ss-1  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  }]
Jul  5 12:25:15.858: INFO: ss-2  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  }]
Jul  5 12:25:15.858: INFO: 
Jul  5 12:25:15.858: INFO: StatefulSet ss has not reached scale 0, at 3
Jul  5 12:25:16.862: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul  5 12:25:16.862: INFO: ss-0  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:23 +0000 UTC  }]
Jul  5 12:25:16.863: INFO: ss-1  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  }]
Jul  5 12:25:16.863: INFO: ss-2  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  }]
Jul  5 12:25:16.863: INFO: 
Jul  5 12:25:16.863: INFO: StatefulSet ss has not reached scale 0, at 3
Jul  5 12:25:17.868: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul  5 12:25:17.868: INFO: ss-0  hunter-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:23 +0000 UTC  }]
Jul  5 12:25:17.868: INFO: ss-1  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  }]
Jul  5 12:25:17.868: INFO: ss-2  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  }]
Jul  5 12:25:17.868: INFO: 
Jul  5 12:25:17.868: INFO: StatefulSet ss has not reached scale 0, at 3
Jul  5 12:25:18.874: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul  5 12:25:18.874: INFO: ss-0  hunter-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:23 +0000 UTC  }]
Jul  5 12:25:18.874: INFO: ss-1  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  }]
Jul  5 12:25:18.874: INFO: ss-2  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  }]
Jul  5 12:25:18.874: INFO: 
Jul  5 12:25:18.874: INFO: StatefulSet ss has not reached scale 0, at 3
Jul  5 12:25:19.880: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul  5 12:25:19.880: INFO: ss-0  hunter-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:23 +0000 UTC  }]
Jul  5 12:25:19.880: INFO: ss-1  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  }]
Jul  5 12:25:19.880: INFO: ss-2  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  }]
Jul  5 12:25:19.880: INFO: 
Jul  5 12:25:19.880: INFO: StatefulSet ss has not reached scale 0, at 3
Jul  5 12:25:20.886: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul  5 12:25:20.886: INFO: ss-0  hunter-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:23 +0000 UTC  }]
Jul  5 12:25:20.886: INFO: ss-1  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  }]
Jul  5 12:25:20.886: INFO: ss-2  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  }]
Jul  5 12:25:20.886: INFO: 
Jul  5 12:25:20.886: INFO: StatefulSet ss has not reached scale 0, at 3
Jul  5 12:25:21.891: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul  5 12:25:21.891: INFO: ss-0  hunter-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:23 +0000 UTC  }]
Jul  5 12:25:21.892: INFO: ss-1  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  }]
Jul  5 12:25:21.892: INFO: ss-2  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  }]
Jul  5 12:25:21.892: INFO: 
Jul  5 12:25:21.892: INFO: StatefulSet ss has not reached scale 0, at 3
Jul  5 12:25:22.898: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul  5 12:25:22.898: INFO: ss-0  hunter-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:23 +0000 UTC  }]
Jul  5 12:25:22.898: INFO: ss-1  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  }]
Jul  5 12:25:22.898: INFO: ss-2  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:25:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:24:43 +0000 UTC  }]
Jul  5 12:25:22.898: INFO: 
Jul  5 12:25:22.898: INFO: StatefulSet ss has not reached scale 0, at 3
Jul  5 12:25:23.902: INFO: Verifying statefulset ss doesn't scale past 0 for another 941.927139ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-f7ldz
Jul  5 12:25:24.905: INFO: Scaling statefulset ss to 0
Jul  5 12:25:24.915: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jul  5 12:25:24.918: INFO: Deleting all statefulset in ns e2e-tests-statefulset-f7ldz
Jul  5 12:25:24.920: INFO: Scaling statefulset ss to 0
Jul  5 12:25:24.928: INFO: Waiting for statefulset status.replicas updated to 0
Jul  5 12:25:24.930: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:25:24.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-f7ldz" for this suite.
Jul  5 12:25:30.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:25:31.018: INFO: namespace: e2e-tests-statefulset-f7ldz, resource: bindings, ignored listing per whitelist
Jul  5 12:25:31.058: INFO: namespace e2e-tests-statefulset-f7ldz deletion completed in 6.091808008s

• [SLOW TEST:68.192 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:25:31.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  5 12:25:31.165: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9df5267f-beba-11ea-9e48-0242ac110017" in namespace "e2e-tests-projected-txl8p" to be "success or failure"
Jul  5 12:25:31.181: INFO: Pod "downwardapi-volume-9df5267f-beba-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 15.135367ms
Jul  5 12:25:33.188: INFO: Pod "downwardapi-volume-9df5267f-beba-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022352825s
Jul  5 12:25:35.192: INFO: Pod "downwardapi-volume-9df5267f-beba-11ea-9e48-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.02680425s
Jul  5 12:25:37.197: INFO: Pod "downwardapi-volume-9df5267f-beba-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031138996s
STEP: Saw pod success
Jul  5 12:25:37.197: INFO: Pod "downwardapi-volume-9df5267f-beba-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 12:25:37.200: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-9df5267f-beba-11ea-9e48-0242ac110017 container client-container: 
STEP: delete the pod
Jul  5 12:25:37.254: INFO: Waiting for pod downwardapi-volume-9df5267f-beba-11ea-9e48-0242ac110017 to disappear
Jul  5 12:25:37.258: INFO: Pod downwardapi-volume-9df5267f-beba-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:25:37.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-txl8p" for this suite.
Jul  5 12:25:43.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:25:43.329: INFO: namespace: e2e-tests-projected-txl8p, resource: bindings, ignored listing per whitelist
Jul  5 12:25:43.371: INFO: namespace e2e-tests-projected-txl8p deletion completed in 6.109450018s

• [SLOW TEST:12.313 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:25:43.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  5 12:25:43.500: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a54cf806-beba-11ea-9e48-0242ac110017" in namespace "e2e-tests-projected-b9xk9" to be "success or failure"
Jul  5 12:25:43.574: INFO: Pod "downwardapi-volume-a54cf806-beba-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 73.533469ms
Jul  5 12:25:45.619: INFO: Pod "downwardapi-volume-a54cf806-beba-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118808016s
Jul  5 12:25:47.624: INFO: Pod "downwardapi-volume-a54cf806-beba-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.123761289s
STEP: Saw pod success
Jul  5 12:25:47.624: INFO: Pod "downwardapi-volume-a54cf806-beba-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 12:25:47.627: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-a54cf806-beba-11ea-9e48-0242ac110017 container client-container: 
STEP: delete the pod
Jul  5 12:25:47.663: INFO: Waiting for pod downwardapi-volume-a54cf806-beba-11ea-9e48-0242ac110017 to disappear
Jul  5 12:25:47.669: INFO: Pod downwardapi-volume-a54cf806-beba-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:25:47.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-b9xk9" for this suite.
Jul  5 12:25:53.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:25:53.794: INFO: namespace: e2e-tests-projected-b9xk9, resource: bindings, ignored listing per whitelist
Jul  5 12:25:53.804: INFO: namespace e2e-tests-projected-b9xk9 deletion completed in 6.13081991s

• [SLOW TEST:10.433 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:25:53.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul  5 12:25:53.960: INFO: Waiting up to 5m0s for pod "pod-ab8ae2a3-beba-11ea-9e48-0242ac110017" in namespace "e2e-tests-emptydir-ndd9s" to be "success or failure"
Jul  5 12:25:53.962: INFO: Pod "pod-ab8ae2a3-beba-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.64642ms
Jul  5 12:25:55.967: INFO: Pod "pod-ab8ae2a3-beba-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007132431s
Jul  5 12:25:57.970: INFO: Pod "pod-ab8ae2a3-beba-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010451663s
STEP: Saw pod success
Jul  5 12:25:57.970: INFO: Pod "pod-ab8ae2a3-beba-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 12:25:57.972: INFO: Trying to get logs from node hunter-worker2 pod pod-ab8ae2a3-beba-11ea-9e48-0242ac110017 container test-container: 
STEP: delete the pod
Jul  5 12:25:58.020: INFO: Waiting for pod pod-ab8ae2a3-beba-11ea-9e48-0242ac110017 to disappear
Jul  5 12:25:58.032: INFO: Pod pod-ab8ae2a3-beba-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:25:58.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ndd9s" for this suite.
Jul  5 12:26:04.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:26:04.092: INFO: namespace: e2e-tests-emptydir-ndd9s, resource: bindings, ignored listing per whitelist
Jul  5 12:26:04.118: INFO: namespace e2e-tests-emptydir-ndd9s deletion completed in 6.08149318s

• [SLOW TEST:10.314 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:26:04.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul  5 12:26:04.249: INFO: Waiting up to 5m0s for pod "pod-b1aa18d6-beba-11ea-9e48-0242ac110017" in namespace "e2e-tests-emptydir-bwjps" to be "success or failure"
Jul  5 12:26:04.254: INFO: Pod "pod-b1aa18d6-beba-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 5.662516ms
Jul  5 12:26:06.258: INFO: Pod "pod-b1aa18d6-beba-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009888338s
Jul  5 12:26:08.263: INFO: Pod "pod-b1aa18d6-beba-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014144007s
STEP: Saw pod success
Jul  5 12:26:08.263: INFO: Pod "pod-b1aa18d6-beba-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 12:26:08.266: INFO: Trying to get logs from node hunter-worker pod pod-b1aa18d6-beba-11ea-9e48-0242ac110017 container test-container: 
STEP: delete the pod
Jul  5 12:26:08.303: INFO: Waiting for pod pod-b1aa18d6-beba-11ea-9e48-0242ac110017 to disappear
Jul  5 12:26:08.366: INFO: Pod pod-b1aa18d6-beba-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:26:08.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-bwjps" for this suite.
Jul  5 12:26:14.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:26:14.454: INFO: namespace: e2e-tests-emptydir-bwjps, resource: bindings, ignored listing per whitelist
Jul  5 12:26:14.493: INFO: namespace e2e-tests-emptydir-bwjps deletion completed in 6.122663472s

• [SLOW TEST:10.374 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:26:14.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Jul  5 12:26:14.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-vqdm7 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jul  5 12:26:20.553: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0705 12:26:20.491811    2938 log.go:172] (0xc000a5c0b0) (0xc000a70000) Create stream\nI0705 12:26:20.491840    2938 log.go:172] (0xc000a5c0b0) (0xc000a70000) Stream added, broadcasting: 1\nI0705 12:26:20.496262    2938 log.go:172] (0xc000a5c0b0) Reply frame received for 1\nI0705 12:26:20.496305    2938 log.go:172] (0xc000a5c0b0) (0xc00043e6e0) Create stream\nI0705 12:26:20.496316    2938 log.go:172] (0xc000a5c0b0) (0xc00043e6e0) Stream added, broadcasting: 3\nI0705 12:26:20.497628    2938 log.go:172] (0xc000a5c0b0) Reply frame received for 3\nI0705 12:26:20.497686    2938 log.go:172] (0xc000a5c0b0) (0xc00043e780) Create stream\nI0705 12:26:20.497705    2938 log.go:172] (0xc000a5c0b0) (0xc00043e780) Stream added, broadcasting: 5\nI0705 12:26:20.498492    2938 log.go:172] (0xc000a5c0b0) Reply frame received for 5\nI0705 12:26:20.498522    2938 log.go:172] (0xc000a5c0b0) (0xc000a700a0) Create stream\nI0705 12:26:20.498532    2938 log.go:172] (0xc000a5c0b0) (0xc000a700a0) Stream added, broadcasting: 7\nI0705 12:26:20.499367    2938 log.go:172] (0xc000a5c0b0) Reply frame received for 7\nI0705 12:26:20.499481    2938 log.go:172] (0xc00043e6e0) (3) Writing data frame\nI0705 12:26:20.499563    2938 log.go:172] (0xc00043e6e0) (3) Writing data frame\nI0705 12:26:20.500308    2938 log.go:172] (0xc000a5c0b0) Data frame received for 5\nI0705 12:26:20.500331    2938 log.go:172] (0xc00043e780) (5) Data frame handling\nI0705 12:26:20.500348    2938 log.go:172] (0xc00043e780) (5) Data frame sent\nI0705 12:26:20.501065    2938 log.go:172] (0xc000a5c0b0) Data frame received for 5\nI0705 12:26:20.501256    2938 log.go:172] (0xc00043e780) (5) Data frame handling\nI0705 12:26:20.501282    2938 log.go:172] (0xc00043e780) (5) Data frame sent\nI0705 12:26:20.526735    2938 log.go:172] (0xc000a5c0b0) Data frame received for 7\nI0705 12:26:20.526770    2938 log.go:172] (0xc000a700a0) (7) Data frame handling\nI0705 12:26:20.526948    2938 log.go:172] (0xc000a5c0b0) Data frame received for 5\nI0705 12:26:20.526985    2938 log.go:172] (0xc00043e780) (5) Data frame handling\nI0705 12:26:20.527573    2938 log.go:172] (0xc000a5c0b0) Data frame received for 1\nI0705 12:26:20.527596    2938 log.go:172] (0xc000a70000) (1) Data frame handling\nI0705 12:26:20.527609    2938 log.go:172] (0xc000a70000) (1) Data frame sent\nI0705 12:26:20.527639    2938 log.go:172] (0xc000a5c0b0) (0xc00043e6e0) Stream removed, broadcasting: 3\nI0705 12:26:20.527683    2938 log.go:172] (0xc000a5c0b0) (0xc000a70000) Stream removed, broadcasting: 1\nI0705 12:26:20.527726    2938 log.go:172] (0xc000a5c0b0) Go away received\nI0705 12:26:20.527941    2938 log.go:172] (0xc000a5c0b0) (0xc000a70000) Stream removed, broadcasting: 1\nI0705 12:26:20.527981    2938 log.go:172] (0xc000a5c0b0) (0xc00043e6e0) Stream removed, broadcasting: 3\nI0705 12:26:20.528003    2938 log.go:172] (0xc000a5c0b0) (0xc00043e780) Stream removed, broadcasting: 5\nI0705 12:26:20.528026    2938 log.go:172] (0xc000a5c0b0) (0xc000a700a0) Stream removed, broadcasting: 7\n"
Jul  5 12:26:20.553: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:26:22.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vqdm7" for this suite.
Jul  5 12:26:28.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:26:28.615: INFO: namespace: e2e-tests-kubectl-vqdm7, resource: bindings, ignored listing per whitelist
Jul  5 12:26:28.679: INFO: namespace e2e-tests-kubectl-vqdm7 deletion completed in 6.097105207s

• [SLOW TEST:14.186 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:26:28.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-c0501a41-beba-11ea-9e48-0242ac110017
STEP: Creating secret with name s-test-opt-upd-c050211d-beba-11ea-9e48-0242ac110017
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-c0501a41-beba-11ea-9e48-0242ac110017
STEP: Updating secret s-test-opt-upd-c050211d-beba-11ea-9e48-0242ac110017
STEP: Creating secret with name s-test-opt-create-c050216f-beba-11ea-9e48-0242ac110017
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:26:39.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pxc8f" for this suite.
Jul  5 12:27:01.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:27:01.089: INFO: namespace: e2e-tests-projected-pxc8f, resource: bindings, ignored listing per whitelist
Jul  5 12:27:01.162: INFO: namespace e2e-tests-projected-pxc8f deletion completed in 22.117133244s

• [SLOW TEST:32.483 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:27:01.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-d3aa3f88-beba-11ea-9e48-0242ac110017
STEP: Creating a pod to test consume configMaps
Jul  5 12:27:01.294: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d3af2c15-beba-11ea-9e48-0242ac110017" in namespace "e2e-tests-projected-ggdrk" to be "success or failure"
Jul  5 12:27:01.310: INFO: Pod "pod-projected-configmaps-d3af2c15-beba-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 15.344698ms
Jul  5 12:27:03.313: INFO: Pod "pod-projected-configmaps-d3af2c15-beba-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018677467s
Jul  5 12:27:05.379: INFO: Pod "pod-projected-configmaps-d3af2c15-beba-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084166425s
Jul  5 12:27:07.383: INFO: Pod "pod-projected-configmaps-d3af2c15-beba-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.088434087s
STEP: Saw pod success
Jul  5 12:27:07.383: INFO: Pod "pod-projected-configmaps-d3af2c15-beba-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 12:27:07.386: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-d3af2c15-beba-11ea-9e48-0242ac110017 container projected-configmap-volume-test: 
STEP: delete the pod
Jul  5 12:27:07.419: INFO: Waiting for pod pod-projected-configmaps-d3af2c15-beba-11ea-9e48-0242ac110017 to disappear
Jul  5 12:27:07.429: INFO: Pod pod-projected-configmaps-d3af2c15-beba-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:27:07.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ggdrk" for this suite.
Jul  5 12:27:13.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:27:13.502: INFO: namespace: e2e-tests-projected-ggdrk, resource: bindings, ignored listing per whitelist
Jul  5 12:27:13.518: INFO: namespace e2e-tests-projected-ggdrk deletion completed in 6.086071623s

• [SLOW TEST:12.356 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:27:13.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jul  5 12:27:18.226: INFO: Successfully updated pod "labelsupdatedb06fbb4-beba-11ea-9e48-0242ac110017"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:27:20.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-dj822" for this suite.
Jul  5 12:27:42.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:27:42.353: INFO: namespace: e2e-tests-downward-api-dj822, resource: bindings, ignored listing per whitelist
Jul  5 12:27:42.410: INFO: namespace e2e-tests-downward-api-dj822 deletion completed in 22.131780223s

• [SLOW TEST:28.891 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:27:42.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:27:42.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-sj6dg" for this suite.
Jul  5 12:27:48.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:27:48.708: INFO: namespace: e2e-tests-kubelet-test-sj6dg, resource: bindings, ignored listing per whitelist
Jul  5 12:27:48.746: INFO: namespace e2e-tests-kubelet-test-sj6dg deletion completed in 6.089955331s

• [SLOW TEST:6.335 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:27:48.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jul  5 12:27:49.370: INFO: Waiting up to 5m0s for pod "pod-service-account-f056bc2e-beba-11ea-9e48-0242ac110017-nns2n" in namespace "e2e-tests-svcaccounts-p6ftk" to be "success or failure"
Jul  5 12:27:49.384: INFO: Pod "pod-service-account-f056bc2e-beba-11ea-9e48-0242ac110017-nns2n": Phase="Pending", Reason="", readiness=false. Elapsed: 14.627921ms
Jul  5 12:27:51.565: INFO: Pod "pod-service-account-f056bc2e-beba-11ea-9e48-0242ac110017-nns2n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194927748s
Jul  5 12:27:53.697: INFO: Pod "pod-service-account-f056bc2e-beba-11ea-9e48-0242ac110017-nns2n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327248349s
Jul  5 12:27:55.702: INFO: Pod "pod-service-account-f056bc2e-beba-11ea-9e48-0242ac110017-nns2n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.332527541s
STEP: Saw pod success
Jul  5 12:27:55.702: INFO: Pod "pod-service-account-f056bc2e-beba-11ea-9e48-0242ac110017-nns2n" satisfied condition "success or failure"
Jul  5 12:27:55.732: INFO: Trying to get logs from node hunter-worker pod pod-service-account-f056bc2e-beba-11ea-9e48-0242ac110017-nns2n container token-test: 
STEP: delete the pod
Jul  5 12:27:55.772: INFO: Waiting for pod pod-service-account-f056bc2e-beba-11ea-9e48-0242ac110017-nns2n to disappear
Jul  5 12:27:55.786: INFO: Pod pod-service-account-f056bc2e-beba-11ea-9e48-0242ac110017-nns2n no longer exists
STEP: Creating a pod to test consume service account root CA
Jul  5 12:27:55.790: INFO: Waiting up to 5m0s for pod "pod-service-account-f056bc2e-beba-11ea-9e48-0242ac110017-9mz5n" in namespace "e2e-tests-svcaccounts-p6ftk" to be "success or failure"
Jul  5 12:27:55.810: INFO: Pod "pod-service-account-f056bc2e-beba-11ea-9e48-0242ac110017-9mz5n": Phase="Pending", Reason="", readiness=false. Elapsed: 20.476114ms
Jul  5 12:27:57.814: INFO: Pod "pod-service-account-f056bc2e-beba-11ea-9e48-0242ac110017-9mz5n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024448567s
Jul  5 12:27:59.818: INFO: Pod "pod-service-account-f056bc2e-beba-11ea-9e48-0242ac110017-9mz5n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028231025s
Jul  5 12:28:01.821: INFO: Pod "pod-service-account-f056bc2e-beba-11ea-9e48-0242ac110017-9mz5n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031161143s
STEP: Saw pod success
Jul  5 12:28:01.821: INFO: Pod "pod-service-account-f056bc2e-beba-11ea-9e48-0242ac110017-9mz5n" satisfied condition "success or failure"
Jul  5 12:28:01.823: INFO: Trying to get logs from node hunter-worker pod pod-service-account-f056bc2e-beba-11ea-9e48-0242ac110017-9mz5n container root-ca-test: 
STEP: delete the pod
Jul  5 12:28:02.009: INFO: Waiting for pod pod-service-account-f056bc2e-beba-11ea-9e48-0242ac110017-9mz5n to disappear
Jul  5 12:28:02.065: INFO: Pod pod-service-account-f056bc2e-beba-11ea-9e48-0242ac110017-9mz5n no longer exists
STEP: Creating a pod to test consume service account namespace
Jul  5 12:28:02.070: INFO: Waiting up to 5m0s for pod "pod-service-account-f056bc2e-beba-11ea-9e48-0242ac110017-zkshq" in namespace "e2e-tests-svcaccounts-p6ftk" to be "success or failure"
Jul  5 12:28:02.102: INFO: Pod "pod-service-account-f056bc2e-beba-11ea-9e48-0242ac110017-zkshq": Phase="Pending", Reason="", readiness=false. Elapsed: 31.962522ms
Jul  5 12:28:04.106: INFO: Pod "pod-service-account-f056bc2e-beba-11ea-9e48-0242ac110017-zkshq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036231316s
Jul  5 12:28:06.230: INFO: Pod "pod-service-account-f056bc2e-beba-11ea-9e48-0242ac110017-zkshq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.160086094s
Jul  5 12:28:08.295: INFO: Pod "pod-service-account-f056bc2e-beba-11ea-9e48-0242ac110017-zkshq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.225716428s
Jul  5 12:28:10.300: INFO: Pod "pod-service-account-f056bc2e-beba-11ea-9e48-0242ac110017-zkshq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.230566224s
STEP: Saw pod success
Jul  5 12:28:10.300: INFO: Pod "pod-service-account-f056bc2e-beba-11ea-9e48-0242ac110017-zkshq" satisfied condition "success or failure"
Jul  5 12:28:10.303: INFO: Trying to get logs from node hunter-worker pod pod-service-account-f056bc2e-beba-11ea-9e48-0242ac110017-zkshq container namespace-test: 
STEP: delete the pod
Jul  5 12:28:10.333: INFO: Waiting for pod pod-service-account-f056bc2e-beba-11ea-9e48-0242ac110017-zkshq to disappear
Jul  5 12:28:10.355: INFO: Pod pod-service-account-f056bc2e-beba-11ea-9e48-0242ac110017-zkshq no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:28:10.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-p6ftk" for this suite.
Jul  5 12:28:16.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:28:16.415: INFO: namespace: e2e-tests-svcaccounts-p6ftk, resource: bindings, ignored listing per whitelist
Jul  5 12:28:16.447: INFO: namespace e2e-tests-svcaccounts-p6ftk deletion completed in 6.088619526s

• [SLOW TEST:27.700 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:28:16.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  5 12:28:16.556: INFO: Creating deployment "test-recreate-deployment"
Jul  5 12:28:16.577: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Jul  5 12:28:16.603: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Jul  5 12:28:18.651: INFO: Waiting for deployment "test-recreate-deployment" to complete
Jul  5 12:28:18.653: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729548896, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729548896, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729548896, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729548896, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  5 12:28:20.657: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729548896, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729548896, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729548896, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729548896, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  5 12:28:22.657: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jul  5 12:28:22.665: INFO: Updating deployment test-recreate-deployment
Jul  5 12:28:22.665: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jul  5 12:28:23.485: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-xpwhf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xpwhf/deployments/test-recreate-deployment,UID:008c4c46-bebb-11ea-a300-0242ac110004,ResourceVersion:238499,Generation:2,CreationTimestamp:2020-07-05 12:28:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-07-05 12:28:23 +0000 UTC 2020-07-05 12:28:23 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-07-05 12:28:23 +0000 UTC 2020-07-05 12:28:16 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jul  5 12:28:23.510: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-xpwhf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xpwhf/replicasets/test-recreate-deployment-589c4bfd,UID:0456d921-bebb-11ea-a300-0242ac110004,ResourceVersion:238497,Generation:1,CreationTimestamp:2020-07-05 12:28:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 008c4c46-bebb-11ea-a300-0242ac110004 0xc002230b5f 0xc002230b70}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul  5 12:28:23.510: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jul  5 12:28:23.510: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-xpwhf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xpwhf/replicasets/test-recreate-deployment-5bf7f65dc,UID:009355e5-bebb-11ea-a300-0242ac110004,ResourceVersion:238487,Generation:2,CreationTimestamp:2020-07-05 12:28:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 008c4c46-bebb-11ea-a300-0242ac110004 0xc002230d00 0xc002230d01}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul  5 12:28:23.515: INFO: Pod "test-recreate-deployment-589c4bfd-vjfl5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-vjfl5,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-xpwhf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xpwhf/pods/test-recreate-deployment-589c4bfd-vjfl5,UID:045d204b-bebb-11ea-a300-0242ac110004,ResourceVersion:238500,Generation:0,CreationTimestamp:2020-07-05 12:28:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 0456d921-bebb-11ea-a300-0242ac110004 0xc0024c028f 0xc0024c0350}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gz9c2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gz9c2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gz9c2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0024c0430} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0024c0450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:28:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:28:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:28:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 12:28:23 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-07-05 12:28:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:28:23.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-xpwhf" for this suite.
Jul  5 12:28:31.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:28:31.566: INFO: namespace: e2e-tests-deployment-xpwhf, resource: bindings, ignored listing per whitelist
Jul  5 12:28:31.626: INFO: namespace e2e-tests-deployment-xpwhf deletion completed in 8.104629204s

• [SLOW TEST:15.179 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:28:31.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-5vkk
STEP: Creating a pod to test atomic-volume-subpath
Jul  5 12:28:31.780: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-5vkk" in namespace "e2e-tests-subpath-8qg5g" to be "success or failure"
Jul  5 12:28:31.793: INFO: Pod "pod-subpath-test-downwardapi-5vkk": Phase="Pending", Reason="", readiness=false. Elapsed: 12.65489ms
Jul  5 12:28:33.796: INFO: Pod "pod-subpath-test-downwardapi-5vkk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015644315s
Jul  5 12:28:35.828: INFO: Pod "pod-subpath-test-downwardapi-5vkk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047825031s
Jul  5 12:28:37.831: INFO: Pod "pod-subpath-test-downwardapi-5vkk": Phase="Running", Reason="", readiness=false. Elapsed: 6.050953572s
Jul  5 12:28:39.836: INFO: Pod "pod-subpath-test-downwardapi-5vkk": Phase="Running", Reason="", readiness=false. Elapsed: 8.055310402s
Jul  5 12:28:41.840: INFO: Pod "pod-subpath-test-downwardapi-5vkk": Phase="Running", Reason="", readiness=false. Elapsed: 10.059387905s
Jul  5 12:28:43.843: INFO: Pod "pod-subpath-test-downwardapi-5vkk": Phase="Running", Reason="", readiness=false. Elapsed: 12.063040387s
Jul  5 12:28:45.848: INFO: Pod "pod-subpath-test-downwardapi-5vkk": Phase="Running", Reason="", readiness=false. Elapsed: 14.067360547s
Jul  5 12:28:47.852: INFO: Pod "pod-subpath-test-downwardapi-5vkk": Phase="Running", Reason="", readiness=false. Elapsed: 16.071858253s
Jul  5 12:28:49.857: INFO: Pod "pod-subpath-test-downwardapi-5vkk": Phase="Running", Reason="", readiness=false. Elapsed: 18.076884356s
Jul  5 12:28:51.862: INFO: Pod "pod-subpath-test-downwardapi-5vkk": Phase="Running", Reason="", readiness=false. Elapsed: 20.081326906s
Jul  5 12:28:53.866: INFO: Pod "pod-subpath-test-downwardapi-5vkk": Phase="Running", Reason="", readiness=false. Elapsed: 22.086020933s
Jul  5 12:28:55.871: INFO: Pod "pod-subpath-test-downwardapi-5vkk": Phase="Running", Reason="", readiness=false. Elapsed: 24.090710807s
Jul  5 12:28:57.876: INFO: Pod "pod-subpath-test-downwardapi-5vkk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.095202212s
STEP: Saw pod success
Jul  5 12:28:57.876: INFO: Pod "pod-subpath-test-downwardapi-5vkk" satisfied condition "success or failure"
Jul  5 12:28:57.879: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-downwardapi-5vkk container test-container-subpath-downwardapi-5vkk: 
STEP: delete the pod
Jul  5 12:28:57.913: INFO: Waiting for pod pod-subpath-test-downwardapi-5vkk to disappear
Jul  5 12:28:57.924: INFO: Pod pod-subpath-test-downwardapi-5vkk no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-5vkk
Jul  5 12:28:57.925: INFO: Deleting pod "pod-subpath-test-downwardapi-5vkk" in namespace "e2e-tests-subpath-8qg5g"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:28:57.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-8qg5g" for this suite.
Jul  5 12:29:03.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:29:03.976: INFO: namespace: e2e-tests-subpath-8qg5g, resource: bindings, ignored listing per whitelist
Jul  5 12:29:04.018: INFO: namespace e2e-tests-subpath-8qg5g deletion completed in 6.086142461s

• [SLOW TEST:32.391 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:29:04.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jul  5 12:29:04.140: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:29:13.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-rmwkq" for this suite.
Jul  5 12:29:19.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:29:20.022: INFO: namespace: e2e-tests-init-container-rmwkq, resource: bindings, ignored listing per whitelist
Jul  5 12:29:20.035: INFO: namespace e2e-tests-init-container-rmwkq deletion completed in 6.128523471s

• [SLOW TEST:16.017 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:29:20.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-2673f627-bebb-11ea-9e48-0242ac110017
STEP: Creating a pod to test consume secrets
Jul  5 12:29:20.172: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2675fee6-bebb-11ea-9e48-0242ac110017" in namespace "e2e-tests-projected-vnjt9" to be "success or failure"
Jul  5 12:29:20.177: INFO: Pod "pod-projected-secrets-2675fee6-bebb-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.50976ms
Jul  5 12:29:22.181: INFO: Pod "pod-projected-secrets-2675fee6-bebb-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008762474s
Jul  5 12:29:24.260: INFO: Pod "pod-projected-secrets-2675fee6-bebb-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.087656815s
STEP: Saw pod success
Jul  5 12:29:24.260: INFO: Pod "pod-projected-secrets-2675fee6-bebb-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 12:29:24.263: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-2675fee6-bebb-11ea-9e48-0242ac110017 container projected-secret-volume-test: 
STEP: delete the pod
Jul  5 12:29:24.286: INFO: Waiting for pod pod-projected-secrets-2675fee6-bebb-11ea-9e48-0242ac110017 to disappear
Jul  5 12:29:24.290: INFO: Pod pod-projected-secrets-2675fee6-bebb-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:29:24.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vnjt9" for this suite.
Jul  5 12:29:30.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:29:30.384: INFO: namespace: e2e-tests-projected-vnjt9, resource: bindings, ignored listing per whitelist
Jul  5 12:29:30.391: INFO: namespace e2e-tests-projected-vnjt9 deletion completed in 6.098276071s

• [SLOW TEST:10.356 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:29:30.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul  5 12:29:30.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-xqv6v'
Jul  5 12:29:30.612: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul  5 12:29:30.612: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Jul  5 12:29:30.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-xqv6v'
Jul  5 12:29:30.739: INFO: stderr: ""
Jul  5 12:29:30.739: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:29:30.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xqv6v" for this suite.
Jul  5 12:29:36.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:29:36.844: INFO: namespace: e2e-tests-kubectl-xqv6v, resource: bindings, ignored listing per whitelist
Jul  5 12:29:36.862: INFO: namespace e2e-tests-kubectl-xqv6v deletion completed in 6.119599631s

• [SLOW TEST:6.470 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
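The stderr captured above warns that `kubectl run --generator=job/v1` is deprecated. A sketch of the non-deprecated equivalent the warning points at, assuming a kubectl recent enough to provide `kubectl create job` (the names match this run, but a live cluster is required):

```shell
# Create the same Job without the deprecated generator:
kubectl create job e2e-test-nginx-job \
  --image=docker.io/library/nginx:1.14-alpine \
  --namespace=e2e-tests-kubectl-xqv6v

# Verify and clean up, as the test's AfterEach does:
kubectl get jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-xqv6v
kubectl delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-xqv6v
```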
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:29:36.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  5 12:29:37.008: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jul  5 12:29:37.013: INFO: Number of nodes with available pods: 0
Jul  5 12:29:37.013: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jul  5 12:29:37.083: INFO: Number of nodes with available pods: 0
Jul  5 12:29:37.083: INFO: Node hunter-worker is running more than one daemon pod
Jul  5 12:29:38.088: INFO: Number of nodes with available pods: 0
Jul  5 12:29:38.088: INFO: Node hunter-worker is running more than one daemon pod
Jul  5 12:29:39.087: INFO: Number of nodes with available pods: 0
Jul  5 12:29:39.087: INFO: Node hunter-worker is running more than one daemon pod
Jul  5 12:29:40.957: INFO: Number of nodes with available pods: 1
Jul  5 12:29:40.957: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jul  5 12:29:42.202: INFO: Number of nodes with available pods: 1
Jul  5 12:29:42.202: INFO: Number of running nodes: 0, number of available pods: 1
Jul  5 12:29:43.207: INFO: Number of nodes with available pods: 0
Jul  5 12:29:43.207: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jul  5 12:29:43.220: INFO: Number of nodes with available pods: 0
Jul  5 12:29:43.220: INFO: Node hunter-worker is running more than one daemon pod
Jul  5 12:29:44.260: INFO: Number of nodes with available pods: 0
Jul  5 12:29:44.260: INFO: Node hunter-worker is running more than one daemon pod
Jul  5 12:29:45.224: INFO: Number of nodes with available pods: 0
Jul  5 12:29:45.224: INFO: Node hunter-worker is running more than one daemon pod
Jul  5 12:29:46.291: INFO: Number of nodes with available pods: 0
Jul  5 12:29:46.291: INFO: Node hunter-worker is running more than one daemon pod
Jul  5 12:29:47.224: INFO: Number of nodes with available pods: 0
Jul  5 12:29:47.224: INFO: Node hunter-worker is running more than one daemon pod
Jul  5 12:29:48.224: INFO: Number of nodes with available pods: 0
Jul  5 12:29:48.224: INFO: Node hunter-worker is running more than one daemon pod
Jul  5 12:29:49.224: INFO: Number of nodes with available pods: 0
Jul  5 12:29:49.224: INFO: Node hunter-worker is running more than one daemon pod
Jul  5 12:29:50.260: INFO: Number of nodes with available pods: 0
Jul  5 12:29:50.261: INFO: Node hunter-worker is running more than one daemon pod
Jul  5 12:29:51.224: INFO: Number of nodes with available pods: 0
Jul  5 12:29:51.224: INFO: Node hunter-worker is running more than one daemon pod
Jul  5 12:29:52.224: INFO: Number of nodes with available pods: 0
Jul  5 12:29:52.224: INFO: Node hunter-worker is running more than one daemon pod
Jul  5 12:29:53.224: INFO: Number of nodes with available pods: 0
Jul  5 12:29:53.224: INFO: Node hunter-worker is running more than one daemon pod
Jul  5 12:29:54.302: INFO: Number of nodes with available pods: 0
Jul  5 12:29:54.302: INFO: Node hunter-worker is running more than one daemon pod
Jul  5 12:29:55.224: INFO: Number of nodes with available pods: 0
Jul  5 12:29:55.224: INFO: Node hunter-worker is running more than one daemon pod
Jul  5 12:29:56.248: INFO: Number of nodes with available pods: 1
Jul  5 12:29:56.248: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-kmfwr, will wait for the garbage collector to delete the pods
Jul  5 12:29:56.311: INFO: Deleting DaemonSet.extensions daemon-set took: 5.908494ms
Jul  5 12:29:56.411: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.243974ms
Jul  5 12:30:03.815: INFO: Number of nodes with available pods: 0
Jul  5 12:30:03.815: INFO: Number of running nodes: 0, number of available pods: 0
Jul  5 12:30:03.818: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-kmfwr/daemonsets","resourceVersion":"238898"},"items":null}

Jul  5 12:30:03.820: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-kmfwr/pods","resourceVersion":"238898"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:30:03.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-kmfwr" for this suite.
Jul  5 12:30:09.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:30:09.943: INFO: namespace: e2e-tests-daemonsets-kmfwr, resource: bindings, ignored listing per whitelist
Jul  5 12:30:10.007: INFO: namespace e2e-tests-daemonsets-kmfwr deletion completed in 6.125967323s

• [SLOW TEST:33.146 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
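The DaemonSet test above drives scheduling purely by relabeling a node: a DaemonSet whose pod template carries a `nodeSelector` only places pods on matching nodes. A minimal sketch of that interaction (the label key/value and pod label are assumptions, not taken from the DaemonSet spec, and a live cluster is required):

```shell
# Labeling the node to match the DaemonSet's nodeSelector launches a pod there:
kubectl label node hunter-worker color=blue --overwrite
kubectl get pods -l name=daemon-set -o wide

# Relabeling the node so it no longer matches unschedules the daemon pod,
# which is the "blue -> green" transition observed in the log:
kubectl label node hunter-worker color=green --overwrite
```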
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:30:10.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-4w52t
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-4w52t
STEP: Deleting pre-stop pod
Jul  5 12:30:23.145: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:30:23.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-4w52t" for this suite.
Jul  5 12:31:01.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:31:01.306: INFO: namespace: e2e-tests-prestop-4w52t, resource: bindings, ignored listing per whitelist
Jul  5 12:31:01.316: INFO: namespace e2e-tests-prestop-4w52t deletion completed in 38.10220241s

• [SLOW TEST:51.308 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
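The PreStop test deletes a pod and checks that a server pod recorded `"prestop": 1` before termination. A minimal sketch of the hook that produces this behavior, assuming hypothetical pod and endpoint names (a live cluster is required); the kubelet runs the `preStop` handler before sending SIGTERM on deletion:

```shell
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      preStop:
        exec:
          # Notify a hypothetical server pod before this container stops.
          command: ["/bin/sh", "-c", "wget -q -O- http://server/prestop || true"]
EOF
```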
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:31:01.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Jul  5 12:31:01.420: INFO: Waiting up to 5m0s for pod "var-expansion-62ce8d78-bebb-11ea-9e48-0242ac110017" in namespace "e2e-tests-var-expansion-hj2ch" to be "success or failure"
Jul  5 12:31:01.424: INFO: Pod "var-expansion-62ce8d78-bebb-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.954599ms
Jul  5 12:31:03.428: INFO: Pod "var-expansion-62ce8d78-bebb-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008147126s
Jul  5 12:31:05.432: INFO: Pod "var-expansion-62ce8d78-bebb-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012008179s
STEP: Saw pod success
Jul  5 12:31:05.432: INFO: Pod "var-expansion-62ce8d78-bebb-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 12:31:05.435: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-62ce8d78-bebb-11ea-9e48-0242ac110017 container dapi-container: 
STEP: delete the pod
Jul  5 12:31:05.456: INFO: Waiting for pod var-expansion-62ce8d78-bebb-11ea-9e48-0242ac110017 to disappear
Jul  5 12:31:05.460: INFO: Pod var-expansion-62ce8d78-bebb-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:31:05.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-hj2ch" for this suite.
Jul  5 12:31:11.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:31:11.544: INFO: namespace: e2e-tests-var-expansion-hj2ch, resource: bindings, ignored listing per whitelist
Jul  5 12:31:11.570: INFO: namespace e2e-tests-var-expansion-hj2ch deletion completed in 6.107034483s

• [SLOW TEST:10.254 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
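The Variable Expansion test verifies that `$(VAR)` references in a container's command are expanded from its environment by the kubelet (before any shell runs). A sketch under assumed names (pod, env var, and image are hypothetical; a live cluster is required):

```shell
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    env:
    - name: MESSAGE
      value: "hello from the environment"
    # Kubernetes substitutes $(MESSAGE) in the command before the container starts.
    command: ["/bin/sh", "-c", "echo $(MESSAGE)"]
EOF
```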
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:31:11.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-f5p66
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-f5p66
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-f5p66
STEP: Waiting until pod test-pod starts running in namespace e2e-tests-statefulset-f5p66

STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-f5p66
Jul  5 12:31:17.740: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-f5p66, name: ss-0, uid: 6c781409-bebb-11ea-a300-0242ac110004, status phase: Pending. Waiting for statefulset controller to delete.
Jul  5 12:31:18.172: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-f5p66, name: ss-0, uid: 6c781409-bebb-11ea-a300-0242ac110004, status phase: Failed. Waiting for statefulset controller to delete.
Jul  5 12:31:18.210: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-f5p66, name: ss-0, uid: 6c781409-bebb-11ea-a300-0242ac110004, status phase: Failed. Waiting for statefulset controller to delete.
Jul  5 12:31:18.228: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-f5p66
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-f5p66
STEP: Waiting until stateful pod ss-0 is recreated in namespace e2e-tests-statefulset-f5p66 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jul  5 12:31:22.331: INFO: Deleting all statefulset in ns e2e-tests-statefulset-f5p66
Jul  5 12:31:22.335: INFO: Scaling statefulset ss to 0
Jul  5 12:31:32.350: INFO: Waiting for statefulset status.replicas updated to 0
Jul  5 12:31:32.353: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:31:32.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-f5p66" for this suite.
Jul  5 12:31:38.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:31:38.398: INFO: namespace: e2e-tests-statefulset-f5p66, resource: bindings, ignored listing per whitelist
Jul  5 12:31:38.454: INFO: namespace e2e-tests-statefulset-f5p66 deletion completed in 6.083106003s

• [SLOW TEST:26.884 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:31:38.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-78ff08f9-bebb-11ea-9e48-0242ac110017
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-78ff08f9-bebb-11ea-9e48-0242ac110017
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:31:44.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-2gbzw" for this suite.
Jul  5 12:32:06.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:32:06.748: INFO: namespace: e2e-tests-configmap-2gbzw, resource: bindings, ignored listing per whitelist
Jul  5 12:32:06.807: INFO: namespace e2e-tests-configmap-2gbzw deletion completed in 22.115587912s

• [SLOW TEST:28.353 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
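The ConfigMap test above mounts a ConfigMap as a volume, updates it, and waits for the change to appear in the mounted file; the kubelet refreshes ConfigMap volumes in place on its sync interval. An illustrative sketch with hypothetical names (a live cluster and an already-running pod mounting the ConfigMap are required):

```shell
kubectl create configmap configmap-test-upd --from-literal=data-1=value-1

# After a pod mounts this ConfigMap at /etc/configmap-volume, update it:
kubectl patch configmap configmap-test-upd \
  --type merge -p '{"data":{"data-1":"value-2"}}'

# Within the kubelet's sync period the mounted file reflects the new value:
# kubectl exec <pod-name> -- cat /etc/configmap-volume/data-1
```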
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:32:06.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matching label of one of its pods changes
Jul  5 12:32:06.928: INFO: Pod name pod-release: Found 0 pods out of 1
Jul  5 12:32:11.932: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:32:12.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-ptr4z" for this suite.
Jul  5 12:32:18.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:32:19.028: INFO: namespace: e2e-tests-replication-controller-ptr4z, resource: bindings, ignored listing per whitelist
Jul  5 12:32:19.058: INFO: namespace e2e-tests-replication-controller-ptr4z deletion completed in 6.087980554s

• [SLOW TEST:12.251 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
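"Releasing" a pod, as tested above, means changing its labels so the ReplicationController's selector no longer matches: the controller orphans the pod and spins up a replacement. A one-line sketch (the pod name and label are hypothetical; a live cluster is required):

```shell
# The RC selecting name=pod-release orphans this pod once the label changes:
kubectl label pod pod-release-abcde name=released --overwrite
```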
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:32:19.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul  5 12:32:19.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-9g65l'
Jul  5 12:32:19.361: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul  5 12:32:19.361: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Jul  5 12:32:21.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-9g65l'
Jul  5 12:32:21.666: INFO: stderr: ""
Jul  5 12:32:21.666: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:32:21.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9g65l" for this suite.
Jul  5 12:32:43.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:32:43.769: INFO: namespace: e2e-tests-kubectl-9g65l, resource: bindings, ignored listing per whitelist
Jul  5 12:32:43.783: INFO: namespace e2e-tests-kubectl-9g65l deletion completed in 22.093412736s

• [SLOW TEST:24.724 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
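As with the Job test earlier, the stderr above flags `kubectl run --generator=deployment/apps.v1` as deprecated. A sketch of the suggested replacement, assuming a kubectl recent enough to provide `kubectl create deployment` (namespace and name match this run; a live cluster is required):

```shell
kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine \
  --namespace=e2e-tests-kubectl-9g65l

kubectl delete deployment e2e-test-nginx-deployment \
  --namespace=e2e-tests-kubectl-9g65l
```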
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:32:43.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Jul  5 12:32:43.881: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jul  5 12:32:43.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-sqf46'
Jul  5 12:32:44.149: INFO: stderr: ""
Jul  5 12:32:44.149: INFO: stdout: "service/redis-slave created\n"
Jul  5 12:32:44.149: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jul  5 12:32:44.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-sqf46'
Jul  5 12:32:44.415: INFO: stderr: ""
Jul  5 12:32:44.415: INFO: stdout: "service/redis-master created\n"
Jul  5 12:32:44.415: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jul  5 12:32:44.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-sqf46'
Jul  5 12:32:44.753: INFO: stderr: ""
Jul  5 12:32:44.753: INFO: stdout: "service/frontend created\n"
Jul  5 12:32:44.753: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jul  5 12:32:44.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-sqf46'
Jul  5 12:32:45.020: INFO: stderr: ""
Jul  5 12:32:45.020: INFO: stdout: "deployment.extensions/frontend created\n"
Jul  5 12:32:45.020: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jul  5 12:32:45.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-sqf46'
Jul  5 12:32:45.354: INFO: stderr: ""
Jul  5 12:32:45.354: INFO: stdout: "deployment.extensions/redis-master created\n"
Jul  5 12:32:45.354: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jul  5 12:32:45.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-sqf46'
Jul  5 12:32:45.636: INFO: stderr: ""
Jul  5 12:32:45.636: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jul  5 12:32:45.636: INFO: Waiting for all frontend pods to be Running.
Jul  5 12:32:55.687: INFO: Waiting for frontend to serve content.
Jul  5 12:32:55.712: INFO: Trying to add a new entry to the guestbook.
Jul  5 12:32:55.732: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jul  5 12:32:55.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-sqf46'
Jul  5 12:32:55.925: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  5 12:32:55.925: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jul  5 12:32:55.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-sqf46'
Jul  5 12:32:56.070: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  5 12:32:56.071: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jul  5 12:32:56.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-sqf46'
Jul  5 12:32:56.232: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  5 12:32:56.233: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jul  5 12:32:56.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-sqf46'
Jul  5 12:32:56.347: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  5 12:32:56.347: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jul  5 12:32:56.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-sqf46'
Jul  5 12:32:56.472: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  5 12:32:56.472: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jul  5 12:32:56.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-sqf46'
Jul  5 12:32:56.611: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  5 12:32:56.611: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:32:56.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-sqf46" for this suite.
Jul  5 12:33:35.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:33:35.091: INFO: namespace: e2e-tests-kubectl-sqf46, resource: bindings, ignored listing per whitelist
Jul  5 12:33:35.158: INFO: namespace e2e-tests-kubectl-sqf46 deletion completed in 38.399132375s

• [SLOW TEST:51.374 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:33:35.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul  5 12:33:35.273: INFO: Waiting up to 5m0s for pod "pod-be80e038-bebb-11ea-9e48-0242ac110017" in namespace "e2e-tests-emptydir-grms5" to be "success or failure"
Jul  5 12:33:35.279: INFO: Pod "pod-be80e038-bebb-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 5.312901ms
Jul  5 12:33:37.303: INFO: Pod "pod-be80e038-bebb-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029185715s
Jul  5 12:33:39.314: INFO: Pod "pod-be80e038-bebb-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04018715s
STEP: Saw pod success
Jul  5 12:33:39.314: INFO: Pod "pod-be80e038-bebb-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 12:33:39.316: INFO: Trying to get logs from node hunter-worker pod pod-be80e038-bebb-11ea-9e48-0242ac110017 container test-container: 
STEP: delete the pod
Jul  5 12:33:39.351: INFO: Waiting for pod pod-be80e038-bebb-11ea-9e48-0242ac110017 to disappear
Jul  5 12:33:39.430: INFO: Pod pod-be80e038-bebb-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:33:39.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-grms5" for this suite.
Jul  5 12:33:45.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:33:45.458: INFO: namespace: e2e-tests-emptydir-grms5, resource: bindings, ignored listing per whitelist
Jul  5 12:33:45.526: INFO: namespace e2e-tests-emptydir-grms5 deletion completed in 6.092100431s

• [SLOW TEST:10.368 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:33:45.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  5 12:33:45.658: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c4b43d41-bebb-11ea-9e48-0242ac110017" in namespace "e2e-tests-projected-gvvgd" to be "success or failure"
Jul  5 12:33:45.662: INFO: Pod "downwardapi-volume-c4b43d41-bebb-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.356507ms
Jul  5 12:33:47.667: INFO: Pod "downwardapi-volume-c4b43d41-bebb-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008756769s
Jul  5 12:33:49.671: INFO: Pod "downwardapi-volume-c4b43d41-bebb-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012862933s
STEP: Saw pod success
Jul  5 12:33:49.671: INFO: Pod "downwardapi-volume-c4b43d41-bebb-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 12:33:49.674: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-c4b43d41-bebb-11ea-9e48-0242ac110017 container client-container: 
STEP: delete the pod
Jul  5 12:33:49.833: INFO: Waiting for pod downwardapi-volume-c4b43d41-bebb-11ea-9e48-0242ac110017 to disappear
Jul  5 12:33:49.890: INFO: Pod downwardapi-volume-c4b43d41-bebb-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:33:49.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gvvgd" for this suite.
Jul  5 12:33:55.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:33:55.983: INFO: namespace: e2e-tests-projected-gvvgd, resource: bindings, ignored listing per whitelist
Jul  5 12:33:56.034: INFO: namespace e2e-tests-projected-gvvgd deletion completed in 6.119438715s

• [SLOW TEST:10.508 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:33:56.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  5 12:33:56.179: INFO: Waiting up to 5m0s for pod "downwardapi-volume-caf998da-bebb-11ea-9e48-0242ac110017" in namespace "e2e-tests-downward-api-2l6vh" to be "success or failure"
Jul  5 12:33:56.189: INFO: Pod "downwardapi-volume-caf998da-bebb-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 9.962768ms
Jul  5 12:33:58.194: INFO: Pod "downwardapi-volume-caf998da-bebb-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014765855s
Jul  5 12:34:00.198: INFO: Pod "downwardapi-volume-caf998da-bebb-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018327211s
STEP: Saw pod success
Jul  5 12:34:00.198: INFO: Pod "downwardapi-volume-caf998da-bebb-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 12:34:00.200: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-caf998da-bebb-11ea-9e48-0242ac110017 container client-container: 
STEP: delete the pod
Jul  5 12:34:00.214: INFO: Waiting for pod downwardapi-volume-caf998da-bebb-11ea-9e48-0242ac110017 to disappear
Jul  5 12:34:00.244: INFO: Pod downwardapi-volume-caf998da-bebb-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:34:00.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-2l6vh" for this suite.
Jul  5 12:34:06.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:34:06.329: INFO: namespace: e2e-tests-downward-api-2l6vh, resource: bindings, ignored listing per whitelist
Jul  5 12:34:06.379: INFO: namespace e2e-tests-downward-api-2l6vh deletion completed in 6.131170033s

• [SLOW TEST:10.344 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 12:34:06.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jul  5 12:34:06.491: INFO: Waiting up to 5m0s for pod "downward-api-d11df366-bebb-11ea-9e48-0242ac110017" in namespace "e2e-tests-downward-api-7hbp6" to be "success or failure"
Jul  5 12:34:06.505: INFO: Pod "downward-api-d11df366-bebb-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 13.887717ms
Jul  5 12:34:08.664: INFO: Pod "downward-api-d11df366-bebb-11ea-9e48-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172610029s
Jul  5 12:34:10.668: INFO: Pod "downward-api-d11df366-bebb-11ea-9e48-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.176945823s
STEP: Saw pod success
Jul  5 12:34:10.668: INFO: Pod "downward-api-d11df366-bebb-11ea-9e48-0242ac110017" satisfied condition "success or failure"
Jul  5 12:34:10.672: INFO: Trying to get logs from node hunter-worker2 pod downward-api-d11df366-bebb-11ea-9e48-0242ac110017 container dapi-container: 
STEP: delete the pod
Jul  5 12:34:10.690: INFO: Waiting for pod downward-api-d11df366-bebb-11ea-9e48-0242ac110017 to disappear
Jul  5 12:34:10.723: INFO: Pod downward-api-d11df366-bebb-11ea-9e48-0242ac110017 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  5 12:34:10.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-7hbp6" for this suite.
Jul  5 12:34:16.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 12:34:16.883: INFO: namespace: e2e-tests-downward-api-7hbp6, resource: bindings, ignored listing per whitelist
Jul  5 12:34:16.890: INFO: namespace e2e-tests-downward-api-7hbp6 deletion completed in 6.162191526s

• [SLOW TEST:10.510 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
Jul  5 12:34:16.890: INFO: Running AfterSuite actions on all nodes
Jul  5 12:34:16.890: INFO: Running AfterSuite actions on node 1
Jul  5 12:34:16.890: INFO: Skipping dumping logs from cluster

Ran 200 of 2164 Specs in 6449.108 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS