I0519 10:46:43.873086 6 e2e.go:224] Starting e2e run "0748d997-99be-11ea-abcb-0242ac110018" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589885203 - Will randomize all specs
Will run 201 of 2164 specs
May 19 10:46:44.055: INFO: >>> kubeConfig: /root/.kube/config
May 19 10:46:44.058: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 19 10:46:44.073: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 19 10:46:44.107: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 19 10:46:44.107: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 19 10:46:44.107: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 19 10:46:44.114: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 19 10:46:44.114: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 19 10:46:44.114: INFO: e2e test version: v1.13.12
May 19 10:46:44.115: INFO: kube-apiserver version: v1.13.12
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 10:46:44.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
May 19 10:46:44.255: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-hmgc5
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
May 19 10:46:44.291: INFO: Found 0 stateful pods, waiting for 3
May 19 10:46:54.296: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 19 10:46:54.296: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 19 10:46:54.296: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
May 19 10:47:04.297: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 19 10:47:04.297: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 19 10:47:04.297: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
May 19 10:47:04.322: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
May 19 10:47:14.407: INFO: Updating stateful set ss2
May 19 10:47:14.414: INFO: Waiting for Pod e2e-tests-statefulset-hmgc5/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
May 19 10:47:24.562: INFO: Found 2 stateful pods, waiting for 3
May 19 10:47:34.566: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 19 10:47:34.567: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 19 10:47:34.567: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
May 19 10:47:34.590: INFO: Updating stateful set ss2
May 19 10:47:34.610: INFO: Waiting for Pod e2e-tests-statefulset-hmgc5/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
May 19 10:47:44.632: INFO: Updating stateful set ss2
May 19 10:47:44.674: INFO: Waiting for StatefulSet e2e-tests-statefulset-hmgc5/ss2 to complete update
May 19 10:47:44.674: INFO: Waiting for Pod e2e-tests-statefulset-hmgc5/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
May 19 10:47:54.714: INFO: Waiting for StatefulSet e2e-tests-statefulset-hmgc5/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
May 19 10:48:04.682: INFO: Deleting all statefulset in ns e2e-tests-statefulset-hmgc5
May 19 10:48:04.685: INFO: Scaling statefulset ss2 to 0
May 19 10:48:34.732: INFO: Waiting for statefulset status.replicas updated to 0
May 19 10:48:34.736: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 10:48:34.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-hmgc5" for this suite.
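The canary and phased updates driven above use the StatefulSet `RollingUpdate` partition: ordinals at or above the partition roll to the new revision, ordinals below it stay pinned. The test does this through the Go client; the following is only a hedged kubectl sketch of the same flow, assuming a 3-replica StatefulSet named `ss2` (as in this log) whose container is named `nginx`.

```shell
# Sketch only, not the test's own code. Assumes StatefulSet "ss2",
# 3 replicas, container "nginx" (names taken from the log above).

# Helper: merge patch that sets spec.updateStrategy.rollingUpdate.partition.
partition_patch() {
  printf '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":%d}}}}' "$1"
}

if command -v kubectl >/dev/null 2>&1; then
  # 1. Partition >= replicas: an update creates a new revision but rolls no pod.
  kubectl patch statefulset ss2 --type=merge -p "$(partition_patch 3)"
  kubectl set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine

  # 2. Canary: partition 2 updates only the highest ordinal, ss2-2.
  kubectl patch statefulset ss2 --type=merge -p "$(partition_patch 2)"

  # 3. Phased roll-out: walk the partition down, letting each phase settle.
  for p in 1 0; do
    kubectl patch statefulset ss2 --type=merge -p "$(partition_patch "$p")"
    kubectl rollout status statefulset/ss2 --timeout=2m
  done
fi
```

Deleted pods are recreated at whichever revision their ordinal's side of the partition dictates, which is what the "Restoring Pods to the correct revision when they are deleted" step above verifies.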
May 19 10:48:40.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 10:48:40.870: INFO: namespace: e2e-tests-statefulset-hmgc5, resource: bindings, ignored listing per whitelist
May 19 10:48:40.870: INFO: namespace e2e-tests-statefulset-hmgc5 deletion completed in 6.110407804s

• [SLOW TEST:116.755 seconds]
[sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 10:48:40.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-2kpbb
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-2kpbb STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-2kpbb May 19 10:48:40.988: INFO: Found 0 stateful pods, waiting for 1 May 19 10:48:50.993: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 19 10:48:50.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2kpbb ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 19 10:48:51.281: INFO: stderr: "I0519 10:48:51.143117 38 log.go:172] (0xc000138790) (0xc000738640) Create stream\nI0519 10:48:51.143184 38 log.go:172] (0xc000138790) (0xc000738640) Stream added, broadcasting: 1\nI0519 10:48:51.146638 38 log.go:172] (0xc000138790) Reply frame received for 1\nI0519 10:48:51.146700 38 log.go:172] (0xc000138790) (0xc00065ce60) Create stream\nI0519 10:48:51.146734 38 log.go:172] (0xc000138790) (0xc00065ce60) Stream added, broadcasting: 3\nI0519 10:48:51.147924 38 log.go:172] (0xc000138790) Reply frame received for 3\nI0519 10:48:51.147979 38 log.go:172] (0xc000138790) (0xc0007386e0) Create stream\nI0519 10:48:51.147994 38 log.go:172] (0xc000138790) (0xc0007386e0) Stream added, broadcasting: 5\nI0519 10:48:51.149252 38 log.go:172] (0xc000138790) Reply frame received for 5\nI0519 10:48:51.273016 38 log.go:172] (0xc000138790) Data frame received for 3\nI0519 10:48:51.273071 38 log.go:172] (0xc00065ce60) (3) Data frame handling\nI0519 10:48:51.273100 38 log.go:172] (0xc00065ce60) (3) Data frame sent\nI0519 10:48:51.273312 38 log.go:172] (0xc000138790) Data frame received for 3\nI0519 10:48:51.273341 38 log.go:172] (0xc00065ce60) (3) Data 
frame handling\nI0519 10:48:51.273388 38 log.go:172] (0xc000138790) Data frame received for 5\nI0519 10:48:51.273416 38 log.go:172] (0xc0007386e0) (5) Data frame handling\nI0519 10:48:51.275250 38 log.go:172] (0xc000138790) Data frame received for 1\nI0519 10:48:51.275288 38 log.go:172] (0xc000738640) (1) Data frame handling\nI0519 10:48:51.275308 38 log.go:172] (0xc000738640) (1) Data frame sent\nI0519 10:48:51.275347 38 log.go:172] (0xc000138790) (0xc000738640) Stream removed, broadcasting: 1\nI0519 10:48:51.275397 38 log.go:172] (0xc000138790) Go away received\nI0519 10:48:51.275629 38 log.go:172] (0xc000138790) (0xc000738640) Stream removed, broadcasting: 1\nI0519 10:48:51.275660 38 log.go:172] (0xc000138790) (0xc00065ce60) Stream removed, broadcasting: 3\nI0519 10:48:51.275676 38 log.go:172] (0xc000138790) (0xc0007386e0) Stream removed, broadcasting: 5\n" May 19 10:48:51.282: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 19 10:48:51.282: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 19 10:48:51.285: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 19 10:49:01.290: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 19 10:49:01.290: INFO: Waiting for statefulset status.replicas updated to 0 May 19 10:49:01.364: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999783s May 19 10:49:02.370: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.934406989s May 19 10:49:03.375: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.928968687s May 19 10:49:04.380: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.92368923s May 19 10:49:05.386: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.917207875s May 19 10:49:06.391: INFO: Verifying statefulset ss doesn't scale past 1 for another 
4.912619082s May 19 10:49:07.423: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.907476164s May 19 10:49:08.441: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.875848591s May 19 10:49:09.446: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.857679031s May 19 10:49:10.450: INFO: Verifying statefulset ss doesn't scale past 1 for another 853.067723ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-2kpbb May 19 10:49:11.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2kpbb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 19 10:49:11.679: INFO: stderr: "I0519 10:49:11.593943 59 log.go:172] (0xc00015c840) (0xc000665540) Create stream\nI0519 10:49:11.594002 59 log.go:172] (0xc00015c840) (0xc000665540) Stream added, broadcasting: 1\nI0519 10:49:11.596650 59 log.go:172] (0xc00015c840) Reply frame received for 1\nI0519 10:49:11.596708 59 log.go:172] (0xc00015c840) (0xc0006cc000) Create stream\nI0519 10:49:11.596723 59 log.go:172] (0xc00015c840) (0xc0006cc000) Stream added, broadcasting: 3\nI0519 10:49:11.598033 59 log.go:172] (0xc00015c840) Reply frame received for 3\nI0519 10:49:11.598067 59 log.go:172] (0xc00015c840) (0xc0006655e0) Create stream\nI0519 10:49:11.598078 59 log.go:172] (0xc00015c840) (0xc0006655e0) Stream added, broadcasting: 5\nI0519 10:49:11.599155 59 log.go:172] (0xc00015c840) Reply frame received for 5\nI0519 10:49:11.673382 59 log.go:172] (0xc00015c840) Data frame received for 3\nI0519 10:49:11.673431 59 log.go:172] (0xc0006cc000) (3) Data frame handling\nI0519 10:49:11.673450 59 log.go:172] (0xc0006cc000) (3) Data frame sent\nI0519 10:49:11.673460 59 log.go:172] (0xc00015c840) Data frame received for 3\nI0519 10:49:11.673467 59 log.go:172] (0xc0006cc000) (3) Data frame handling\nI0519 10:49:11.673501 59 log.go:172] 
(0xc00015c840) Data frame received for 5\nI0519 10:49:11.673523 59 log.go:172] (0xc0006655e0) (5) Data frame handling\nI0519 10:49:11.674951 59 log.go:172] (0xc00015c840) Data frame received for 1\nI0519 10:49:11.674977 59 log.go:172] (0xc000665540) (1) Data frame handling\nI0519 10:49:11.674989 59 log.go:172] (0xc000665540) (1) Data frame sent\nI0519 10:49:11.674999 59 log.go:172] (0xc00015c840) (0xc000665540) Stream removed, broadcasting: 1\nI0519 10:49:11.675020 59 log.go:172] (0xc00015c840) Go away received\nI0519 10:49:11.675230 59 log.go:172] (0xc00015c840) (0xc000665540) Stream removed, broadcasting: 1\nI0519 10:49:11.675247 59 log.go:172] (0xc00015c840) (0xc0006cc000) Stream removed, broadcasting: 3\nI0519 10:49:11.675253 59 log.go:172] (0xc00015c840) (0xc0006655e0) Stream removed, broadcasting: 5\n" May 19 10:49:11.679: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 19 10:49:11.679: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 19 10:49:11.683: INFO: Found 1 stateful pods, waiting for 3 May 19 10:49:21.688: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 19 10:49:21.688: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 19 10:49:21.688: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 19 10:49:21.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2kpbb ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 19 10:49:21.931: INFO: stderr: "I0519 10:49:21.825034 81 log.go:172] (0xc0008102c0) (0xc000722000) Create stream\nI0519 10:49:21.825088 81 log.go:172] (0xc0008102c0) (0xc000722000) Stream added, 
broadcasting: 1\nI0519 10:49:21.827610 81 log.go:172] (0xc0008102c0) Reply frame received for 1\nI0519 10:49:21.827677 81 log.go:172] (0xc0008102c0) (0xc0007bcc80) Create stream\nI0519 10:49:21.827698 81 log.go:172] (0xc0008102c0) (0xc0007bcc80) Stream added, broadcasting: 3\nI0519 10:49:21.828620 81 log.go:172] (0xc0008102c0) Reply frame received for 3\nI0519 10:49:21.828645 81 log.go:172] (0xc0008102c0) (0xc0005a6000) Create stream\nI0519 10:49:21.828652 81 log.go:172] (0xc0008102c0) (0xc0005a6000) Stream added, broadcasting: 5\nI0519 10:49:21.829597 81 log.go:172] (0xc0008102c0) Reply frame received for 5\nI0519 10:49:21.923829 81 log.go:172] (0xc0008102c0) Data frame received for 5\nI0519 10:49:21.923856 81 log.go:172] (0xc0005a6000) (5) Data frame handling\nI0519 10:49:21.923911 81 log.go:172] (0xc0008102c0) Data frame received for 3\nI0519 10:49:21.923943 81 log.go:172] (0xc0007bcc80) (3) Data frame handling\nI0519 10:49:21.923966 81 log.go:172] (0xc0007bcc80) (3) Data frame sent\nI0519 10:49:21.923981 81 log.go:172] (0xc0008102c0) Data frame received for 3\nI0519 10:49:21.923994 81 log.go:172] (0xc0007bcc80) (3) Data frame handling\nI0519 10:49:21.925531 81 log.go:172] (0xc0008102c0) Data frame received for 1\nI0519 10:49:21.925556 81 log.go:172] (0xc000722000) (1) Data frame handling\nI0519 10:49:21.925585 81 log.go:172] (0xc000722000) (1) Data frame sent\nI0519 10:49:21.925599 81 log.go:172] (0xc0008102c0) (0xc000722000) Stream removed, broadcasting: 1\nI0519 10:49:21.925624 81 log.go:172] (0xc0008102c0) Go away received\nI0519 10:49:21.925815 81 log.go:172] (0xc0008102c0) (0xc000722000) Stream removed, broadcasting: 1\nI0519 10:49:21.925832 81 log.go:172] (0xc0008102c0) (0xc0007bcc80) Stream removed, broadcasting: 3\nI0519 10:49:21.925843 81 log.go:172] (0xc0008102c0) (0xc0005a6000) Stream removed, broadcasting: 5\n" May 19 10:49:21.931: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 19 10:49:21.931: INFO: stdout of mv -v 
/usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 19 10:49:21.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2kpbb ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 19 10:49:22.189: INFO: stderr: "I0519 10:49:22.057628 104 log.go:172] (0xc0001380b0) (0xc0006b6000) Create stream\nI0519 10:49:22.057712 104 log.go:172] (0xc0001380b0) (0xc0006b6000) Stream added, broadcasting: 1\nI0519 10:49:22.060899 104 log.go:172] (0xc0001380b0) Reply frame received for 1\nI0519 10:49:22.060948 104 log.go:172] (0xc0001380b0) (0xc000626be0) Create stream\nI0519 10:49:22.060965 104 log.go:172] (0xc0001380b0) (0xc000626be0) Stream added, broadcasting: 3\nI0519 10:49:22.062258 104 log.go:172] (0xc0001380b0) Reply frame received for 3\nI0519 10:49:22.062307 104 log.go:172] (0xc0001380b0) (0xc000626d20) Create stream\nI0519 10:49:22.062332 104 log.go:172] (0xc0001380b0) (0xc000626d20) Stream added, broadcasting: 5\nI0519 10:49:22.063371 104 log.go:172] (0xc0001380b0) Reply frame received for 5\nI0519 10:49:22.181851 104 log.go:172] (0xc0001380b0) Data frame received for 5\nI0519 10:49:22.181912 104 log.go:172] (0xc000626d20) (5) Data frame handling\nI0519 10:49:22.181954 104 log.go:172] (0xc0001380b0) Data frame received for 3\nI0519 10:49:22.181966 104 log.go:172] (0xc000626be0) (3) Data frame handling\nI0519 10:49:22.181980 104 log.go:172] (0xc000626be0) (3) Data frame sent\nI0519 10:49:22.181993 104 log.go:172] (0xc0001380b0) Data frame received for 3\nI0519 10:49:22.182003 104 log.go:172] (0xc000626be0) (3) Data frame handling\nI0519 10:49:22.184007 104 log.go:172] (0xc0001380b0) Data frame received for 1\nI0519 10:49:22.184052 104 log.go:172] (0xc0006b6000) (1) Data frame handling\nI0519 10:49:22.184110 104 log.go:172] (0xc0006b6000) (1) Data frame sent\nI0519 10:49:22.184165 104 log.go:172] (0xc0001380b0) (0xc0006b6000) 
Stream removed, broadcasting: 1\nI0519 10:49:22.184200 104 log.go:172] (0xc0001380b0) Go away received\nI0519 10:49:22.184457 104 log.go:172] (0xc0001380b0) (0xc0006b6000) Stream removed, broadcasting: 1\nI0519 10:49:22.184488 104 log.go:172] (0xc0001380b0) (0xc000626be0) Stream removed, broadcasting: 3\nI0519 10:49:22.184504 104 log.go:172] (0xc0001380b0) (0xc000626d20) Stream removed, broadcasting: 5\n" May 19 10:49:22.189: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 19 10:49:22.189: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 19 10:49:22.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2kpbb ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 19 10:49:22.474: INFO: stderr: "I0519 10:49:22.326014 126 log.go:172] (0xc00076e160) (0xc0005ce280) Create stream\nI0519 10:49:22.326074 126 log.go:172] (0xc00076e160) (0xc0005ce280) Stream added, broadcasting: 1\nI0519 10:49:22.328209 126 log.go:172] (0xc00076e160) Reply frame received for 1\nI0519 10:49:22.328248 126 log.go:172] (0xc00076e160) (0xc000884500) Create stream\nI0519 10:49:22.328258 126 log.go:172] (0xc00076e160) (0xc000884500) Stream added, broadcasting: 3\nI0519 10:49:22.329078 126 log.go:172] (0xc00076e160) Reply frame received for 3\nI0519 10:49:22.329103 126 log.go:172] (0xc00076e160) (0xc0001aaaa0) Create stream\nI0519 10:49:22.329289 126 log.go:172] (0xc00076e160) (0xc0001aaaa0) Stream added, broadcasting: 5\nI0519 10:49:22.330259 126 log.go:172] (0xc00076e160) Reply frame received for 5\nI0519 10:49:22.465867 126 log.go:172] (0xc00076e160) Data frame received for 3\nI0519 10:49:22.465910 126 log.go:172] (0xc000884500) (3) Data frame handling\nI0519 10:49:22.465943 126 log.go:172] (0xc000884500) (3) Data frame sent\nI0519 10:49:22.466054 126 log.go:172] (0xc00076e160) Data frame received 
for 3\nI0519 10:49:22.466084 126 log.go:172] (0xc000884500) (3) Data frame handling\nI0519 10:49:22.466381 126 log.go:172] (0xc00076e160) Data frame received for 5\nI0519 10:49:22.466399 126 log.go:172] (0xc0001aaaa0) (5) Data frame handling\nI0519 10:49:22.468071 126 log.go:172] (0xc00076e160) Data frame received for 1\nI0519 10:49:22.468091 126 log.go:172] (0xc0005ce280) (1) Data frame handling\nI0519 10:49:22.468107 126 log.go:172] (0xc0005ce280) (1) Data frame sent\nI0519 10:49:22.468122 126 log.go:172] (0xc00076e160) (0xc0005ce280) Stream removed, broadcasting: 1\nI0519 10:49:22.468142 126 log.go:172] (0xc00076e160) Go away received\nI0519 10:49:22.468309 126 log.go:172] (0xc00076e160) (0xc0005ce280) Stream removed, broadcasting: 1\nI0519 10:49:22.468332 126 log.go:172] (0xc00076e160) (0xc000884500) Stream removed, broadcasting: 3\nI0519 10:49:22.468347 126 log.go:172] (0xc00076e160) (0xc0001aaaa0) Stream removed, broadcasting: 5\n" May 19 10:49:22.474: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 19 10:49:22.474: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 19 10:49:22.474: INFO: Waiting for statefulset status.replicas updated to 0 May 19 10:49:22.477: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 19 10:49:32.486: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 19 10:49:32.486: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 19 10:49:32.486: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 19 10:49:32.498: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999633s May 19 10:49:33.503: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994954291s May 19 10:49:34.671: INFO: Verifying statefulset ss doesn't scale past 3 for another 
7.990064932s
May 19 10:49:35.676: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.822487583s
May 19 10:49:36.682: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.817019588s
May 19 10:49:37.687: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.81142734s
May 19 10:49:38.693: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.806155994s
May 19 10:49:39.699: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.80036694s
May 19 10:49:40.704: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.794786866s
May 19 10:49:41.710: INFO: Verifying statefulset ss doesn't scale past 3 for another 789.154716ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-2kpbb
May 19 10:49:42.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2kpbb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 19 10:49:42.925: INFO: stderr: "I0519 10:49:42.853926 149 log.go:172] (0xc000138fd0) (0xc000457680) Create stream\nI0519 10:49:42.853989 149 log.go:172] (0xc000138fd0) (0xc000457680) Stream added, broadcasting: 1\nI0519 10:49:42.858341 149 log.go:172] (0xc000138fd0) Reply frame received for 1\nI0519 10:49:42.858389 149 log.go:172] (0xc000138fd0) (0xc0005ec000) Create stream\nI0519 10:49:42.858404 149 log.go:172] (0xc000138fd0) (0xc0005ec000) Stream added, broadcasting: 3\nI0519 10:49:42.859227 149 log.go:172] (0xc000138fd0) Reply frame received for 3\nI0519 10:49:42.859269 149 log.go:172] (0xc000138fd0) (0xc0005ec140) Create stream\nI0519 10:49:42.859279 149 log.go:172] (0xc000138fd0) (0xc0005ec140) Stream added, broadcasting: 5\nI0519 10:49:42.860016 149 log.go:172] (0xc000138fd0) Reply frame received for 5\nI0519 10:49:42.917481 149 log.go:172] (0xc000138fd0) Data frame received for 3\nI0519 10:49:42.917512 149 log.go:172] (0xc0005ec000) (3) 
Data frame handling\nI0519 10:49:42.917527 149 log.go:172] (0xc0005ec000) (3) Data frame sent\nI0519 10:49:42.917549 149 log.go:172] (0xc000138fd0) Data frame received for 5\nI0519 10:49:42.917593 149 log.go:172] (0xc0005ec140) (5) Data frame handling\nI0519 10:49:42.917619 149 log.go:172] (0xc000138fd0) Data frame received for 3\nI0519 10:49:42.917630 149 log.go:172] (0xc0005ec000) (3) Data frame handling\nI0519 10:49:42.919366 149 log.go:172] (0xc000138fd0) Data frame received for 1\nI0519 10:49:42.919389 149 log.go:172] (0xc000457680) (1) Data frame handling\nI0519 10:49:42.919403 149 log.go:172] (0xc000457680) (1) Data frame sent\nI0519 10:49:42.919427 149 log.go:172] (0xc000138fd0) (0xc000457680) Stream removed, broadcasting: 1\nI0519 10:49:42.919498 149 log.go:172] (0xc000138fd0) Go away received\nI0519 10:49:42.919721 149 log.go:172] (0xc000138fd0) (0xc000457680) Stream removed, broadcasting: 1\nI0519 10:49:42.919747 149 log.go:172] (0xc000138fd0) (0xc0005ec000) Stream removed, broadcasting: 3\nI0519 10:49:42.919770 149 log.go:172] (0xc000138fd0) (0xc0005ec140) Stream removed, broadcasting: 5\n" May 19 10:49:42.925: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 19 10:49:42.925: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 19 10:49:42.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2kpbb ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 19 10:49:43.131: INFO: stderr: "I0519 10:49:43.057813 172 log.go:172] (0xc000138580) (0xc0005f3400) Create stream\nI0519 10:49:43.057899 172 log.go:172] (0xc000138580) (0xc0005f3400) Stream added, broadcasting: 1\nI0519 10:49:43.060824 172 log.go:172] (0xc000138580) Reply frame received for 1\nI0519 10:49:43.060875 172 log.go:172] (0xc000138580) (0xc0006a6000) Create stream\nI0519 10:49:43.060890 172 log.go:172] 
(0xc000138580) (0xc0006a6000) Stream added, broadcasting: 3\nI0519 10:49:43.062040 172 log.go:172] (0xc000138580) Reply frame received for 3\nI0519 10:49:43.062075 172 log.go:172] (0xc000138580) (0xc0006a60a0) Create stream\nI0519 10:49:43.062089 172 log.go:172] (0xc000138580) (0xc0006a60a0) Stream added, broadcasting: 5\nI0519 10:49:43.063264 172 log.go:172] (0xc000138580) Reply frame received for 5\nI0519 10:49:43.123574 172 log.go:172] (0xc000138580) Data frame received for 3\nI0519 10:49:43.123613 172 log.go:172] (0xc0006a6000) (3) Data frame handling\nI0519 10:49:43.123630 172 log.go:172] (0xc0006a6000) (3) Data frame sent\nI0519 10:49:43.123643 172 log.go:172] (0xc000138580) Data frame received for 5\nI0519 10:49:43.123662 172 log.go:172] (0xc0006a60a0) (5) Data frame handling\nI0519 10:49:43.123710 172 log.go:172] (0xc000138580) Data frame received for 3\nI0519 10:49:43.123758 172 log.go:172] (0xc0006a6000) (3) Data frame handling\nI0519 10:49:43.125671 172 log.go:172] (0xc000138580) Data frame received for 1\nI0519 10:49:43.125703 172 log.go:172] (0xc0005f3400) (1) Data frame handling\nI0519 10:49:43.125733 172 log.go:172] (0xc0005f3400) (1) Data frame sent\nI0519 10:49:43.125762 172 log.go:172] (0xc000138580) (0xc0005f3400) Stream removed, broadcasting: 1\nI0519 10:49:43.125851 172 log.go:172] (0xc000138580) Go away received\nI0519 10:49:43.126010 172 log.go:172] (0xc000138580) (0xc0005f3400) Stream removed, broadcasting: 1\nI0519 10:49:43.126040 172 log.go:172] (0xc000138580) (0xc0006a6000) Stream removed, broadcasting: 3\nI0519 10:49:43.126063 172 log.go:172] (0xc000138580) (0xc0006a60a0) Stream removed, broadcasting: 5\n" May 19 10:49:43.131: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 19 10:49:43.131: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 19 10:49:43.131: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2kpbb ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 19 10:49:43.318: INFO: stderr: "I0519 10:49:43.251991 195 log.go:172] (0xc000324420) (0xc00061d2c0) Create stream\nI0519 10:49:43.252057 195 log.go:172] (0xc000324420) (0xc00061d2c0) Stream added, broadcasting: 1\nI0519 10:49:43.254269 195 log.go:172] (0xc000324420) Reply frame received for 1\nI0519 10:49:43.254318 195 log.go:172] (0xc000324420) (0xc000652000) Create stream\nI0519 10:49:43.254343 195 log.go:172] (0xc000324420) (0xc000652000) Stream added, broadcasting: 3\nI0519 10:49:43.255090 195 log.go:172] (0xc000324420) Reply frame received for 3\nI0519 10:49:43.255122 195 log.go:172] (0xc000324420) (0xc000590000) Create stream\nI0519 10:49:43.255135 195 log.go:172] (0xc000324420) (0xc000590000) Stream added, broadcasting: 5\nI0519 10:49:43.255919 195 log.go:172] (0xc000324420) Reply frame received for 5\nI0519 10:49:43.311471 195 log.go:172] (0xc000324420) Data frame received for 3\nI0519 10:49:43.311525 195 log.go:172] (0xc000652000) (3) Data frame handling\nI0519 10:49:43.311541 195 log.go:172] (0xc000652000) (3) Data frame sent\nI0519 10:49:43.311555 195 log.go:172] (0xc000324420) Data frame received for 3\nI0519 10:49:43.311563 195 log.go:172] (0xc000652000) (3) Data frame handling\nI0519 10:49:43.311618 195 log.go:172] (0xc000324420) Data frame received for 5\nI0519 10:49:43.311644 195 log.go:172] (0xc000590000) (5) Data frame handling\nI0519 10:49:43.312920 195 log.go:172] (0xc000324420) Data frame received for 1\nI0519 10:49:43.312938 195 log.go:172] (0xc00061d2c0) (1) Data frame handling\nI0519 10:49:43.312951 195 log.go:172] (0xc00061d2c0) (1) Data frame sent\nI0519 10:49:43.312962 195 log.go:172] (0xc000324420) (0xc00061d2c0) Stream removed, broadcasting: 1\nI0519 10:49:43.312977 195 log.go:172] (0xc000324420) Go away received\nI0519 10:49:43.313371 195 log.go:172] (0xc000324420) (0xc00061d2c0) 
Stream removed, broadcasting: 1\nI0519 10:49:43.313405 195 log.go:172] (0xc000324420) (0xc000652000) Stream removed, broadcasting: 3\nI0519 10:49:43.313418 195 log.go:172] (0xc000324420) (0xc000590000) Stream removed, broadcasting: 5\n"
May 19 10:49:43.318: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 19 10:49:43.318: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 19 10:49:43.318: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
May 19 10:50:13.355: INFO: Deleting all statefulset in ns e2e-tests-statefulset-2kpbb
May 19 10:50:13.360: INFO: Scaling statefulset ss to 0
May 19 10:50:13.375: INFO: Waiting for statefulset status.replicas updated to 0
May 19 10:50:13.377: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 10:50:13.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-2kpbb" for this suite.
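The halt behaviour verified above hinges entirely on the readiness probe: the test makes a pod unready by moving nginx's index.html out of the web root (so the probe starts failing), observes that ordered scaling stops at that ordinal, then moves the file back. A hedged kubectl sketch of that trick, using the namespace and pod names from this run:

```shell
# Sketch of the readiness trick from the log above (not the test's own code).
# The pod's probe serves /usr/share/nginx/html/index.html; removing it flips
# the pod to Ready=false, and ordered StatefulSet scaling halts at that pod.
NS=e2e-tests-statefulset-2kpbb

break_readiness() {    # $1 = pod name, e.g. ss-0
  kubectl exec --namespace="$NS" "$1" -- \
    /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
}

restore_readiness() {  # $1 = pod name
  kubectl exec --namespace="$NS" "$1" -- \
    /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
}

if command -v kubectl >/dev/null 2>&1; then
  break_readiness ss-0    # scale-up now halts: ss-1/ss-2 wait on ss-0 Ready
  restore_readiness ss-0  # probe passes again, ordered scaling resumes
fi
```

Scale-down runs in reverse ordinal order under the same rule, which is why the log shows all three pods made unready before verifying the set "was scaled down in reverse order".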
May 19 10:50:19.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 10:50:19.439: INFO: namespace: e2e-tests-statefulset-2kpbb, resource: bindings, ignored listing per whitelist
May 19 10:50:19.525: INFO: namespace e2e-tests-statefulset-2kpbb deletion completed in 6.133179191s
• [SLOW TEST:98.655 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 10:50:19.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 19 10:50:19.644: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
May 19 10:50:19.663: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 10:50:19.665: INFO: Number of nodes with available pods: 0 May 19 10:50:19.665: INFO: Node hunter-worker is running more than one daemon pod May 19 10:50:20.671: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 10:50:20.674: INFO: Number of nodes with available pods: 0 May 19 10:50:20.674: INFO: Node hunter-worker is running more than one daemon pod May 19 10:50:21.671: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 10:50:21.675: INFO: Number of nodes with available pods: 0 May 19 10:50:21.675: INFO: Node hunter-worker is running more than one daemon pod May 19 10:50:22.742: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 10:50:22.745: INFO: Number of nodes with available pods: 0 May 19 10:50:22.745: INFO: Node hunter-worker is running more than one daemon pod May 19 10:50:23.676: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 10:50:23.680: INFO: Number of nodes with available pods: 0 May 19 10:50:23.680: INFO: Node hunter-worker is running more than one daemon pod May 19 10:50:24.670: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 10:50:24.674: INFO: Number of nodes with available pods: 2 May 19 10:50:24.674: 
INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 19 10:50:24.704: INFO: Wrong image for pod: daemon-set-skj8c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 19 10:50:24.704: INFO: Wrong image for pod: daemon-set-twx5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 19 10:50:24.732: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 10:50:25.736: INFO: Wrong image for pod: daemon-set-skj8c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 19 10:50:25.736: INFO: Wrong image for pod: daemon-set-twx5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 19 10:50:25.740: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 10:50:26.736: INFO: Wrong image for pod: daemon-set-skj8c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 19 10:50:26.736: INFO: Wrong image for pod: daemon-set-twx5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 19 10:50:26.740: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 10:50:27.736: INFO: Wrong image for pod: daemon-set-skj8c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 19 10:50:27.736: INFO: Wrong image for pod: daemon-set-twx5k. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 19 10:50:27.743: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 10:50:28.737: INFO: Wrong image for pod: daemon-set-skj8c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 19 10:50:28.737: INFO: Pod daemon-set-skj8c is not available May 19 10:50:28.737: INFO: Wrong image for pod: daemon-set-twx5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 19 10:50:28.741: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 10:50:29.737: INFO: Wrong image for pod: daemon-set-skj8c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 19 10:50:29.737: INFO: Pod daemon-set-skj8c is not available May 19 10:50:29.737: INFO: Wrong image for pod: daemon-set-twx5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 19 10:50:29.742: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 10:50:30.736: INFO: Wrong image for pod: daemon-set-skj8c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 19 10:50:30.736: INFO: Pod daemon-set-skj8c is not available May 19 10:50:30.736: INFO: Wrong image for pod: daemon-set-twx5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 19 10:50:30.740: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 10:50:31.736: INFO: Pod daemon-set-qlpt6 is not available May 19 10:50:31.736: INFO: Wrong image for pod: daemon-set-twx5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 19 10:50:31.740: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 10:50:32.791: INFO: Pod daemon-set-qlpt6 is not available May 19 10:50:32.791: INFO: Wrong image for pod: daemon-set-twx5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 19 10:50:32.796: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 10:50:33.988: INFO: Pod daemon-set-qlpt6 is not available May 19 10:50:33.988: INFO: Wrong image for pod: daemon-set-twx5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 19 10:50:33.993: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 10:50:34.785: INFO: Pod daemon-set-qlpt6 is not available May 19 10:50:34.785: INFO: Wrong image for pod: daemon-set-twx5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 19 10:50:34.790: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 10:50:35.797: INFO: Wrong image for pod: daemon-set-twx5k. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 19 10:50:35.801: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 10:50:36.856: INFO: Wrong image for pod: daemon-set-twx5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 19 10:50:36.859: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 10:50:37.736: INFO: Wrong image for pod: daemon-set-twx5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 19 10:50:37.740: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 10:50:38.736: INFO: Wrong image for pod: daemon-set-twx5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 19 10:50:38.736: INFO: Pod daemon-set-twx5k is not available May 19 10:50:38.739: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 10:50:39.736: INFO: Wrong image for pod: daemon-set-twx5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 19 10:50:39.736: INFO: Pod daemon-set-twx5k is not available May 19 10:50:39.739: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 10:50:40.737: INFO: Wrong image for pod: daemon-set-twx5k. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 19 10:50:40.737: INFO: Pod daemon-set-twx5k is not available May 19 10:50:40.741: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 10:50:41.782: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 10:50:42.737: INFO: Pod daemon-set-pd569 is not available May 19 10:50:42.741: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. May 19 10:50:42.744: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 10:50:42.747: INFO: Number of nodes with available pods: 1 May 19 10:50:42.747: INFO: Node hunter-worker2 is running more than one daemon pod May 19 10:50:44.097: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 10:50:44.101: INFO: Number of nodes with available pods: 1 May 19 10:50:44.101: INFO: Node hunter-worker2 is running more than one daemon pod May 19 10:50:44.822: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 10:50:44.825: INFO: Number of nodes with available pods: 1 May 19 10:50:44.825: INFO: Node hunter-worker2 is running more than one daemon pod May 19 10:50:45.751: INFO: DaemonSet pods can't tolerate node 
hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 10:50:45.755: INFO: Number of nodes with available pods: 1 May 19 10:50:45.755: INFO: Node hunter-worker2 is running more than one daemon pod May 19 10:50:46.752: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 10:50:46.755: INFO: Number of nodes with available pods: 1 May 19 10:50:46.755: INFO: Node hunter-worker2 is running more than one daemon pod May 19 10:50:47.752: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 10:50:47.756: INFO: Number of nodes with available pods: 2 May 19 10:50:47.756: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-6gwgk, will wait for the garbage collector to delete the pods May 19 10:50:47.828: INFO: Deleting DaemonSet.extensions daemon-set took: 6.268071ms May 19 10:50:47.928: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.253955ms May 19 10:51:01.731: INFO: Number of nodes with available pods: 0 May 19 10:51:01.731: INFO: Number of running nodes: 0, number of available pods: 0 May 19 10:51:01.733: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-6gwgk/daemonsets","resourceVersion":"11384832"},"items":null} May 19 10:51:01.735: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-6gwgk/pods","resourceVersion":"11384832"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 10:51:01.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-6gwgk" for this suite. May 19 10:51:07.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 10:51:07.788: INFO: namespace: e2e-tests-daemonsets-6gwgk, resource: bindings, ignored listing per whitelist May 19 10:51:07.833: INFO: namespace e2e-tests-daemonsets-6gwgk deletion completed in 6.0876611s • [SLOW TEST:48.308 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 10:51:07.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: 
Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 19 10:51:08.498: INFO: Pod name wrapped-volume-race-a54ee4f6-99be-11ea-abcb-0242ac110018: Found 0 pods out of 5 May 19 10:51:13.505: INFO: Pod name wrapped-volume-race-a54ee4f6-99be-11ea-abcb-0242ac110018: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-a54ee4f6-99be-11ea-abcb-0242ac110018 in namespace e2e-tests-emptydir-wrapper-hwh82, will wait for the garbage collector to delete the pods May 19 10:53:05.604: INFO: Deleting ReplicationController wrapped-volume-race-a54ee4f6-99be-11ea-abcb-0242ac110018 took: 14.902071ms May 19 10:53:05.805: INFO: Terminating ReplicationController wrapped-volume-race-a54ee4f6-99be-11ea-abcb-0242ac110018 pods took: 200.61805ms STEP: Creating RC which spawns configmap-volume pods May 19 10:53:43.558: INFO: Pod name wrapped-volume-race-01bb00d4-99bf-11ea-abcb-0242ac110018: Found 0 pods out of 5 May 19 10:53:48.565: INFO: Pod name wrapped-volume-race-01bb00d4-99bf-11ea-abcb-0242ac110018: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-01bb00d4-99bf-11ea-abcb-0242ac110018 in namespace e2e-tests-emptydir-wrapper-hwh82, will wait for the garbage collector to delete the pods May 19 10:55:46.728: INFO: Deleting ReplicationController wrapped-volume-race-01bb00d4-99bf-11ea-abcb-0242ac110018 took: 8.046072ms May 19 10:55:47.228: INFO: Terminating ReplicationController wrapped-volume-race-01bb00d4-99bf-11ea-abcb-0242ac110018 pods took: 500.260891ms STEP: Creating RC which spawns configmap-volume pods May 19 10:56:32.382: INFO: Pod name wrapped-volume-race-665b5d8d-99bf-11ea-abcb-0242ac110018: Found 0 pods out of 5 May 19 10:56:37.389: INFO: Pod name wrapped-volume-race-665b5d8d-99bf-11ea-abcb-0242ac110018: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController 
wrapped-volume-race-665b5d8d-99bf-11ea-abcb-0242ac110018 in namespace e2e-tests-emptydir-wrapper-hwh82, will wait for the garbage collector to delete the pods
May 19 10:58:51.502: INFO: Deleting ReplicationController wrapped-volume-race-665b5d8d-99bf-11ea-abcb-0242ac110018 took: 8.283053ms
May 19 10:58:51.602: INFO: Terminating ReplicationController wrapped-volume-race-665b5d8d-99bf-11ea-abcb-0242ac110018 pods took: 100.218408ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 10:59:32.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-hwh82" for this suite.
May 19 10:59:40.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 10:59:40.582: INFO: namespace: e2e-tests-emptydir-wrapper-hwh82, resource: bindings, ignored listing per whitelist
May 19 10:59:40.584: INFO: namespace e2e-tests-emptydir-wrapper-hwh82 deletion completed in 8.251184669s
• [SLOW TEST:512.751 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 10:59:40.584: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-gbd2 STEP: Creating a pod to test atomic-volume-subpath May 19 10:59:40.733: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-gbd2" in namespace "e2e-tests-subpath-spwfp" to be "success or failure" May 19 10:59:40.737: INFO: Pod "pod-subpath-test-configmap-gbd2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32983ms May 19 10:59:42.742: INFO: Pod "pod-subpath-test-configmap-gbd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008346495s May 19 10:59:44.844: INFO: Pod "pod-subpath-test-configmap-gbd2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110703543s May 19 10:59:46.996: INFO: Pod "pod-subpath-test-configmap-gbd2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.26319634s May 19 10:59:49.001: INFO: Pod "pod-subpath-test-configmap-gbd2": Phase="Running", Reason="", readiness=false. Elapsed: 8.26764011s May 19 10:59:51.005: INFO: Pod "pod-subpath-test-configmap-gbd2": Phase="Running", Reason="", readiness=false. Elapsed: 10.272097834s May 19 10:59:53.010: INFO: Pod "pod-subpath-test-configmap-gbd2": Phase="Running", Reason="", readiness=false. Elapsed: 12.27646559s May 19 10:59:55.012: INFO: Pod "pod-subpath-test-configmap-gbd2": Phase="Running", Reason="", readiness=false. Elapsed: 14.279226997s May 19 10:59:57.016: INFO: Pod "pod-subpath-test-configmap-gbd2": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.28269632s May 19 10:59:59.019: INFO: Pod "pod-subpath-test-configmap-gbd2": Phase="Running", Reason="", readiness=false. Elapsed: 18.286231035s May 19 11:00:01.024: INFO: Pod "pod-subpath-test-configmap-gbd2": Phase="Running", Reason="", readiness=false. Elapsed: 20.290599134s May 19 11:00:03.028: INFO: Pod "pod-subpath-test-configmap-gbd2": Phase="Running", Reason="", readiness=false. Elapsed: 22.295046913s May 19 11:00:05.032: INFO: Pod "pod-subpath-test-configmap-gbd2": Phase="Running", Reason="", readiness=false. Elapsed: 24.299069224s May 19 11:00:07.036: INFO: Pod "pod-subpath-test-configmap-gbd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.302653162s STEP: Saw pod success May 19 11:00:07.036: INFO: Pod "pod-subpath-test-configmap-gbd2" satisfied condition "success or failure" May 19 11:00:07.038: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-gbd2 container test-container-subpath-configmap-gbd2: STEP: delete the pod May 19 11:00:07.063: INFO: Waiting for pod pod-subpath-test-configmap-gbd2 to disappear May 19 11:00:07.067: INFO: Pod pod-subpath-test-configmap-gbd2 no longer exists STEP: Deleting pod pod-subpath-test-configmap-gbd2 May 19 11:00:07.067: INFO: Deleting pod "pod-subpath-test-configmap-gbd2" in namespace "e2e-tests-subpath-spwfp" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:00:07.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-spwfp" for this suite. 
May 19 11:00:13.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:00:13.269: INFO: namespace: e2e-tests-subpath-spwfp, resource: bindings, ignored listing per whitelist
May 19 11:00:13.298: INFO: namespace e2e-tests-subpath-spwfp deletion completed in 6.226319436s
• [SLOW TEST:32.714 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod with mountPath of existing file [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:00:13.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-ea2c7ced-99bf-11ea-abcb-0242ac110018
STEP: Creating a pod to test consume configMaps
May 19 11:00:13.525: INFO: Waiting up to 5m0s for pod "pod-configmaps-ea2e6674-99bf-11ea-abcb-0242ac110018" in namespace "e2e-tests-configmap-x9cvr" to be "success or failure"
May 19 11:00:13.727: INFO: Pod 
"pod-configmaps-ea2e6674-99bf-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 201.487817ms May 19 11:00:15.730: INFO: Pod "pod-configmaps-ea2e6674-99bf-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204684697s May 19 11:00:17.734: INFO: Pod "pod-configmaps-ea2e6674-99bf-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.208623418s STEP: Saw pod success May 19 11:00:17.734: INFO: Pod "pod-configmaps-ea2e6674-99bf-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 11:00:17.737: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-ea2e6674-99bf-11ea-abcb-0242ac110018 container configmap-volume-test: STEP: delete the pod May 19 11:00:17.844: INFO: Waiting for pod pod-configmaps-ea2e6674-99bf-11ea-abcb-0242ac110018 to disappear May 19 11:00:17.877: INFO: Pod pod-configmaps-ea2e6674-99bf-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:00:17.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-x9cvr" for this suite. 
May 19 11:00:23.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:00:24.002: INFO: namespace: e2e-tests-configmap-x9cvr, resource: bindings, ignored listing per whitelist
May 19 11:00:24.011: INFO: namespace e2e-tests-configmap-x9cvr deletion completed in 6.130413407s
• [SLOW TEST:10.713 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:00:24.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-f08261a3-99bf-11ea-abcb-0242ac110018
STEP: Creating a pod to test consume configMaps
May 19 11:00:24.130: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f084624e-99bf-11ea-abcb-0242ac110018" in namespace "e2e-tests-projected-tvcch" to be "success or failure"
May 19 11:00:24.135: INFO: Pod 
"pod-projected-configmaps-f084624e-99bf-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 5.135077ms May 19 11:00:26.139: INFO: Pod "pod-projected-configmaps-f084624e-99bf-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009048585s May 19 11:00:28.143: INFO: Pod "pod-projected-configmaps-f084624e-99bf-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012993546s STEP: Saw pod success May 19 11:00:28.143: INFO: Pod "pod-projected-configmaps-f084624e-99bf-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 11:00:28.146: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-f084624e-99bf-11ea-abcb-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 19 11:00:28.166: INFO: Waiting for pod pod-projected-configmaps-f084624e-99bf-11ea-abcb-0242ac110018 to disappear May 19 11:00:28.182: INFO: Pod pod-projected-configmaps-f084624e-99bf-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:00:28.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-tvcch" for this suite. 
May 19 11:00:34.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:00:34.220: INFO: namespace: e2e-tests-projected-tvcch, resource: bindings, ignored listing per whitelist
May 19 11:00:34.270: INFO: namespace e2e-tests-projected-tvcch deletion completed in 6.085098708s
• [SLOW TEST:10.258 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:00:34.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
May 19 11:00:34.386: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:00:44.894: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-5rgwt" for this suite. May 19 11:00:50.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:00:51.058: INFO: namespace: e2e-tests-init-container-5rgwt, resource: bindings, ignored listing per whitelist May 19 11:00:51.061: INFO: namespace e2e-tests-init-container-5rgwt deletion completed in 6.140804491s • [SLOW TEST:16.791 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:00:51.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token May 19 11:00:51.655: INFO: Waiting up to 5m0s for pod "pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-tmhbx" in namespace "e2e-tests-svcaccounts-rwbc5" to be "success or failure" May 19 11:00:51.913: INFO: Pod "pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-tmhbx": Phase="Pending", Reason="", readiness=false. 
Elapsed: 257.945291ms May 19 11:00:53.917: INFO: Pod "pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-tmhbx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.261923543s May 19 11:00:55.998: INFO: Pod "pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-tmhbx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.342620018s May 19 11:00:58.010: INFO: Pod "pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-tmhbx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.354926715s May 19 11:01:00.014: INFO: Pod "pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-tmhbx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.358240274s May 19 11:01:02.017: INFO: Pod "pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-tmhbx": Phase="Running", Reason="", readiness=false. Elapsed: 10.361680833s May 19 11:01:04.020: INFO: Pod "pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-tmhbx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.364960718s STEP: Saw pod success May 19 11:01:04.020: INFO: Pod "pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-tmhbx" satisfied condition "success or failure" May 19 11:01:04.023: INFO: Trying to get logs from node hunter-worker pod pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-tmhbx container token-test: STEP: delete the pod May 19 11:01:04.097: INFO: Waiting for pod pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-tmhbx to disappear May 19 11:01:04.117: INFO: Pod pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-tmhbx no longer exists STEP: Creating a pod to test consume service account root CA May 19 11:01:04.120: INFO: Waiting up to 5m0s for pod "pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-clf4r" in namespace "e2e-tests-svcaccounts-rwbc5" to be "success or failure" May 19 11:01:04.219: INFO: Pod "pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-clf4r": Phase="Pending", Reason="", readiness=false. 
Elapsed: 98.978026ms May 19 11:01:06.223: INFO: Pod "pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-clf4r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103222751s May 19 11:01:08.236: INFO: Pod "pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-clf4r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115836562s May 19 11:01:10.240: INFO: Pod "pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-clf4r": Phase="Running", Reason="", readiness=false. Elapsed: 6.120205902s May 19 11:01:12.245: INFO: Pod "pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-clf4r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.125166919s STEP: Saw pod success May 19 11:01:12.245: INFO: Pod "pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-clf4r" satisfied condition "success or failure" May 19 11:01:12.249: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-clf4r container root-ca-test: STEP: delete the pod May 19 11:01:12.288: INFO: Waiting for pod pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-clf4r to disappear May 19 11:01:12.299: INFO: Pod pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-clf4r no longer exists STEP: Creating a pod to test consume service account namespace May 19 11:01:12.320: INFO: Waiting up to 5m0s for pod "pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-h9kq2" in namespace "e2e-tests-svcaccounts-rwbc5" to be "success or failure" May 19 11:01:12.324: INFO: Pod "pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-h9kq2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.302627ms May 19 11:01:14.328: INFO: Pod "pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-h9kq2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007339902s May 19 11:01:16.584: INFO: Pod "pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-h9kq2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.263800361s May 19 11:01:18.588: INFO: Pod "pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-h9kq2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.267915948s May 19 11:01:20.593: INFO: Pod "pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-h9kq2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.272949503s STEP: Saw pod success May 19 11:01:20.593: INFO: Pod "pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-h9kq2" satisfied condition "success or failure" May 19 11:01:20.596: INFO: Trying to get logs from node hunter-worker pod pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-h9kq2 container namespace-test: STEP: delete the pod May 19 11:01:20.643: INFO: Waiting for pod pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-h9kq2 to disappear May 19 11:01:20.647: INFO: Pod pod-service-account-00eca3c9-99c0-11ea-abcb-0242ac110018-h9kq2 no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:01:20.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-rwbc5" for this suite. 
May 19 11:01:26.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:01:26.779: INFO: namespace: e2e-tests-svcaccounts-rwbc5, resource: bindings, ignored listing per whitelist
May 19 11:01:26.813: INFO: namespace e2e-tests-svcaccounts-rwbc5 deletion completed in 6.161248469s
• [SLOW TEST:35.751 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:01:26.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-zwrmj
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 19 11:01:27.011: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 19 11:01:57.174: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.128 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-zwrmj PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 19 11:01:57.174: INFO: >>> kubeConfig: /root/.kube/config
I0519 11:01:57.199130 6 log.go:172] (0xc001b8e2c0) (0xc001ef7c20) Create stream
I0519 11:01:57.199157 6 log.go:172] (0xc001b8e2c0) (0xc001ef7c20) Stream added, broadcasting: 1
I0519 11:01:57.205763 6 log.go:172] (0xc001b8e2c0) Reply frame received for 1
I0519 11:01:57.205803 6 log.go:172] (0xc001b8e2c0) (0xc0016cc000) Create stream
I0519 11:01:57.205813 6 log.go:172] (0xc001b8e2c0) (0xc0016cc000) Stream added, broadcasting: 3
I0519 11:01:57.206825 6 log.go:172] (0xc001b8e2c0) Reply frame received for 3
I0519 11:01:57.206860 6 log.go:172] (0xc001b8e2c0) (0xc00034e1e0) Create stream
I0519 11:01:57.206875 6 log.go:172] (0xc001b8e2c0) (0xc00034e1e0) Stream added, broadcasting: 5
I0519 11:01:57.207688 6 log.go:172] (0xc001b8e2c0) Reply frame received for 5
I0519 11:01:58.289674 6 log.go:172] (0xc001b8e2c0) Data frame received for 3
I0519 11:01:58.289723 6 log.go:172] (0xc0016cc000) (3) Data frame handling
I0519 11:01:58.289762 6 log.go:172] (0xc0016cc000) (3) Data frame sent
I0519 11:01:58.289788 6 log.go:172] (0xc001b8e2c0) Data frame received for 3
I0519 11:01:58.289810 6 log.go:172] (0xc0016cc000) (3) Data frame handling
I0519 11:01:58.290144 6 log.go:172] (0xc001b8e2c0) Data frame received for 5
I0519 11:01:58.290172 6 log.go:172] (0xc00034e1e0) (5) Data frame handling
I0519 11:01:58.292008 6 log.go:172] (0xc001b8e2c0) Data frame received for 1
I0519 11:01:58.292046 6 log.go:172] (0xc001ef7c20) (1) Data frame handling
I0519 11:01:58.292069 6 log.go:172] (0xc001ef7c20) (1) Data frame sent
I0519 11:01:58.292108 6 log.go:172] (0xc001b8e2c0) (0xc001ef7c20) Stream removed, broadcasting: 1
I0519 11:01:58.292156 6 log.go:172] (0xc001b8e2c0) Go away received
I0519 11:01:58.292341 6 log.go:172] (0xc001b8e2c0) (0xc001ef7c20) Stream removed, broadcasting: 1
I0519 11:01:58.292396 6 log.go:172] (0xc001b8e2c0) (0xc0016cc000) Stream removed, broadcasting: 3
I0519 11:01:58.292442 6 log.go:172] (0xc001b8e2c0) (0xc00034e1e0) Stream removed, broadcasting: 5
May 19 11:01:58.292: INFO: Found all expected endpoints: [netserver-0]
May 19 11:01:58.406: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.159 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-zwrmj PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 19 11:01:58.406: INFO: >>> kubeConfig: /root/.kube/config
I0519 11:01:58.436984 6 log.go:172] (0xc001d28370) (0xc001fea140) Create stream
I0519 11:01:58.437024 6 log.go:172] (0xc001d28370) (0xc001fea140) Stream added, broadcasting: 1
I0519 11:01:58.440063 6 log.go:172] (0xc001d28370) Reply frame received for 1
I0519 11:01:58.440116 6 log.go:172] (0xc001d28370) (0xc001fea1e0) Create stream
I0519 11:01:58.440134 6 log.go:172] (0xc001d28370) (0xc001fea1e0) Stream added, broadcasting: 3
I0519 11:01:58.441468 6 log.go:172] (0xc001d28370) Reply frame received for 3
I0519 11:01:58.441576 6 log.go:172] (0xc001d28370) (0xc00034e960) Create stream
I0519 11:01:58.441642 6 log.go:172] (0xc001d28370) (0xc00034e960) Stream added, broadcasting: 5
I0519 11:01:58.442903 6 log.go:172] (0xc001d28370) Reply frame received for 5
I0519 11:01:59.502144 6 log.go:172] (0xc001d28370) Data frame received for 3
I0519 11:01:59.502174 6 log.go:172] (0xc001fea1e0) (3) Data frame handling
I0519 11:01:59.502195 6 log.go:172] (0xc001fea1e0) (3) Data frame sent
I0519 11:01:59.502203 6 log.go:172] (0xc001d28370) Data frame received for 3
I0519 11:01:59.502214 6 log.go:172] (0xc001fea1e0) (3) Data frame handling
I0519 11:01:59.502407 6 log.go:172] (0xc001d28370) Data frame received for 5
I0519 11:01:59.502419 6 log.go:172] (0xc00034e960) (5) Data frame handling
I0519 11:01:59.504199 6 log.go:172] (0xc001d28370) Data frame received for 1
I0519 11:01:59.504234 6 log.go:172] (0xc001fea140) (1) Data frame handling
I0519 11:01:59.504269 6 log.go:172] (0xc001fea140) (1) Data frame sent
I0519 11:01:59.504297 6 log.go:172] (0xc001d28370) (0xc001fea140) Stream removed, broadcasting: 1
I0519 11:01:59.504317 6 log.go:172] (0xc001d28370) Go away received
I0519 11:01:59.504517 6 log.go:172] (0xc001d28370) (0xc001fea140) Stream removed, broadcasting: 1
I0519 11:01:59.504548 6 log.go:172] (0xc001d28370) (0xc001fea1e0) Stream removed, broadcasting: 3
I0519 11:01:59.504571 6 log.go:172] (0xc001d28370) (0xc00034e960) Stream removed, broadcasting: 5
May 19 11:01:59.504: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:01:59.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-zwrmj" for this suite.
May 19 11:02:21.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:02:21.542: INFO: namespace: e2e-tests-pod-network-test-zwrmj, resource: bindings, ignored listing per whitelist
May 19 11:02:21.601: INFO: namespace e2e-tests-pod-network-test-zwrmj deletion completed in 22.091661714s
• [SLOW TEST:54.788 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for node-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:02:21.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
May 19 11:02:21.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-frlll'
May 19 11:02:24.223: INFO: stderr: ""
May 19 11:02:24.223: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
May 19 11:02:24.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-frlll'
May 19 11:02:31.267: INFO: stderr: ""
May 19 11:02:31.267: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:02:31.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-frlll" for this suite.
May 19 11:02:37.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:02:37.319: INFO: namespace: e2e-tests-kubectl-frlll, resource: bindings, ignored listing per whitelist
May 19 11:02:37.359: INFO: namespace e2e-tests-kubectl-frlll deletion completed in 6.087915432s
• [SLOW TEST:15.757 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:02:37.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 19 11:02:37.504: INFO: Creating deployment "test-recreate-deployment"
May 19 11:02:37.522: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
May 19 11:02:37.563: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
May 19 11:02:39.893: INFO: Waiting deployment "test-recreate-deployment" to complete
May 19 11:02:39.896: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725482957, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725482957, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725482957, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725482957, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 19 11:02:41.903: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
May 19 11:02:41.911: INFO: Updating deployment test-recreate-deployment
May 19 11:02:41.911: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
May 19 11:02:42.630: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-l9wcx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-l9wcx/deployments/test-recreate-deployment,UID:4004f3e0-99c0-11ea-99e8-0242ac110002,ResourceVersion:11386976,Generation:2,CreationTimestamp:2020-05-19 11:02:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name:
sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-19 11:02:42 +0000 UTC 2020-05-19 11:02:42 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-19 11:02:42 +0000 UTC 2020-05-19 11:02:37 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 19 11:02:42.634: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-l9wcx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-l9wcx/replicasets/test-recreate-deployment-589c4bfd,UID:42b74f1a-99c0-11ea-99e8-0242ac110002,ResourceVersion:11386974,Generation:1,CreationTimestamp:2020-05-19 11:02:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 4004f3e0-99c0-11ea-99e8-0242ac110002 0xc0010c9e1f 0xc0010c9e30}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 19 11:02:42.634: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 19 11:02:42.634: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-l9wcx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-l9wcx/replicasets/test-recreate-deployment-5bf7f65dc,UID:400dc78b-99c0-11ea-99e8-0242ac110002,ResourceVersion:11386965,Generation:2,CreationTimestamp:2020-05-19 11:02:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 4004f3e0-99c0-11ea-99e8-0242ac110002 0xc0010c9f50 0xc0010c9f51}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 19 11:02:42.638: INFO: Pod "test-recreate-deployment-589c4bfd-rvfhw" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-rvfhw,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-l9wcx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-l9wcx/pods/test-recreate-deployment-589c4bfd-rvfhw,UID:42b7b9fd-99c0-11ea-99e8-0242ac110002,ResourceVersion:11386978,Generation:0,CreationTimestamp:2020-05-19 11:02:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 42b74f1a-99c0-11ea-99e8-0242ac110002 0xc001a0f29f 0xc001a0f2b0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dn2gv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dn2gv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dn2gv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a0f320} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a0f340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:02:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:02:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:02:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:02:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-19 11:02:42 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:02:42.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-l9wcx" for this suite. 
May 19 11:02:48.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:02:48.687: INFO: namespace: e2e-tests-deployment-l9wcx, resource: bindings, ignored listing per whitelist
May 19 11:02:48.727: INFO: namespace e2e-tests-deployment-l9wcx deletion completed in 6.086201763s
• [SLOW TEST:11.368 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
RecreateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:02:48.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-4704a593-99c0-11ea-abcb-0242ac110018
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:02:55.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-rlsfr" for this suite.
May 19 11:03:17.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:03:17.485: INFO: namespace: e2e-tests-configmap-rlsfr, resource: bindings, ignored listing per whitelist
May 19 11:03:17.531: INFO: namespace e2e-tests-configmap-rlsfr deletion completed in 22.072418164s
• [SLOW TEST:28.804 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:03:17.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
May 19 11:03:17.673: INFO: Waiting up to 5m0s for pod "client-containers-57f1069b-99c0-11ea-abcb-0242ac110018" in namespace "e2e-tests-containers-fz7cg" to be "success or failure"
May 19 11:03:17.679: INFO: Pod "client-containers-57f1069b-99c0-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.496478ms
May 19 11:03:19.683: INFO: Pod "client-containers-57f1069b-99c0-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010004686s
May 19 11:03:21.686: INFO: Pod "client-containers-57f1069b-99c0-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013912554s
STEP: Saw pod success
May 19 11:03:21.687: INFO: Pod "client-containers-57f1069b-99c0-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 11:03:21.689: INFO: Trying to get logs from node hunter-worker2 pod client-containers-57f1069b-99c0-11ea-abcb-0242ac110018 container test-container:
STEP: delete the pod
May 19 11:03:21.882: INFO: Waiting for pod client-containers-57f1069b-99c0-11ea-abcb-0242ac110018 to disappear
May 19 11:03:21.911: INFO: Pod client-containers-57f1069b-99c0-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:03:21.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-fz7cg" for this suite.
May 19 11:03:27.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:03:27.980: INFO: namespace: e2e-tests-containers-fz7cg, resource: bindings, ignored listing per whitelist
May 19 11:03:28.004: INFO: namespace e2e-tests-containers-fz7cg deletion completed in 6.089275254s
• [SLOW TEST:10.472 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:03:28.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 19 11:03:28.139: INFO: Waiting up to 5m0s for pod "pod-5e2cdab1-99c0-11ea-abcb-0242ac110018" in namespace "e2e-tests-emptydir-mmg5b" to be "success or failure"
May 19 11:03:28.151: INFO: Pod "pod-5e2cdab1-99c0-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 11.863192ms
May 19 11:03:30.155: INFO: Pod "pod-5e2cdab1-99c0-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015856608s
May 19 11:03:32.158: INFO: Pod "pod-5e2cdab1-99c0-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019394426s
STEP: Saw pod success
May 19 11:03:32.159: INFO: Pod "pod-5e2cdab1-99c0-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 11:03:32.161: INFO: Trying to get logs from node hunter-worker pod pod-5e2cdab1-99c0-11ea-abcb-0242ac110018 container test-container:
STEP: delete the pod
May 19 11:03:32.403: INFO: Waiting for pod pod-5e2cdab1-99c0-11ea-abcb-0242ac110018 to disappear
May 19 11:03:32.414: INFO: Pod pod-5e2cdab1-99c0-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:03:32.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-mmg5b" for this suite.
May 19 11:03:38.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:03:38.463: INFO: namespace: e2e-tests-emptydir-mmg5b, resource: bindings, ignored listing per whitelist
May 19 11:03:38.524: INFO: namespace e2e-tests-emptydir-mmg5b deletion completed in 6.107085663s
• [SLOW TEST:10.520 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:03:38.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 19 11:03:38.596: INFO: Waiting up to 5m0s for pod "downwardapi-volume-646db2ce-99c0-11ea-abcb-0242ac110018" in namespace "e2e-tests-downward-api-td5mv" to be "success or failure"
May 19 11:03:38.635: INFO: Pod "downwardapi-volume-646db2ce-99c0-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 38.665612ms
May 19 11:03:40.639: INFO: Pod "downwardapi-volume-646db2ce-99c0-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043168652s
May 19 11:03:42.644: INFO: Pod "downwardapi-volume-646db2ce-99c0-11ea-abcb-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.047864111s
May 19 11:03:44.647: INFO: Pod "downwardapi-volume-646db2ce-99c0-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.051360323s
STEP: Saw pod success
May 19 11:03:44.647: INFO: Pod "downwardapi-volume-646db2ce-99c0-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 11:03:44.650: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-646db2ce-99c0-11ea-abcb-0242ac110018 container client-container:
STEP: delete the pod
May 19 11:03:44.679: INFO: Waiting for pod downwardapi-volume-646db2ce-99c0-11ea-abcb-0242ac110018 to disappear
May 19 11:03:44.686: INFO: Pod downwardapi-volume-646db2ce-99c0-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:03:44.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-td5mv" for this suite.
May 19 11:03:50.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:03:50.719: INFO: namespace: e2e-tests-downward-api-td5mv, resource: bindings, ignored listing per whitelist
May 19 11:03:50.770: INFO: namespace e2e-tests-downward-api-td5mv deletion completed in 6.08114639s
• [SLOW TEST:12.246 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:03:50.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
May 19 11:03:55.405: INFO: Successfully updated pod "labelsupdate6bbc3519-99c0-11ea-abcb-0242ac110018"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:03:59.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-x5z28" for this suite.
May 19 11:04:21.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:04:21.505: INFO: namespace: e2e-tests-projected-x5z28, resource: bindings, ignored listing per whitelist
May 19 11:04:21.513: INFO: namespace e2e-tests-projected-x5z28 deletion completed in 22.084660144s
• [SLOW TEST:30.743 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:04:21.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-7e1f9c97-99c0-11ea-abcb-0242ac110018
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-7e1f9c97-99c0-11ea-abcb-0242ac110018
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:05:40.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-m627n" for this suite.
May 19 11:06:02.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:06:02.192: INFO: namespace: e2e-tests-projected-m627n, resource: bindings, ignored listing per whitelist
May 19 11:06:02.254: INFO: namespace e2e-tests-projected-m627n deletion completed in 22.08632829s
• [SLOW TEST:100.741 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:06:02.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
May 19 11:06:02.382: INFO: Waiting up to 5m0s for pod "downward-api-ba21960e-99c0-11ea-abcb-0242ac110018" in namespace "e2e-tests-downward-api-rl4pc" to be "success or failure"
May 19 11:06:02.389: INFO: Pod "downward-api-ba21960e-99c0-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.588626ms
May 19 11:06:04.393: INFO: Pod "downward-api-ba21960e-99c0-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010849792s
May 19 11:06:06.396: INFO: Pod "downward-api-ba21960e-99c0-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014076587s
STEP: Saw pod success
May 19 11:06:06.396: INFO: Pod "downward-api-ba21960e-99c0-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 11:06:06.399: INFO: Trying to get logs from node hunter-worker2 pod downward-api-ba21960e-99c0-11ea-abcb-0242ac110018 container dapi-container:
STEP: delete the pod
May 19 11:06:06.420: INFO: Waiting for pod downward-api-ba21960e-99c0-11ea-abcb-0242ac110018 to disappear
May 19 11:06:06.424: INFO: Pod downward-api-ba21960e-99c0-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:06:06.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-rl4pc" for this suite.
May 19 11:06:12.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:06:12.519: INFO: namespace: e2e-tests-downward-api-rl4pc, resource: bindings, ignored listing per whitelist
May 19 11:06:12.526: INFO: namespace e2e-tests-downward-api-rl4pc deletion completed in 6.099475229s
• [SLOW TEST:10.272 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:06:12.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-c03fd1ed-99c0-11ea-abcb-0242ac110018
STEP: Creating secret with name secret-projected-all-test-volume-c03fd1d8-99c0-11ea-abcb-0242ac110018
STEP: Creating a pod to test Check all projections for projected volume plugin
May 19 11:06:12.666: INFO: Waiting up to 5m0s for pod "projected-volume-c03fd19e-99c0-11ea-abcb-0242ac110018" in namespace "e2e-tests-projected-lb9ls" to be "success or failure"
May 19 11:06:12.683: INFO: Pod "projected-volume-c03fd19e-99c0-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.72428ms
May 19 11:06:14.739: INFO: Pod "projected-volume-c03fd19e-99c0-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073410712s
May 19 11:06:16.743: INFO: Pod "projected-volume-c03fd19e-99c0-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077402828s
STEP: Saw pod success
May 19 11:06:16.743: INFO: Pod "projected-volume-c03fd19e-99c0-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 11:06:16.746: INFO: Trying to get logs from node hunter-worker2 pod projected-volume-c03fd19e-99c0-11ea-abcb-0242ac110018 container projected-all-volume-test:
STEP: delete the pod
May 19 11:06:16.798: INFO: Waiting for pod projected-volume-c03fd19e-99c0-11ea-abcb-0242ac110018 to disappear
May 19 11:06:16.819: INFO: Pod projected-volume-c03fd19e-99c0-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:06:16.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lb9ls" for this suite.
May 19 11:06:22.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:06:22.869: INFO: namespace: e2e-tests-projected-lb9ls, resource: bindings, ignored listing per whitelist
May 19 11:06:22.908: INFO: namespace e2e-tests-projected-lb9ls deletion completed in 6.084768303s
• [SLOW TEST:10.382 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:06:22.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 19 11:06:23.018: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
May 19 11:06:23.023: INFO: Number of nodes with available pods: 0
May 19 11:06:23.023: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
May 19 11:06:23.074: INFO: Number of nodes with available pods: 0
May 19 11:06:23.074: INFO: Node hunter-worker is running more than one daemon pod
May 19 11:06:24.078: INFO: Number of nodes with available pods: 0
May 19 11:06:24.078: INFO: Node hunter-worker is running more than one daemon pod
May 19 11:06:25.078: INFO: Number of nodes with available pods: 0
May 19 11:06:25.078: INFO: Node hunter-worker is running more than one daemon pod
May 19 11:06:26.078: INFO: Number of nodes with available pods: 0
May 19 11:06:26.078: INFO: Node hunter-worker is running more than one daemon pod
May 19 11:06:27.079: INFO: Number of nodes with available pods: 1
May 19 11:06:27.079: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
May 19 11:06:27.158: INFO: Number of nodes with available pods: 1
May 19 11:06:27.158: INFO: Number of running nodes: 0, number of available pods: 1
May 19 11:06:28.163: INFO: Number of nodes with available pods: 0
May 19 11:06:28.163: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
May 19 11:06:28.195: INFO: Number of nodes with available pods: 0
May 19 11:06:28.195: INFO: Node hunter-worker is running more than one daemon pod
May 19 11:06:29.199: INFO: Number of nodes with available pods: 0
May 19 11:06:29.199: INFO: Node hunter-worker is running more than one daemon pod
May 19 11:06:30.199: INFO: Number of nodes with available pods: 0
May 19 11:06:30.199: INFO: Node hunter-worker is running more than one daemon pod
May 19 11:06:31.199: INFO: Number of nodes with available pods: 0
May 19 11:06:31.199: INFO: Node hunter-worker is running more than one daemon pod
May 19 11:06:32.200: INFO: Number of nodes with available pods: 0
May 19 11:06:32.200: INFO: Node hunter-worker is running more than one daemon pod
May 19 11:06:33.199: INFO: Number of nodes with available pods: 0
May 19 11:06:33.199: INFO: Node hunter-worker is running more than one daemon pod
May 19 11:06:34.199: INFO: Number of nodes with available pods: 0
May 19 11:06:34.199: INFO: Node hunter-worker is running more than one daemon pod
May 19 11:06:35.200: INFO: Number of nodes with available pods: 0
May 19 11:06:35.200: INFO: Node hunter-worker is running more than one daemon pod
May 19 11:06:36.199: INFO: Number of nodes with available pods: 0
May 19 11:06:36.199: INFO: Node hunter-worker is running more than one daemon pod
May 19 11:06:37.200: INFO: Number of nodes with available pods: 0
May 19 11:06:37.200: INFO: Node hunter-worker is running more than one daemon pod
May 19 11:06:38.199: INFO: Number of nodes with available pods: 0
May 19 11:06:38.199: INFO: Node hunter-worker is running more than one daemon pod
May 19 11:06:39.199: INFO: Number of nodes with available pods: 0
May 19 11:06:39.199: INFO: Node hunter-worker is running more than one daemon pod
May 19 11:06:40.199: INFO: Number of nodes with available pods: 0
May 19 11:06:40.199: INFO: Node hunter-worker is running more than one daemon pod
May 19 11:06:41.198: INFO: Number of nodes with available pods: 0
May 19 11:06:41.198: INFO: Node hunter-worker is running more than one daemon pod
May 19 11:06:42.200: INFO: Number of nodes with available pods: 0
May 19 11:06:42.200: INFO: Node hunter-worker is running more than one daemon pod
May 19 11:06:43.199: INFO: Number of nodes with available pods: 0
May 19 11:06:43.199: INFO: Node hunter-worker is running more than one daemon pod
May 19 11:06:44.199: INFO: Number of nodes with available pods: 0
May 19 11:06:44.199: INFO: Node hunter-worker is running more than one daemon pod
May 19 11:06:45.199: INFO: Number of nodes with available pods: 1
May 19 11:06:45.199: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-zgp7f, will wait for the garbage collector to delete the pods
May 19 11:06:45.262: INFO: Deleting DaemonSet.extensions daemon-set took: 5.28465ms
May 19 11:06:45.362: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.173811ms
May 19 11:06:51.422: INFO: Number of nodes with available pods: 0
May 19 11:06:51.422: INFO: Number of running nodes: 0, number of available pods: 0
May 19 11:06:51.425: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-zgp7f/daemonsets","resourceVersion":"11387741"},"items":null}
May 19 11:06:51.462: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-zgp7f/pods","resourceVersion":"11387742"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:06:51.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-zgp7f" for this suite.
May 19 11:06:57.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:06:57.597: INFO: namespace: e2e-tests-daemonsets-zgp7f, resource: bindings, ignored listing per whitelist
May 19 11:06:57.597: INFO: namespace e2e-tests-daemonsets-zgp7f deletion completed in 6.103757996s
• [SLOW TEST:34.689 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:06:57.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-5rkm6 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-5rkm6;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-5rkm6 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-5rkm6;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-5rkm6.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-5rkm6.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-5rkm6.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-5rkm6.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-5rkm6.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-5rkm6.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-5rkm6.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-5rkm6.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-5rkm6.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 160.144.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.144.160_udp@PTR;check="$$(dig +tcp +noall +answer +search 160.144.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.144.160_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-5rkm6 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-5rkm6;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-5rkm6 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-5rkm6;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-5rkm6.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-5rkm6.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-5rkm6.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-5rkm6.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-5rkm6.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-5rkm6.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-5rkm6.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-5rkm6.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-5rkm6.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 160.144.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.144.160_udp@PTR;check="$$(dig +tcp +noall +answer +search 160.144.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.144.160_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 19 11:07:05.871: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-5rkm6 from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018)
May 19 11:07:05.881: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018)
May 19 11:07:05.907: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018)
May 19 11:07:05.910: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018)
May 19 11:07:05.912: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5rkm6 from pod
e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:05.915: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5rkm6 from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:05.918: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5rkm6.svc from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:05.921: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5rkm6.svc from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:05.925: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:05.927: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:05.947: INFO: Lookups using e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018 failed for: [wheezy_tcp@dns-test-service.e2e-tests-dns-5rkm6 wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-5rkm6 jessie_tcp@dns-test-service.e2e-tests-dns-5rkm6 
jessie_udp@dns-test-service.e2e-tests-dns-5rkm6.svc jessie_tcp@dns-test-service.e2e-tests-dns-5rkm6.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc] May 19 11:07:10.965: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-5rkm6 from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:10.974: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:10.998: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:11.001: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:11.003: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5rkm6 from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:11.006: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5rkm6 from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:11.008: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5rkm6.svc from pod 
e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:11.011: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5rkm6.svc from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:11.013: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:11.016: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:11.033: INFO: Lookups using e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018 failed for: [wheezy_tcp@dns-test-service.e2e-tests-dns-5rkm6 wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-5rkm6 jessie_tcp@dns-test-service.e2e-tests-dns-5rkm6 jessie_udp@dns-test-service.e2e-tests-dns-5rkm6.svc jessie_tcp@dns-test-service.e2e-tests-dns-5rkm6.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc] May 19 11:07:15.962: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-5rkm6 from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:15.971: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:15.994: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:15.997: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:16.000: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5rkm6 from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:16.003: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5rkm6 from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:16.006: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5rkm6.svc from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:16.008: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5rkm6.svc from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:16.011: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc from pod 
e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:16.013: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:16.031: INFO: Lookups using e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018 failed for: [wheezy_tcp@dns-test-service.e2e-tests-dns-5rkm6 wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-5rkm6 jessie_tcp@dns-test-service.e2e-tests-dns-5rkm6 jessie_udp@dns-test-service.e2e-tests-dns-5rkm6.svc jessie_tcp@dns-test-service.e2e-tests-dns-5rkm6.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc] May 19 11:07:20.962: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-5rkm6 from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:20.970: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:20.990: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:20.992: INFO: Unable to read jessie_tcp@dns-test-service from pod 
e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:20.995: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5rkm6 from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:20.998: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5rkm6 from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:21.001: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5rkm6.svc from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:21.004: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5rkm6.svc from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:21.006: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:21.010: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:21.026: INFO: Lookups using e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018 failed for: 
[wheezy_tcp@dns-test-service.e2e-tests-dns-5rkm6 wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-5rkm6 jessie_tcp@dns-test-service.e2e-tests-dns-5rkm6 jessie_udp@dns-test-service.e2e-tests-dns-5rkm6.svc jessie_tcp@dns-test-service.e2e-tests-dns-5rkm6.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc] May 19 11:07:25.960: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-5rkm6 from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:25.968: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:25.985: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:25.987: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:25.989: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5rkm6 from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:25.991: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5rkm6 from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the 
server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:25.993: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5rkm6.svc from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:25.995: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5rkm6.svc from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:25.997: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:25.999: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:26.012: INFO: Lookups using e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018 failed for: [wheezy_tcp@dns-test-service.e2e-tests-dns-5rkm6 wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-5rkm6 jessie_tcp@dns-test-service.e2e-tests-dns-5rkm6 jessie_udp@dns-test-service.e2e-tests-dns-5rkm6.svc jessie_tcp@dns-test-service.e2e-tests-dns-5rkm6.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc] May 19 11:07:30.962: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-5rkm6 from pod 
e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:31.004: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:31.027: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:31.030: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:31.033: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5rkm6 from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:31.035: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5rkm6 from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:31.038: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5rkm6.svc from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018) May 19 11:07:31.041: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5rkm6.svc from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested 
resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018)
May 19 11:07:31.044: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018)
May 19 11:07:31.047: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc from pod e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018: the server could not find the requested resource (get pods dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018)
May 19 11:07:31.083: INFO: Lookups using e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018 failed for: [wheezy_tcp@dns-test-service.e2e-tests-dns-5rkm6 wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-5rkm6 jessie_tcp@dns-test-service.e2e-tests-dns-5rkm6 jessie_udp@dns-test-service.e2e-tests-dns-5rkm6.svc jessie_tcp@dns-test-service.e2e-tests-dns-5rkm6.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5rkm6.svc]
May 19 11:07:36.021: INFO: DNS probes using e2e-tests-dns-5rkm6/dns-test-db2aa8a5-99c0-11ea-abcb-0242ac110018 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:07:36.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-5rkm6" for this suite.
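Annotation: the probe scripts logged earlier in this test build two DNS names inline with awk — a dashed-IP pod A record under pod.cluster.local, and the in-addr.arpa PTR name for the service IP 10.98.144.160. As a readability aid only, the same constructions in Python (the pod IP in the usage comment is illustrative, not taken from this run):

```python
def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Dashed-IP pod A record, as the awk pipeline in the probe script builds it."""
    return pod_ip.replace(".", "-") + f".{namespace}.pod.cluster.local"

def ptr_name(ip: str) -> str:
    """Reverse-lookup name: IPv4 octets reversed under in-addr.arpa."""
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa."

# Illustrative pod IP; the namespace comes from this run.
# pod_a_record("10.244.1.3", "e2e-tests-dns-5rkm6")
#   -> "10-244-1-3.e2e-tests-dns-5rkm6.pod.cluster.local"
# ptr_name("10.98.144.160") -> "160.144.98.10.in-addr.arpa."
```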
May 19 11:07:44.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:07:44.182: INFO: namespace: e2e-tests-dns-5rkm6, resource: bindings, ignored listing per whitelist
May 19 11:07:44.239: INFO: namespace e2e-tests-dns-5rkm6 deletion completed in 8.08794334s
• [SLOW TEST:46.641 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Secrets
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:07:44.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-5z425/secret-test-f6e527a3-99c0-11ea-abcb-0242ac110018
STEP: Creating a pod to test consume secrets
May 19 11:07:44.423: INFO: Waiting up to 5m0s for pod "pod-configmaps-f6eb2105-99c0-11ea-abcb-0242ac110018" in namespace "e2e-tests-secrets-5z425" to be "success or failure"
May 19 11:07:44.429: INFO: Pod "pod-configmaps-f6eb2105-99c0-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false.
Elapsed: 6.488122ms
May 19 11:07:46.433: INFO: Pod "pod-configmaps-f6eb2105-99c0-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01002615s
May 19 11:07:48.436: INFO: Pod "pod-configmaps-f6eb2105-99c0-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013205512s
May 19 11:07:50.440: INFO: Pod "pod-configmaps-f6eb2105-99c0-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017585532s
STEP: Saw pod success
May 19 11:07:50.440: INFO: Pod "pod-configmaps-f6eb2105-99c0-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 11:07:50.443: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-f6eb2105-99c0-11ea-abcb-0242ac110018 container env-test:
STEP: delete the pod
May 19 11:07:50.458: INFO: Waiting for pod pod-configmaps-f6eb2105-99c0-11ea-abcb-0242ac110018 to disappear
May 19 11:07:50.462: INFO: Pod pod-configmaps-f6eb2105-99c0-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:07:50.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-5z425" for this suite.
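Annotation: the "success or failure" wait above polls the pod phase roughly every 2 s under a 5m0s cap and stops at the first terminal phase. A minimal sketch of that polling pattern — the function name and the injectable clock/sleep parameters are mine for testability, not the e2e framework's API:

```python
import time

def wait_for_terminal_phase(get_phase, timeout_s=300.0, interval_s=2.0,
                            now=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it reports Succeeded or Failed, or time runs out.

    Mirrors the pattern in the log: check the phase, and if it is still
    non-terminal (e.g. Pending), sleep for the poll interval and retry
    until the overall timeout expires.
    """
    start = now()
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        if now() - start >= timeout_s:
            raise TimeoutError(f"pod still {phase!r} after {timeout_s}s")
        sleep(interval_s)
```

With a stubbed phase source it behaves as the log shows: a few Pending readings followed by Succeeded ends the wait.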
May 19 11:07:58.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:07:58.526: INFO: namespace: e2e-tests-secrets-5z425, resource: bindings, ignored listing per whitelist
May 19 11:07:58.549: INFO: namespace e2e-tests-secrets-5z425 deletion completed in 8.083661687s
• [SLOW TEST:14.310 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:07:58.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-5c52r
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-5c52r
STEP: Waiting until all stateful set ss
replicas will be running in namespace e2e-tests-statefulset-5c52r May 19 11:07:58.666: INFO: Found 0 stateful pods, waiting for 1 May 19 11:08:08.671: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 19 11:08:08.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c52r ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 19 11:08:08.922: INFO: stderr: "I0519 11:08:08.791253 267 log.go:172] (0xc00074a420) (0xc0006eb400) Create stream\nI0519 11:08:08.791307 267 log.go:172] (0xc00074a420) (0xc0006eb400) Stream added, broadcasting: 1\nI0519 11:08:08.793800 267 log.go:172] (0xc00074a420) Reply frame received for 1\nI0519 11:08:08.793827 267 log.go:172] (0xc00074a420) (0xc0006eb4a0) Create stream\nI0519 11:08:08.793834 267 log.go:172] (0xc00074a420) (0xc0006eb4a0) Stream added, broadcasting: 3\nI0519 11:08:08.794651 267 log.go:172] (0xc00074a420) Reply frame received for 3\nI0519 11:08:08.794682 267 log.go:172] (0xc00074a420) (0xc0006dc000) Create stream\nI0519 11:08:08.794699 267 log.go:172] (0xc00074a420) (0xc0006dc000) Stream added, broadcasting: 5\nI0519 11:08:08.795547 267 log.go:172] (0xc00074a420) Reply frame received for 5\nI0519 11:08:08.914292 267 log.go:172] (0xc00074a420) Data frame received for 3\nI0519 11:08:08.914352 267 log.go:172] (0xc0006eb4a0) (3) Data frame handling\nI0519 11:08:08.914390 267 log.go:172] (0xc0006eb4a0) (3) Data frame sent\nI0519 11:08:08.914413 267 log.go:172] (0xc00074a420) Data frame received for 3\nI0519 11:08:08.914430 267 log.go:172] (0xc0006eb4a0) (3) Data frame handling\nI0519 11:08:08.914659 267 log.go:172] (0xc00074a420) Data frame received for 5\nI0519 11:08:08.914690 267 log.go:172] (0xc0006dc000) (5) Data frame handling\nI0519 11:08:08.916711 267 log.go:172] (0xc00074a420) Data frame received for 1\nI0519 
11:08:08.916756 267 log.go:172] (0xc0006eb400) (1) Data frame handling\nI0519 11:08:08.916772 267 log.go:172] (0xc0006eb400) (1) Data frame sent\nI0519 11:08:08.916789 267 log.go:172] (0xc00074a420) (0xc0006eb400) Stream removed, broadcasting: 1\nI0519 11:08:08.916813 267 log.go:172] (0xc00074a420) Go away received\nI0519 11:08:08.917477 267 log.go:172] (0xc00074a420) (0xc0006eb400) Stream removed, broadcasting: 1\nI0519 11:08:08.917505 267 log.go:172] (0xc00074a420) (0xc0006eb4a0) Stream removed, broadcasting: 3\nI0519 11:08:08.917519 267 log.go:172] (0xc00074a420) (0xc0006dc000) Stream removed, broadcasting: 5\n"
May 19 11:08:08.922: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 19 11:08:08.922: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 19 11:08:08.926: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
May 19 11:08:18.931: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 19 11:08:18.931: INFO: Waiting for statefulset status.replicas updated to 0
May 19 11:08:18.948: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
May 19 11:08:18.948: INFO: ss-0  hunter-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:07:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:07:58 +0000 UTC }]
May 19 11:08:18.948: INFO:
May 19 11:08:18.948: INFO: StatefulSet ss has not reached scale 3, at 1
May 19 11:08:19.952: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.997198652s
May 19 11:08:20.956: INFO: Verifying statefulset ss doesn't scale past 3
for another 7.993517536s May 19 11:08:21.960: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.98932704s May 19 11:08:22.964: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.985199442s May 19 11:08:23.973: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.981368724s May 19 11:08:24.977: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.972496735s May 19 11:08:26.059: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.968147778s May 19 11:08:27.064: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.886424296s May 19 11:08:28.130: INFO: Verifying statefulset ss doesn't scale past 3 for another 880.885252ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-5c52r May 19 11:08:29.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c52r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 19 11:08:29.335: INFO: stderr: "I0519 11:08:29.270558 290 log.go:172] (0xc00013a580) (0xc00070a5a0) Create stream\nI0519 11:08:29.270630 290 log.go:172] (0xc00013a580) (0xc00070a5a0) Stream added, broadcasting: 1\nI0519 11:08:29.273691 290 log.go:172] (0xc00013a580) Reply frame received for 1\nI0519 11:08:29.273738 290 log.go:172] (0xc00013a580) (0xc0001a6d20) Create stream\nI0519 11:08:29.273752 290 log.go:172] (0xc00013a580) (0xc0001a6d20) Stream added, broadcasting: 3\nI0519 11:08:29.274919 290 log.go:172] (0xc00013a580) Reply frame received for 3\nI0519 11:08:29.274996 290 log.go:172] (0xc00013a580) (0xc00072a000) Create stream\nI0519 11:08:29.275017 290 log.go:172] (0xc00013a580) (0xc00072a000) Stream added, broadcasting: 5\nI0519 11:08:29.276164 290 log.go:172] (0xc00013a580) Reply frame received for 5\nI0519 11:08:29.327929 290 log.go:172] (0xc00013a580) Data frame received for 3\nI0519 11:08:29.327971 290 log.go:172] 
(0xc0001a6d20) (3) Data frame handling\nI0519 11:08:29.328017 290 log.go:172] (0xc0001a6d20) (3) Data frame sent\nI0519 11:08:29.328037 290 log.go:172] (0xc00013a580) Data frame received for 3\nI0519 11:08:29.328051 290 log.go:172] (0xc0001a6d20) (3) Data frame handling\nI0519 11:08:29.328225 290 log.go:172] (0xc00013a580) Data frame received for 5\nI0519 11:08:29.328246 290 log.go:172] (0xc00072a000) (5) Data frame handling\nI0519 11:08:29.329702 290 log.go:172] (0xc00013a580) Data frame received for 1\nI0519 11:08:29.329724 290 log.go:172] (0xc00070a5a0) (1) Data frame handling\nI0519 11:08:29.329738 290 log.go:172] (0xc00070a5a0) (1) Data frame sent\nI0519 11:08:29.329747 290 log.go:172] (0xc00013a580) (0xc00070a5a0) Stream removed, broadcasting: 1\nI0519 11:08:29.329868 290 log.go:172] (0xc00013a580) (0xc00070a5a0) Stream removed, broadcasting: 1\nI0519 11:08:29.329883 290 log.go:172] (0xc00013a580) (0xc0001a6d20) Stream removed, broadcasting: 3\nI0519 11:08:29.329891 290 log.go:172] (0xc00013a580) (0xc00072a000) Stream removed, broadcasting: 5\n" May 19 11:08:29.335: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 19 11:08:29.335: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 19 11:08:29.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c52r ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 19 11:08:29.553: INFO: stderr: "I0519 11:08:29.471344 312 log.go:172] (0xc0002042c0) (0xc0002d8780) Create stream\nI0519 11:08:29.471404 312 log.go:172] (0xc0002042c0) (0xc0002d8780) Stream added, broadcasting: 1\nI0519 11:08:29.473674 312 log.go:172] (0xc0002042c0) Reply frame received for 1\nI0519 11:08:29.474144 312 log.go:172] (0xc0002042c0) (0xc0008ce000) Create stream\nI0519 11:08:29.474172 312 log.go:172] (0xc0002042c0) (0xc0008ce000) Stream added, 
broadcasting: 3\nI0519 11:08:29.475592 312 log.go:172] (0xc0002042c0) Reply frame received for 3\nI0519 11:08:29.475637 312 log.go:172] (0xc0002042c0) (0xc0008ce0a0) Create stream\nI0519 11:08:29.475652 312 log.go:172] (0xc0002042c0) (0xc0008ce0a0) Stream added, broadcasting: 5\nI0519 11:08:29.476791 312 log.go:172] (0xc0002042c0) Reply frame received for 5\nI0519 11:08:29.547207 312 log.go:172] (0xc0002042c0) Data frame received for 5\nI0519 11:08:29.547243 312 log.go:172] (0xc0008ce0a0) (5) Data frame handling\nI0519 11:08:29.547252 312 log.go:172] (0xc0008ce0a0) (5) Data frame sent\nI0519 11:08:29.547256 312 log.go:172] (0xc0002042c0) Data frame received for 5\nI0519 11:08:29.547261 312 log.go:172] (0xc0008ce0a0) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0519 11:08:29.547290 312 log.go:172] (0xc0002042c0) Data frame received for 3\nI0519 11:08:29.547298 312 log.go:172] (0xc0008ce000) (3) Data frame handling\nI0519 11:08:29.547305 312 log.go:172] (0xc0008ce000) (3) Data frame sent\nI0519 11:08:29.547312 312 log.go:172] (0xc0002042c0) Data frame received for 3\nI0519 11:08:29.547316 312 log.go:172] (0xc0008ce000) (3) Data frame handling\nI0519 11:08:29.548762 312 log.go:172] (0xc0002042c0) Data frame received for 1\nI0519 11:08:29.548784 312 log.go:172] (0xc0002d8780) (1) Data frame handling\nI0519 11:08:29.548798 312 log.go:172] (0xc0002d8780) (1) Data frame sent\nI0519 11:08:29.548814 312 log.go:172] (0xc0002042c0) (0xc0002d8780) Stream removed, broadcasting: 1\nI0519 11:08:29.548993 312 log.go:172] (0xc0002042c0) (0xc0002d8780) Stream removed, broadcasting: 1\nI0519 11:08:29.549013 312 log.go:172] (0xc0002042c0) (0xc0008ce000) Stream removed, broadcasting: 3\nI0519 11:08:29.549026 312 log.go:172] (0xc0002042c0) (0xc0008ce0a0) Stream removed, broadcasting: 5\n" May 19 11:08:29.553: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 19 11:08:29.553: INFO: stdout of mv -v /tmp/index.html 
/usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 19 11:08:29.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c52r ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 19 11:08:29.730: INFO: stderr: "I0519 11:08:29.667462 335 log.go:172] (0xc000138790) (0xc000583540) Create stream\nI0519 11:08:29.667511 335 log.go:172] (0xc000138790) (0xc000583540) Stream added, broadcasting: 1\nI0519 11:08:29.669763 335 log.go:172] (0xc000138790) Reply frame received for 1\nI0519 11:08:29.669797 335 log.go:172] (0xc000138790) (0xc00082e000) Create stream\nI0519 11:08:29.669817 335 log.go:172] (0xc000138790) (0xc00082e000) Stream added, broadcasting: 3\nI0519 11:08:29.670696 335 log.go:172] (0xc000138790) Reply frame received for 3\nI0519 11:08:29.670725 335 log.go:172] (0xc000138790) (0xc0005835e0) Create stream\nI0519 11:08:29.670734 335 log.go:172] (0xc000138790) (0xc0005835e0) Stream added, broadcasting: 5\nI0519 11:08:29.671507 335 log.go:172] (0xc000138790) Reply frame received for 5\nI0519 11:08:29.724724 335 log.go:172] (0xc000138790) Data frame received for 3\nI0519 11:08:29.724793 335 log.go:172] (0xc000138790) Data frame received for 5\nI0519 11:08:29.724855 335 log.go:172] (0xc0005835e0) (5) Data frame handling\nI0519 11:08:29.724881 335 log.go:172] (0xc0005835e0) (5) Data frame sent\nI0519 11:08:29.724894 335 log.go:172] (0xc000138790) Data frame received for 5\nI0519 11:08:29.724907 335 log.go:172] (0xc0005835e0) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0519 11:08:29.724923 335 log.go:172] (0xc00082e000) (3) Data frame handling\nI0519 11:08:29.724949 335 log.go:172] (0xc00082e000) (3) Data frame sent\nI0519 11:08:29.724962 335 log.go:172] (0xc000138790) Data frame received for 3\nI0519 11:08:29.724971 335 log.go:172] (0xc00082e000) (3) Data frame handling\nI0519 11:08:29.726352 335 
log.go:172] (0xc000138790) Data frame received for 1\nI0519 11:08:29.726369 335 log.go:172] (0xc000583540) (1) Data frame handling\nI0519 11:08:29.726377 335 log.go:172] (0xc000583540) (1) Data frame sent\nI0519 11:08:29.726386 335 log.go:172] (0xc000138790) (0xc000583540) Stream removed, broadcasting: 1\nI0519 11:08:29.726434 335 log.go:172] (0xc000138790) Go away received\nI0519 11:08:29.726510 335 log.go:172] (0xc000138790) (0xc000583540) Stream removed, broadcasting: 1\nI0519 11:08:29.726531 335 log.go:172] (0xc000138790) (0xc00082e000) Stream removed, broadcasting: 3\nI0519 11:08:29.726543 335 log.go:172] (0xc000138790) (0xc0005835e0) Stream removed, broadcasting: 5\n" May 19 11:08:29.730: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 19 11:08:29.730: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 19 11:08:29.734: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 19 11:08:39.739: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 19 11:08:39.739: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 19 11:08:39.739: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 19 11:08:39.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c52r ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 19 11:08:39.968: INFO: stderr: "I0519 11:08:39.875531 357 log.go:172] (0xc000780160) (0xc0006f65a0) Create stream\nI0519 11:08:39.875587 357 log.go:172] (0xc000780160) (0xc0006f65a0) Stream added, broadcasting: 1\nI0519 11:08:39.877799 357 log.go:172] (0xc000780160) Reply frame received for 1\nI0519 11:08:39.877827 357 log.go:172] (0xc000780160) 
(0xc0007c8b40) Create stream\nI0519 11:08:39.877837 357 log.go:172] (0xc000780160) (0xc0007c8b40) Stream added, broadcasting: 3\nI0519 11:08:39.878434 357 log.go:172] (0xc000780160) Reply frame received for 3\nI0519 11:08:39.878472 357 log.go:172] (0xc000780160) (0xc000676000) Create stream\nI0519 11:08:39.878484 357 log.go:172] (0xc000780160) (0xc000676000) Stream added, broadcasting: 5\nI0519 11:08:39.879190 357 log.go:172] (0xc000780160) Reply frame received for 5\nI0519 11:08:39.961075 357 log.go:172] (0xc000780160) Data frame received for 3\nI0519 11:08:39.961226 357 log.go:172] (0xc0007c8b40) (3) Data frame handling\nI0519 11:08:39.961238 357 log.go:172] (0xc0007c8b40) (3) Data frame sent\nI0519 11:08:39.961245 357 log.go:172] (0xc000780160) Data frame received for 3\nI0519 11:08:39.961250 357 log.go:172] (0xc0007c8b40) (3) Data frame handling\nI0519 11:08:39.961293 357 log.go:172] (0xc000780160) Data frame received for 5\nI0519 11:08:39.961318 357 log.go:172] (0xc000676000) (5) Data frame handling\nI0519 11:08:39.963202 357 log.go:172] (0xc000780160) Data frame received for 1\nI0519 11:08:39.963238 357 log.go:172] (0xc0006f65a0) (1) Data frame handling\nI0519 11:08:39.963258 357 log.go:172] (0xc0006f65a0) (1) Data frame sent\nI0519 11:08:39.963278 357 log.go:172] (0xc000780160) (0xc0006f65a0) Stream removed, broadcasting: 1\nI0519 11:08:39.963363 357 log.go:172] (0xc000780160) Go away received\nI0519 11:08:39.963423 357 log.go:172] (0xc000780160) (0xc0006f65a0) Stream removed, broadcasting: 1\nI0519 11:08:39.963435 357 log.go:172] (0xc000780160) (0xc0007c8b40) Stream removed, broadcasting: 3\nI0519 11:08:39.963441 357 log.go:172] (0xc000780160) (0xc000676000) Stream removed, broadcasting: 5\n" May 19 11:08:39.968: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 19 11:08:39.968: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 19 
11:08:39.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c52r ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 19 11:08:40.180: INFO: stderr: "I0519 11:08:40.079008 379 log.go:172] (0xc00013a630) (0xc000716640) Create stream\nI0519 11:08:40.079076 379 log.go:172] (0xc00013a630) (0xc000716640) Stream added, broadcasting: 1\nI0519 11:08:40.081641 379 log.go:172] (0xc00013a630) Reply frame received for 1\nI0519 11:08:40.081670 379 log.go:172] (0xc00013a630) (0xc0007166e0) Create stream\nI0519 11:08:40.081678 379 log.go:172] (0xc00013a630) (0xc0007166e0) Stream added, broadcasting: 3\nI0519 11:08:40.082721 379 log.go:172] (0xc00013a630) Reply frame received for 3\nI0519 11:08:40.082749 379 log.go:172] (0xc00013a630) (0xc000606dc0) Create stream\nI0519 11:08:40.082758 379 log.go:172] (0xc00013a630) (0xc000606dc0) Stream added, broadcasting: 5\nI0519 11:08:40.083668 379 log.go:172] (0xc00013a630) Reply frame received for 5\nI0519 11:08:40.172965 379 log.go:172] (0xc00013a630) Data frame received for 5\nI0519 11:08:40.173078 379 log.go:172] (0xc000606dc0) (5) Data frame handling\nI0519 11:08:40.173309 379 log.go:172] (0xc00013a630) Data frame received for 3\nI0519 11:08:40.173341 379 log.go:172] (0xc0007166e0) (3) Data frame handling\nI0519 11:08:40.173359 379 log.go:172] (0xc0007166e0) (3) Data frame sent\nI0519 11:08:40.173380 379 log.go:172] (0xc00013a630) Data frame received for 3\nI0519 11:08:40.173395 379 log.go:172] (0xc0007166e0) (3) Data frame handling\nI0519 11:08:40.174818 379 log.go:172] (0xc00013a630) Data frame received for 1\nI0519 11:08:40.174916 379 log.go:172] (0xc000716640) (1) Data frame handling\nI0519 11:08:40.174953 379 log.go:172] (0xc000716640) (1) Data frame sent\nI0519 11:08:40.175012 379 log.go:172] (0xc00013a630) (0xc000716640) Stream removed, broadcasting: 1\nI0519 11:08:40.175036 379 log.go:172] (0xc00013a630) Go away received\nI0519 
11:08:40.175377 379 log.go:172] (0xc00013a630) (0xc000716640) Stream removed, broadcasting: 1\nI0519 11:08:40.175395 379 log.go:172] (0xc00013a630) (0xc0007166e0) Stream removed, broadcasting: 3\nI0519 11:08:40.175408 379 log.go:172] (0xc00013a630) (0xc000606dc0) Stream removed, broadcasting: 5\n" May 19 11:08:40.181: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 19 11:08:40.181: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 19 11:08:40.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c52r ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 19 11:08:40.416: INFO: stderr: "I0519 11:08:40.304083 401 log.go:172] (0xc0008502c0) (0xc0005b5360) Create stream\nI0519 11:08:40.304164 401 log.go:172] (0xc0008502c0) (0xc0005b5360) Stream added, broadcasting: 1\nI0519 11:08:40.308871 401 log.go:172] (0xc0008502c0) Reply frame received for 1\nI0519 11:08:40.308911 401 log.go:172] (0xc0008502c0) (0xc0006ba000) Create stream\nI0519 11:08:40.308922 401 log.go:172] (0xc0008502c0) (0xc0006ba000) Stream added, broadcasting: 3\nI0519 11:08:40.310479 401 log.go:172] (0xc0008502c0) Reply frame received for 3\nI0519 11:08:40.310524 401 log.go:172] (0xc0008502c0) (0xc0002f2000) Create stream\nI0519 11:08:40.310542 401 log.go:172] (0xc0008502c0) (0xc0002f2000) Stream added, broadcasting: 5\nI0519 11:08:40.311439 401 log.go:172] (0xc0008502c0) Reply frame received for 5\nI0519 11:08:40.408766 401 log.go:172] (0xc0008502c0) Data frame received for 3\nI0519 11:08:40.408804 401 log.go:172] (0xc0006ba000) (3) Data frame handling\nI0519 11:08:40.408846 401 log.go:172] (0xc0008502c0) Data frame received for 5\nI0519 11:08:40.408863 401 log.go:172] (0xc0002f2000) (5) Data frame handling\nI0519 11:08:40.408921 401 log.go:172] (0xc0006ba000) (3) Data frame sent\nI0519 11:08:40.408935 401 
log.go:172] (0xc0008502c0) Data frame received for 3\nI0519 11:08:40.408941 401 log.go:172] (0xc0006ba000) (3) Data frame handling\nI0519 11:08:40.411105 401 log.go:172] (0xc0008502c0) Data frame received for 1\nI0519 11:08:40.411141 401 log.go:172] (0xc0005b5360) (1) Data frame handling\nI0519 11:08:40.411162 401 log.go:172] (0xc0005b5360) (1) Data frame sent\nI0519 11:08:40.411181 401 log.go:172] (0xc0008502c0) (0xc0005b5360) Stream removed, broadcasting: 1\nI0519 11:08:40.411227 401 log.go:172] (0xc0008502c0) Go away received\nI0519 11:08:40.411376 401 log.go:172] (0xc0008502c0) (0xc0005b5360) Stream removed, broadcasting: 1\nI0519 11:08:40.411396 401 log.go:172] (0xc0008502c0) (0xc0006ba000) Stream removed, broadcasting: 3\nI0519 11:08:40.411406 401 log.go:172] (0xc0008502c0) (0xc0002f2000) Stream removed, broadcasting: 5\n" May 19 11:08:40.416: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 19 11:08:40.416: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 19 11:08:40.416: INFO: Waiting for statefulset status.replicas updated to 0 May 19 11:08:40.419: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 19 11:08:50.428: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 19 11:08:50.428: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 19 11:08:50.428: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 19 11:08:50.456: INFO: POD NODE PHASE GRACE CONDITIONS May 19 11:08:50.456: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:07:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 
11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:07:58 +0000 UTC }] May 19 11:08:50.456: INFO: ss-1 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:18 +0000 UTC }] May 19 11:08:50.456: INFO: ss-2 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:18 +0000 UTC }] May 19 11:08:50.456: INFO: May 19 11:08:50.456: INFO: StatefulSet ss has not reached scale 0, at 3 May 19 11:08:51.461: INFO: POD NODE PHASE GRACE CONDITIONS May 19 11:08:51.462: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:07:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:07:58 +0000 UTC }] May 19 11:08:51.462: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 
11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:18 +0000 UTC }] May 19 11:08:51.462: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:18 +0000 UTC }] May 19 11:08:51.462: INFO: May 19 11:08:51.462: INFO: StatefulSet ss has not reached scale 0, at 3 May 19 11:08:52.604: INFO: POD NODE PHASE GRACE CONDITIONS May 19 11:08:52.604: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:07:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:07:58 +0000 UTC }] May 19 11:08:52.605: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:18 +0000 UTC }] May 19 11:08:52.605: INFO: ss-2 hunter-worker 
Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:18 +0000 UTC }] May 19 11:08:52.605: INFO: May 19 11:08:52.605: INFO: StatefulSet ss has not reached scale 0, at 3 May 19 11:08:53.610: INFO: POD NODE PHASE GRACE CONDITIONS May 19 11:08:53.610: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:07:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:07:58 +0000 UTC }] May 19 11:08:53.610: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:18 +0000 UTC }] May 19 11:08:53.610: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with 
unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:18 +0000 UTC }] May 19 11:08:53.610: INFO: May 19 11:08:53.610: INFO: StatefulSet ss has not reached scale 0, at 3 May 19 11:08:54.639: INFO: POD NODE PHASE GRACE CONDITIONS May 19 11:08:54.639: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:07:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:07:58 +0000 UTC }] May 19 11:08:54.639: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:18 +0000 UTC }] May 19 11:08:54.639: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:18 +0000 UTC }] May 19 11:08:54.639: INFO: May 19 11:08:54.639: INFO: StatefulSet ss has not reached scale 0, at 3 May 19 11:08:55.643: INFO: POD NODE PHASE GRACE CONDITIONS May 19 11:08:55.644: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:07:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:07:58 +0000 UTC }] May 19 11:08:55.644: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:18 +0000 UTC }] May 19 11:08:55.644: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:18 +0000 UTC }] May 19 11:08:55.644: INFO: May 19 11:08:55.644: INFO: StatefulSet ss has not reached scale 0, at 3 May 19 11:08:56.647: INFO: POD NODE PHASE GRACE CONDITIONS May 19 11:08:56.648: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:07:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:07:58 +0000 UTC }] May 19 11:08:56.648: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:18 +0000 UTC }] May 19 11:08:56.648: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:18 +0000 UTC }] May 19 11:08:56.648: INFO: May 19 11:08:56.648: INFO: StatefulSet ss has not reached scale 0, at 3 May 19 11:08:57.652: INFO: POD NODE PHASE GRACE CONDITIONS May 19 11:08:57.652: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:07:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:07:58 +0000 UTC }] May 19 11:08:57.652: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: 
[nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:18 +0000 UTC }] May 19 11:08:57.652: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:08:18 +0000 UTC }] May 19 11:08:57.652: INFO: May 19 11:08:57.652: INFO: StatefulSet ss has not reached scale 0, at 3 [the identical POD NODE PHASE GRACE CONDITIONS table for ss-0, ss-1 and ss-2 was re-logged at 11:08:58.662 and 11:08:59.667, each followed by: StatefulSet ss has not reached scale 0, at 3] STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-5c52r May 19 11:09:00.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c52r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 19 11:09:00.792: INFO: rc: 1 May 19 11:09:00.792: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c52r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc00212af30 exit status 1 true [0xc0019142e0 0xc0019142f8 0xc001914310] [0xc0019142e0 0xc0019142f8 0xc001914310] [0xc0019142f0 0xc001914308] [0x935700 0x935700] 0xc001bab800 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 May 19 11:09:10.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c52r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 19 11:09:10.885: INFO: rc: 1 May 19 11:09:10.885: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c52r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001e72750 exit status 1 true [0xc000491200 0xc000491218 0xc000491230] [0xc000491200 0xc000491218 0xc000491230] [0xc000491210 0xc000491228] [0x935700 0x935700] 0xc001f7daa0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 19
11:09:20.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c52r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 19 11:09:20.977: INFO: rc: 1 [the same RunHostCmd retry was repeated every 10s through 11:13:54.543; every attempt returned rc: 1 with stderr: Error from server (NotFound): pods "ss-0" not found] May 19 11:14:04.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c52r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 19 11:14:04.642: INFO: rc: 1 May 19 11:14:04.642: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: May 19 11:14:04.642: INFO: Scaling
statefulset ss to 0 May 19 11:14:04.651: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 19 11:14:04.652: INFO: Deleting all statefulset in ns e2e-tests-statefulset-5c52r May 19 11:14:04.655: INFO: Scaling statefulset ss to 0 May 19 11:14:04.663: INFO: Waiting for statefulset status.replicas updated to 0 May 19 11:14:04.666: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:14:04.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-5c52r" for this suite. May 19 11:14:10.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:14:10.878: INFO: namespace: e2e-tests-statefulset-5c52r, resource: bindings, ignored listing per whitelist May 19 11:14:10.889: INFO: namespace e2e-tests-statefulset-5c52r deletion completed in 6.2040509s • [SLOW TEST:372.340 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:14:10.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-dd5b08fe-99c1-11ea-abcb-0242ac110018 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-dd5b08fe-99c1-11ea-abcb-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:15:25.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-rp9f2" for this suite. 
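The ConfigMap test above ("updates should be reflected in volume") mounts a ConfigMap as a volume, updates the ConfigMap, and then polls until the new data appears in the mounted file. A minimal sketch of that pod shape, with illustrative names and data (the framework generates its own UUID-suffixed names):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd        # illustrative name
data:
  data-1: "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps            # illustrative name
spec:
  containers:
  - name: configmap-volume-test
    image: docker.io/library/nginx:1.14-alpine
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume   # ConfigMap keys appear as files here
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-upd
```

After applying this, editing the ConfigMap's `data` eventually changes the file under /etc/configmap-volume: the kubelet refreshes mounted ConfigMaps on its periodic sync rather than instantly, which is why the test waits to observe the update instead of asserting immediately.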
May 19 11:15:47.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:15:47.454: INFO: namespace: e2e-tests-configmap-rp9f2, resource: bindings, ignored listing per whitelist
May 19 11:15:47.478: INFO: namespace e2e-tests-configmap-rp9f2 deletion completed in 22.098060623s
• [SLOW TEST:96.589 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Pods
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:15:47.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 19 11:15:52.114: INFO: Successfully updated pod "pod-update-16ec6f07-99be-11ea-abcb-0242ac110018"
STEP: verifying the updated pod is in kubernetes
May 19 11:15:52.125: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:15:52.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-7qrkz" for this suite.
May 19 11:16:14.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:16:14.222: INFO: namespace: e2e-tests-pods-7qrkz, resource: bindings, ignored listing per whitelist
May 19 11:16:14.223: INFO: namespace e2e-tests-pods-7qrkz deletion completed in 22.094336916s
• [SLOW TEST:26.745 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-node] ConfigMap
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:16:14.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-8tsf6/configmap-test-26e2d414-99c2-11ea-abcb-0242ac110018
STEP: Creating a pod to test consume configMaps
May 19 11:16:14.344: INFO: Waiting up to 5m0s for pod "pod-configmaps-26e36a9b-99c2-11ea-abcb-0242ac110018" in namespace "e2e-tests-configmap-8tsf6" to be "success or failure"
May 19 11:16:14.369: INFO: Pod "pod-configmaps-26e36a9b-99c2-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 25.173233ms
May 19 11:16:16.374: INFO: Pod "pod-configmaps-26e36a9b-99c2-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029374493s
May 19 11:16:18.378: INFO: Pod "pod-configmaps-26e36a9b-99c2-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033468975s
STEP: Saw pod success
May 19 11:16:18.378: INFO: Pod "pod-configmaps-26e36a9b-99c2-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 11:16:18.381: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-26e36a9b-99c2-11ea-abcb-0242ac110018 container env-test:
STEP: delete the pod
May 19 11:16:18.449: INFO: Waiting for pod pod-configmaps-26e36a9b-99c2-11ea-abcb-0242ac110018 to disappear
May 19 11:16:18.504: INFO: Pod pod-configmaps-26e36a9b-99c2-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:16:18.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-8tsf6" for this suite.
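The ConfigMap test above ("should be consumable via environment variable") amounts to creating a ConfigMap and a short-lived pod whose container pulls one key into its environment via `valueFrom.configMapKeyRef`, then checking the container output for the value. A minimal sketch of that pattern — the names, image, and key below are illustrative, not the exact objects this run created:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test          # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1       # env var populated from the ConfigMap key
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
```

The "success or failure" polling in the log corresponds to waiting for such a pod to reach `Succeeded`, after which its logs are checked for the expected value.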
May 19 11:16:24.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:16:24.534: INFO: namespace: e2e-tests-configmap-8tsf6, resource: bindings, ignored listing per whitelist
May 19 11:16:24.596: INFO: namespace e2e-tests-configmap-8tsf6 deletion completed in 6.087268718s
• [SLOW TEST:10.373 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:16:24.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 19 11:16:24.726: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2d13bce9-99c2-11ea-abcb-0242ac110018" in namespace "e2e-tests-projected-x4bl4" to be "success or failure"
May 19 11:16:24.751: INFO: Pod "downwardapi-volume-2d13bce9-99c2-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 25.482596ms
May 19 11:16:26.756: INFO: Pod "downwardapi-volume-2d13bce9-99c2-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02986101s
May 19 11:16:28.760: INFO: Pod "downwardapi-volume-2d13bce9-99c2-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034009352s
STEP: Saw pod success
May 19 11:16:28.760: INFO: Pod "downwardapi-volume-2d13bce9-99c2-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 11:16:28.763: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-2d13bce9-99c2-11ea-abcb-0242ac110018 container client-container:
STEP: delete the pod
May 19 11:16:28.795: INFO: Waiting for pod downwardapi-volume-2d13bce9-99c2-11ea-abcb-0242ac110018 to disappear
May 19 11:16:28.800: INFO: Pod downwardapi-volume-2d13bce9-99c2-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:16:28.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-x4bl4" for this suite.
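The "should provide podname only" test uses a `projected` volume with a `downwardAPI` source that writes `metadata.name` to a file, which the container then reads back. A rough equivalent manifest (names and image are illustrative, not the exact pod from this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-pod
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname                  # file name inside the mount
            fieldRef:
              fieldPath: metadata.name     # resolves to the pod's own name
```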
May 19 11:16:34.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:16:34.866: INFO: namespace: e2e-tests-projected-x4bl4, resource: bindings, ignored listing per whitelist
May 19 11:16:34.894: INFO: namespace e2e-tests-projected-x4bl4 deletion completed in 6.090490389s
• [SLOW TEST:10.298 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:16:34.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-3332ab02-99c2-11ea-abcb-0242ac110018
STEP: Creating a pod to test consume secrets
May 19 11:16:35.012: INFO: Waiting up to 5m0s for pod "pod-secrets-33351a57-99c2-11ea-abcb-0242ac110018" in namespace "e2e-tests-secrets-6kglh" to be "success or failure"
May 19 11:16:35.022: INFO: Pod "pod-secrets-33351a57-99c2-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 9.62501ms
May 19 11:16:37.026: INFO: Pod "pod-secrets-33351a57-99c2-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014101876s
May 19 11:16:39.031: INFO: Pod "pod-secrets-33351a57-99c2-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018933855s
STEP: Saw pod success
May 19 11:16:39.031: INFO: Pod "pod-secrets-33351a57-99c2-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 11:16:39.034: INFO: Trying to get logs from node hunter-worker pod pod-secrets-33351a57-99c2-11ea-abcb-0242ac110018 container secret-volume-test:
STEP: delete the pod
May 19 11:16:39.051: INFO: Waiting for pod pod-secrets-33351a57-99c2-11ea-abcb-0242ac110018 to disappear
May 19 11:16:39.055: INFO: Pod pod-secrets-33351a57-99c2-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:16:39.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-6kglh" for this suite.
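"Mappings and Item Mode set" in the secrets test above refers to a secret volume whose `items` remap a key onto a new file path and pin a per-file `mode`. A sketch of that shape — the key, value, and paths are illustrative placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map
data:
  data-1: dmFsdWUtMQ==          # base64 for "value-1" (illustrative)
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:
      - key: data-1
        path: new-path-data-1   # key remapped to a new file name
        mode: 0400              # per-item file mode
```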
May 19 11:16:45.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:16:45.118: INFO: namespace: e2e-tests-secrets-6kglh, resource: bindings, ignored listing per whitelist
May 19 11:16:45.176: INFO: namespace e2e-tests-secrets-6kglh deletion completed in 6.118310039s
• [SLOW TEST:10.282 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:16:45.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
May 19 11:16:49.358: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-395da101-99c2-11ea-abcb-0242ac110018,GenerateName:,Namespace:e2e-tests-events-fz5bc,SelfLink:/api/v1/namespaces/e2e-tests-events-fz5bc/pods/send-events-395da101-99c2-11ea-abcb-0242ac110018,UID:395e3270-99c2-11ea-99e8-0242ac110002,ResourceVersion:11389281,Generation:0,CreationTimestamp:2020-05-19 11:16:45 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 336408823,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zg2z2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zg2z2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-zg2z2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001dfac00} {node.kubernetes.io/unreachable Exists NoExecute 0xc001dfac20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:16:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:16:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:16:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:16:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.170,StartTime:2020-05-19 11:16:45 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-19 11:16:48 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://517e504b8f8af48374d034227bfadefbe03aa044f01d18b9098eb53b9d855124}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
STEP: checking for scheduler event about the pod
May 19 11:16:51.362: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
May 19 11:16:53.368: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:16:53.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-fz5bc" for this suite.
May 19 11:17:31.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:17:31.487: INFO: namespace: e2e-tests-events-fz5bc, resource: bindings, ignored listing per whitelist
May 19 11:17:31.552: INFO: namespace e2e-tests-events-fz5bc deletion completed in 38.158663921s
• [SLOW TEST:46.376 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected downwardAPI
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:17:31.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 19 11:17:31.720: INFO: Waiting up to 5m0s for pod "downwardapi-volume-55024da2-99c2-11ea-abcb-0242ac110018" in namespace "e2e-tests-projected-4c95j" to be "success or failure"
May 19 11:17:31.727: INFO: Pod "downwardapi-volume-55024da2-99c2-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 7.034695ms
May 19 11:17:33.731: INFO: Pod "downwardapi-volume-55024da2-99c2-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011378714s
May 19 11:17:35.784: INFO: Pod "downwardapi-volume-55024da2-99c2-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064133193s
STEP: Saw pod success
May 19 11:17:35.784: INFO: Pod "downwardapi-volume-55024da2-99c2-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 11:17:35.788: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-55024da2-99c2-11ea-abcb-0242ac110018 container client-container:
STEP: delete the pod
May 19 11:17:35.860: INFO: Waiting for pod downwardapi-volume-55024da2-99c2-11ea-abcb-0242ac110018 to disappear
May 19 11:17:35.867: INFO: Pod downwardapi-volume-55024da2-99c2-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:17:35.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4c95j" for this suite.
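The "should set mode on item file" variant differs from the plain podname test by pinning an explicit per-item `mode` on the downward API file, so the container can verify the resulting permissions. A sketch of the relevant pod shape (names, image, and mode are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-mode-pod
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            mode: 0400          # explicit per-item file mode under test
            fieldRef:
              fieldPath: metadata.name
```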
May 19 11:17:41.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:17:41.931: INFO: namespace: e2e-tests-projected-4c95j, resource: bindings, ignored listing per whitelist
May 19 11:17:41.959: INFO: namespace e2e-tests-projected-4c95j deletion completed in 6.089172543s
• [SLOW TEST:10.407 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-node] ConfigMap
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:17:41.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-chdtw/configmap-test-5b2e4870-99c2-11ea-abcb-0242ac110018
STEP: Creating a pod to test consume configMaps
May 19 11:17:42.091: INFO: Waiting up to 5m0s for pod "pod-configmaps-5b3080bc-99c2-11ea-abcb-0242ac110018" in namespace "e2e-tests-configmap-chdtw" to be "success or failure"
May 19 11:17:42.101: INFO: Pod "pod-configmaps-5b3080bc-99c2-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 9.992446ms
May 19 11:17:44.105: INFO: Pod "pod-configmaps-5b3080bc-99c2-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014321661s
May 19 11:17:46.110: INFO: Pod "pod-configmaps-5b3080bc-99c2-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018717143s
STEP: Saw pod success
May 19 11:17:46.110: INFO: Pod "pod-configmaps-5b3080bc-99c2-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 11:17:46.113: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-5b3080bc-99c2-11ea-abcb-0242ac110018 container env-test:
STEP: delete the pod
May 19 11:17:46.132: INFO: Waiting for pod pod-configmaps-5b3080bc-99c2-11ea-abcb-0242ac110018 to disappear
May 19 11:17:46.137: INFO: Pod pod-configmaps-5b3080bc-99c2-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:17:46.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-chdtw" for this suite.
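"Consumable via the environment" (as opposed to a single environment variable) is commonly expressed with `envFrom`, which imports every key of a ConfigMap as an environment variable, optionally with a prefix. The log does not show which mechanism this test version uses, so the following is only an assumed sketch with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-envfrom
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]
    envFrom:
    - prefix: CONFIG_           # every key becomes CONFIG_<key>
      configMapRef:
        name: configmap-test    # illustrative ConfigMap name
```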
May 19 11:17:52.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:17:52.230: INFO: namespace: e2e-tests-configmap-chdtw, resource: bindings, ignored listing per whitelist
May 19 11:17:52.266: INFO: namespace e2e-tests-configmap-chdtw deletion completed in 6.127048009s
• [SLOW TEST:10.307 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:17:52.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 19 11:17:52.352: INFO: Waiting up to 5m0s for pod "pod-614e82ae-99c2-11ea-abcb-0242ac110018" in namespace "e2e-tests-emptydir-ftkqj" to be "success or failure"
May 19 11:17:52.437: INFO: Pod "pod-614e82ae-99c2-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 85.151703ms
May 19 11:17:54.442: INFO: Pod "pod-614e82ae-99c2-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090395104s
May 19 11:17:56.446: INFO: Pod "pod-614e82ae-99c2-11ea-abcb-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.093919855s
May 19 11:17:58.450: INFO: Pod "pod-614e82ae-99c2-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.098183703s
STEP: Saw pod success
May 19 11:17:58.450: INFO: Pod "pod-614e82ae-99c2-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 11:17:58.453: INFO: Trying to get logs from node hunter-worker pod pod-614e82ae-99c2-11ea-abcb-0242ac110018 container test-container:
STEP: delete the pod
May 19 11:17:58.486: INFO: Waiting for pod pod-614e82ae-99c2-11ea-abcb-0242ac110018 to disappear
May 19 11:17:58.494: INFO: Pod pod-614e82ae-99c2-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:17:58.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ftkqj" for this suite.
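The "(non-root,0777,tmpfs)" tuple encodes the test matrix for this EmptyDir case: run the container as a non-root user, exercise 0777 file permissions, and back the `emptyDir` with tmpfs (`medium: Memory`). A sketch of that shape, with an illustrative user ID, image, and command:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001             # non-root user (illustrative UID)
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory            # tmpfs-backed emptyDir
```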
May 19 11:18:04.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:18:04.563: INFO: namespace: e2e-tests-emptydir-ftkqj, resource: bindings, ignored listing per whitelist
May 19 11:18:04.583: INFO: namespace e2e-tests-emptydir-ftkqj deletion completed in 6.08537262s
• [SLOW TEST:12.316 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:18:04.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
May 19 11:18:04.668: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 19 11:18:04.688: INFO: Waiting for terminating namespaces to be deleted...
May 19 11:18:04.691: INFO: Logging pods the kubelet thinks is on node hunter-worker before test
May 19 11:18:04.696: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded)
May 19 11:18:04.696: INFO: Container kube-proxy ready: true, restart count 0
May 19 11:18:04.696: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 19 11:18:04.696: INFO: Container kindnet-cni ready: true, restart count 0
May 19 11:18:04.696: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
May 19 11:18:04.696: INFO: Container coredns ready: true, restart count 0
May 19 11:18:04.696: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test
May 19 11:18:04.701: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 19 11:18:04.701: INFO: Container kube-proxy ready: true, restart count 0
May 19 11:18:04.701: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 19 11:18:04.701: INFO: Container kindnet-cni ready: true, restart count 0
May 19 11:18:04.701: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
May 19 11:18:04.701: INFO: Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.161069e145703f86], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
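The FailedScheduling event above is provoked by a pod whose `nodeSelector` matches no label on any node. A sketch of such a pod (the label key/value and image are illustrative, not the exact ones the test submits):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    example/nonexistent: "true"   # assumed label; no node carries it
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```

The scheduler then keeps emitting a Warning event of the form `0/3 nodes are available: 3 node(s) didn't match node selector.` until the pod is deleted, which is what the test asserts.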
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:18:05.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-nn2qj" for this suite. May 19 11:18:11.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:18:11.804: INFO: namespace: e2e-tests-sched-pred-nn2qj, resource: bindings, ignored listing per whitelist May 19 11:18:11.821: INFO: namespace e2e-tests-sched-pred-nn2qj deletion completed in 6.099634173s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.239 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:18:11.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 19 11:18:11.951: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 19 11:18:16.956: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 19 11:18:16.956: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 19 11:18:18.960: INFO: Creating deployment "test-rollover-deployment" May 19 11:18:18.970: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 19 11:18:20.976: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 19 11:18:20.982: INFO: Ensure that both replica sets have 1 created replica May 19 11:18:20.988: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 19 11:18:20.993: INFO: Updating deployment test-rollover-deployment May 19 11:18:20.993: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 19 11:18:23.022: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 19 11:18:23.028: INFO: Make sure deployment "test-rollover-deployment" is complete May 19 11:18:23.034: INFO: all replica sets need to contain the pod-template-hash label May 19 11:18:23.034: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725483899, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725483899, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63725483901, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725483898, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 19 11:18:25.040: INFO: all replica sets need to contain the pod-template-hash label
May 19 11:18:25.040: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725483899, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725483899, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725483904, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725483898, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 19 11:18:27.042: INFO: all replica sets need to contain the pod-template-hash label
May 19 11:18:27.042: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725483899, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725483899, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725483904, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725483898, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 19 11:18:29.042: INFO: all replica sets need to contain the pod-template-hash label
May 19 11:18:29.042: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725483899, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725483899, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725483904, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725483898, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 19 11:18:31.040: INFO: all replica sets need to contain the pod-template-hash label
May 19 11:18:31.041: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725483899, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725483899, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725483904, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725483898, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 19 11:18:33.042: INFO: all replica sets need to contain the pod-template-hash label
May 19 11:18:33.042: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725483899, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725483899, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725483904, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725483898, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 19 11:18:35.040: INFO:
May 19 11:18:35.040: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
May 19 11:18:35.047: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-jnxmc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jnxmc/deployments/test-rollover-deployment,UID:712bd891-99c2-11ea-99e8-0242ac110002,ResourceVersion:11389658,Generation:2,CreationTimestamp:2020-05-19 11:18:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-19 11:18:19 +0000 UTC 2020-05-19 11:18:19 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-19 11:18:34 +0000 UTC 2020-05-19 11:18:18 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
May 19 11:18:35.050: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-jnxmc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jnxmc/replicasets/test-rollover-deployment-5b8479fdb6,UID:726204d3-99c2-11ea-99e8-0242ac110002,ResourceVersion:11389649,Generation:2,CreationTimestamp:2020-05-19 11:18:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name:
rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 712bd891-99c2-11ea-99e8-0242ac110002 0xc002213fa7 0xc002213fa8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
May 19 11:18:35.050: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
May 19 11:18:35.050: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-jnxmc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jnxmc/replicasets/test-rollover-controller,UID:6cfb381b-99c2-11ea-99e8-0242ac110002,ResourceVersion:11389657,Generation:2,CreationTimestamp:2020-05-19 11:18:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 712bd891-99c2-11ea-99e8-0242ac110002 0xc002213d7f 0xc002213d90}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod:
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
May 19 11:18:35.050: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-jnxmc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jnxmc/replicasets/test-rollover-deployment-58494b7559,UID:712e5f97-99c2-11ea-99e8-0242ac110002,ResourceVersion:11389612,Generation:2,CreationTimestamp:2020-05-19 11:18:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 712bd891-99c2-11ea-99e8-0242ac110002 0xc002213ed7 0xc002213ed8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
May 19 11:18:35.053: INFO: Pod "test-rollover-deployment-5b8479fdb6-ttqfs" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-ttqfs,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-jnxmc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-jnxmc/pods/test-rollover-deployment-5b8479fdb6-ttqfs,UID:727317c3-99c2-11ea-99e8-0242ac110002,ResourceVersion:11389627,Generation:0,CreationTimestamp:2020-05-19 11:18:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 726204d3-99c2-11ea-99e8-0242ac110002 0xc001fdb237 0xc001fdb238}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4kftn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4kftn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-4kftn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001fdb2b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001fdb2d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:18:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:18:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:18:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:18:21 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.145,StartTime:2020-05-19 11:18:21 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-19 11:18:24 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://cc767530e6b33182b2cbae07fd315d5640947a426b567a5a06db79439c9a365e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:18:35.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-jnxmc" for this suite.
May 19 11:18:43.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:18:43.088: INFO: namespace: e2e-tests-deployment-jnxmc, resource: bindings, ignored listing per whitelist
May 19 11:18:43.149: INFO: namespace e2e-tests-deployment-jnxmc deletion completed in 8.092948975s
• [SLOW TEST:31.328 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:18:43.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:19:43.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-szwgk" for this suite.
May 19 11:20:05.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:20:05.419: INFO: namespace: e2e-tests-container-probe-szwgk, resource: bindings, ignored listing per whitelist
May 19 11:20:05.453: INFO: namespace e2e-tests-container-probe-szwgk deletion completed in 22.156755616s
• [SLOW TEST:82.303 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:20:05.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:20:11.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-tsz4v" for this suite.
May 19 11:20:51.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:20:51.747: INFO: namespace: e2e-tests-kubelet-test-tsz4v, resource: bindings, ignored listing per whitelist
May 19 11:20:51.747: INFO: namespace e2e-tests-kubelet-test-tsz4v deletion completed in 40.157637636s
• [SLOW TEST:46.293 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when scheduling a busybox Pod with hostAliases
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
should write entries to /etc/hosts [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:20:51.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
May 19 11:20:56.461: INFO: Pod pod-hostip-cc64c40a-99c2-11ea-abcb-0242ac110018 has hostIP: 172.17.0.4
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:20:56.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-wnp4f" for this suite.
May 19 11:21:18.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:21:18.570: INFO: namespace: e2e-tests-pods-wnp4f, resource: bindings, ignored listing per whitelist
May 19 11:21:18.590: INFO: namespace e2e-tests-pods-wnp4f deletion completed in 22.124901095s
• [SLOW TEST:26.844 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:21:18.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
May 19 11:21:18.718: INFO: Waiting up to 5m0s for pod "pod-dc4d8adc-99c2-11ea-abcb-0242ac110018" in namespace "e2e-tests-emptydir-th6nr" to be "success or failure"
May 19 11:21:18.722: INFO: Pod "pod-dc4d8adc-99c2-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.397686ms
May 19 11:21:20.725: INFO: Pod "pod-dc4d8adc-99c2-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006751578s
May 19 11:21:22.729: INFO: Pod "pod-dc4d8adc-99c2-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011218105s
STEP: Saw pod success
May 19 11:21:22.729: INFO: Pod "pod-dc4d8adc-99c2-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 11:21:22.732: INFO: Trying to get logs from node hunter-worker pod pod-dc4d8adc-99c2-11ea-abcb-0242ac110018 container test-container:
STEP: delete the pod
May 19 11:21:22.767: INFO: Waiting for pod pod-dc4d8adc-99c2-11ea-abcb-0242ac110018 to disappear
May 19 11:21:22.775: INFO: Pod pod-dc4d8adc-99c2-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:21:22.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-th6nr" for this suite.
May 19 11:21:28.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:21:28.872: INFO: namespace: e2e-tests-emptydir-th6nr, resource: bindings, ignored listing per whitelist
May 19 11:21:28.879: INFO: namespace e2e-tests-emptydir-th6nr deletion completed in 6.099755506s
• [SLOW TEST:10.288 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:21:28.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-e271f098-99c2-11ea-abcb-0242ac110018
STEP: Creating a pod to test consume configMaps
May 19 11:21:29.016: INFO: Waiting up to 5m0s for pod "pod-configmaps-e27276d7-99c2-11ea-abcb-0242ac110018" in namespace "e2e-tests-configmap-p4hhs" to be "success or failure"
May 19 11:21:29.057: INFO: Pod "pod-configmaps-e27276d7-99c2-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 41.025925ms
May 19 11:21:31.062: INFO: Pod "pod-configmaps-e27276d7-99c2-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045531787s
May 19 11:21:33.066: INFO: Pod "pod-configmaps-e27276d7-99c2-11ea-abcb-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.049733863s
May 19 11:21:35.070: INFO: Pod "pod-configmaps-e27276d7-99c2-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.054025946s
STEP: Saw pod success
May 19 11:21:35.070: INFO: Pod "pod-configmaps-e27276d7-99c2-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 11:21:35.073: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-e27276d7-99c2-11ea-abcb-0242ac110018 container configmap-volume-test:
STEP: delete the pod
May 19 11:21:35.110: INFO: Waiting for pod pod-configmaps-e27276d7-99c2-11ea-abcb-0242ac110018 to disappear
May 19 11:21:35.122: INFO: Pod pod-configmaps-e27276d7-99c2-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:21:35.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-p4hhs" for this suite.
May 19 11:21:41.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:21:41.205: INFO: namespace: e2e-tests-configmap-p4hhs, resource: bindings, ignored listing per whitelist
May 19 11:21:41.214: INFO: namespace e2e-tests-configmap-p4hhs deletion completed in 6.087411902s
• [SLOW TEST:12.335 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:21:41.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
May 19 11:21:41.317: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix465713886/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:21:41.407: INFO:
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-mqk5k" for this suite. May 19 11:21:47.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:21:47.466: INFO: namespace: e2e-tests-kubectl-mqk5k, resource: bindings, ignored listing per whitelist May 19 11:21:47.512: INFO: namespace e2e-tests-kubectl-mqk5k deletion completed in 6.083553348s • [SLOW TEST:6.298 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:21:47.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 19 11:21:47.665: INFO: Waiting up to 5m0s for pod "downward-api-ed8ba0a2-99c2-11ea-abcb-0242ac110018" in namespace "e2e-tests-downward-api-c8z29" to be "success or failure" May 19 11:21:47.674: INFO: Pod "downward-api-ed8ba0a2-99c2-11ea-abcb-0242ac110018": 
Phase="Pending", Reason="", readiness=false. Elapsed: 8.711479ms May 19 11:21:49.678: INFO: Pod "downward-api-ed8ba0a2-99c2-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012469343s May 19 11:21:51.682: INFO: Pod "downward-api-ed8ba0a2-99c2-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016600201s STEP: Saw pod success May 19 11:21:51.682: INFO: Pod "downward-api-ed8ba0a2-99c2-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 11:21:51.684: INFO: Trying to get logs from node hunter-worker pod downward-api-ed8ba0a2-99c2-11ea-abcb-0242ac110018 container dapi-container: STEP: delete the pod May 19 11:21:51.712: INFO: Waiting for pod downward-api-ed8ba0a2-99c2-11ea-abcb-0242ac110018 to disappear May 19 11:21:51.740: INFO: Pod downward-api-ed8ba0a2-99c2-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:21:51.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-c8z29" for this suite. 
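The Downward API pod exercised above injects the node's IP into the container environment via a `fieldRef`. A minimal equivalent manifest (names are illustrative, not the test's generated spec) would look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example   # the test generates a unique suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "printenv HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # resolved by the kubelet at pod start
```

The "success or failure" condition polled in the log corresponds to this pod reaching `Phase=Succeeded`, after which the framework fetches the container log to verify the value.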
May 19 11:21:57.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:21:57.792: INFO: namespace: e2e-tests-downward-api-c8z29, resource: bindings, ignored listing per whitelist May 19 11:21:57.830: INFO: namespace e2e-tests-downward-api-c8z29 deletion completed in 6.085112121s • [SLOW TEST:10.317 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:21:57.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium May 19 11:21:57.985: INFO: Waiting up to 5m0s for pod "pod-f3aebff5-99c2-11ea-abcb-0242ac110018" in namespace "e2e-tests-emptydir-knrbd" to be "success or failure" May 19 11:21:57.988: INFO: Pod "pod-f3aebff5-99c2-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.826732ms May 19 11:21:59.992: INFO: Pod "pod-f3aebff5-99c2-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006559515s May 19 11:22:02.159: INFO: Pod "pod-f3aebff5-99c2-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.173669863s STEP: Saw pod success May 19 11:22:02.159: INFO: Pod "pod-f3aebff5-99c2-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 11:22:02.162: INFO: Trying to get logs from node hunter-worker2 pod pod-f3aebff5-99c2-11ea-abcb-0242ac110018 container test-container: STEP: delete the pod May 19 11:22:02.185: INFO: Waiting for pod pod-f3aebff5-99c2-11ea-abcb-0242ac110018 to disappear May 19 11:22:02.188: INFO: Pod pod-f3aebff5-99c2-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:22:02.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-knrbd" for this suite. May 19 11:22:08.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:22:08.270: INFO: namespace: e2e-tests-emptydir-knrbd, resource: bindings, ignored listing per whitelist May 19 11:22:08.277: INFO: namespace e2e-tests-emptydir-knrbd deletion completed in 6.085453417s • [SLOW TEST:10.446 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 
11:22:08.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-f9f057fb-99c2-11ea-abcb-0242ac110018 STEP: Creating a pod to test consume secrets May 19 11:22:08.443: INFO: Waiting up to 5m0s for pod "pod-secrets-f9f286c3-99c2-11ea-abcb-0242ac110018" in namespace "e2e-tests-secrets-fxf7w" to be "success or failure" May 19 11:22:08.446: INFO: Pod "pod-secrets-f9f286c3-99c2-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.4631ms May 19 11:22:10.450: INFO: Pod "pod-secrets-f9f286c3-99c2-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007287668s May 19 11:22:12.455: INFO: Pod "pod-secrets-f9f286c3-99c2-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011624675s STEP: Saw pod success May 19 11:22:12.455: INFO: Pod "pod-secrets-f9f286c3-99c2-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 11:22:12.457: INFO: Trying to get logs from node hunter-worker pod pod-secrets-f9f286c3-99c2-11ea-abcb-0242ac110018 container secret-volume-test: STEP: delete the pod May 19 11:22:12.472: INFO: Waiting for pod pod-secrets-f9f286c3-99c2-11ea-abcb-0242ac110018 to disappear May 19 11:22:12.494: INFO: Pod pod-secrets-f9f286c3-99c2-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:22:12.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-fxf7w" for this suite. 
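The secret-volume test above mounts a Secret into the pod filesystem and reads it back. A sketch of the two objects involved (data value and paths are assumptions for illustration):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test            # the test appends a UID to this name
data:
  data-1: dmFsdWUtMQ==         # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
```

Each key in the Secret's `data` appears as a file under the mount path; the test pod cats the file and exits, which is why the log shows the pod passing through Pending to Succeeded.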
May 19 11:22:18.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:22:18.600: INFO: namespace: e2e-tests-secrets-fxf7w, resource: bindings, ignored listing per whitelist May 19 11:22:18.620: INFO: namespace e2e-tests-secrets-fxf7w deletion completed in 6.122196694s • [SLOW TEST:10.343 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:22:18.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-z96rr STEP: creating a selector STEP: Creating the service pods in kubernetes May 19 11:22:18.742: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 19 11:22:54.170: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.150:8080/dial?request=hostName&protocol=udp&host=10.244.1.149&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-z96rr 
PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 11:22:54.170: INFO: >>> kubeConfig: /root/.kube/config I0519 11:22:54.206728 6 log.go:172] (0xc00244e420) (0xc001e7dcc0) Create stream I0519 11:22:54.206760 6 log.go:172] (0xc00244e420) (0xc001e7dcc0) Stream added, broadcasting: 1 I0519 11:22:54.208781 6 log.go:172] (0xc00244e420) Reply frame received for 1 I0519 11:22:54.208829 6 log.go:172] (0xc00244e420) (0xc001b8e960) Create stream I0519 11:22:54.208842 6 log.go:172] (0xc00244e420) (0xc001b8e960) Stream added, broadcasting: 3 I0519 11:22:54.209939 6 log.go:172] (0xc00244e420) Reply frame received for 3 I0519 11:22:54.209985 6 log.go:172] (0xc00244e420) (0xc001b8ea00) Create stream I0519 11:22:54.210001 6 log.go:172] (0xc00244e420) (0xc001b8ea00) Stream added, broadcasting: 5 I0519 11:22:54.210904 6 log.go:172] (0xc00244e420) Reply frame received for 5 I0519 11:22:54.315263 6 log.go:172] (0xc00244e420) Data frame received for 3 I0519 11:22:54.315298 6 log.go:172] (0xc001b8e960) (3) Data frame handling I0519 11:22:54.315321 6 log.go:172] (0xc001b8e960) (3) Data frame sent I0519 11:22:54.315539 6 log.go:172] (0xc00244e420) Data frame received for 3 I0519 11:22:54.315568 6 log.go:172] (0xc001b8e960) (3) Data frame handling I0519 11:22:54.315609 6 log.go:172] (0xc00244e420) Data frame received for 5 I0519 11:22:54.315626 6 log.go:172] (0xc001b8ea00) (5) Data frame handling I0519 11:22:54.317922 6 log.go:172] (0xc00244e420) Data frame received for 1 I0519 11:22:54.317952 6 log.go:172] (0xc001e7dcc0) (1) Data frame handling I0519 11:22:54.317993 6 log.go:172] (0xc001e7dcc0) (1) Data frame sent I0519 11:22:54.318020 6 log.go:172] (0xc00244e420) (0xc001e7dcc0) Stream removed, broadcasting: 1 I0519 11:22:54.318037 6 log.go:172] (0xc00244e420) Go away received I0519 11:22:54.318205 6 log.go:172] (0xc00244e420) (0xc001e7dcc0) Stream removed, broadcasting: 1 I0519 11:22:54.318260 6 log.go:172] 
(0xc00244e420) (0xc001b8e960) Stream removed, broadcasting: 3 I0519 11:22:54.318284 6 log.go:172] (0xc00244e420) (0xc001b8ea00) Stream removed, broadcasting: 5 May 19 11:22:54.318: INFO: Waiting for endpoints: map[] May 19 11:22:54.321: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.150:8080/dial?request=hostName&protocol=udp&host=10.244.2.178&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-z96rr PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 11:22:54.321: INFO: >>> kubeConfig: /root/.kube/config I0519 11:22:54.348759 6 log.go:172] (0xc001e902c0) (0xc001b8ef00) Create stream I0519 11:22:54.348786 6 log.go:172] (0xc001e902c0) (0xc001b8ef00) Stream added, broadcasting: 1 I0519 11:22:54.350892 6 log.go:172] (0xc001e902c0) Reply frame received for 1 I0519 11:22:54.350920 6 log.go:172] (0xc001e902c0) (0xc001b8efa0) Create stream I0519 11:22:54.350928 6 log.go:172] (0xc001e902c0) (0xc001b8efa0) Stream added, broadcasting: 3 I0519 11:22:54.351753 6 log.go:172] (0xc001e902c0) Reply frame received for 3 I0519 11:22:54.351782 6 log.go:172] (0xc001e902c0) (0xc001d323c0) Create stream I0519 11:22:54.351793 6 log.go:172] (0xc001e902c0) (0xc001d323c0) Stream added, broadcasting: 5 I0519 11:22:54.352775 6 log.go:172] (0xc001e902c0) Reply frame received for 5 I0519 11:22:54.424583 6 log.go:172] (0xc001e902c0) Data frame received for 3 I0519 11:22:54.424604 6 log.go:172] (0xc001b8efa0) (3) Data frame handling I0519 11:22:54.424616 6 log.go:172] (0xc001b8efa0) (3) Data frame sent I0519 11:22:54.425608 6 log.go:172] (0xc001e902c0) Data frame received for 3 I0519 11:22:54.425652 6 log.go:172] (0xc001b8efa0) (3) Data frame handling I0519 11:22:54.425723 6 log.go:172] (0xc001e902c0) Data frame received for 5 I0519 11:22:54.425739 6 log.go:172] (0xc001d323c0) (5) Data frame handling I0519 11:22:54.427636 6 log.go:172] (0xc001e902c0) Data frame received for 1 I0519 
11:22:54.427663 6 log.go:172] (0xc001b8ef00) (1) Data frame handling I0519 11:22:54.427695 6 log.go:172] (0xc001b8ef00) (1) Data frame sent I0519 11:22:54.427740 6 log.go:172] (0xc001e902c0) (0xc001b8ef00) Stream removed, broadcasting: 1 I0519 11:22:54.427767 6 log.go:172] (0xc001e902c0) Go away received I0519 11:22:54.427847 6 log.go:172] (0xc001e902c0) (0xc001b8ef00) Stream removed, broadcasting: 1 I0519 11:22:54.427861 6 log.go:172] (0xc001e902c0) (0xc001b8efa0) Stream removed, broadcasting: 3 I0519 11:22:54.427868 6 log.go:172] (0xc001e902c0) (0xc001d323c0) Stream removed, broadcasting: 5 May 19 11:22:54.427: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:22:54.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-z96rr" for this suite. May 19 11:23:18.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:23:18.495: INFO: namespace: e2e-tests-pod-network-test-z96rr, resource: bindings, ignored listing per whitelist May 19 11:23:18.512: INFO: namespace e2e-tests-pod-network-test-z96rr deletion completed in 24.080617019s • [SLOW TEST:59.892 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:23:18.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs May 19 11:23:18.689: INFO: Waiting up to 5m0s for pod "pod-23ceb114-99c3-11ea-abcb-0242ac110018" in namespace "e2e-tests-emptydir-g8rjp" to be "success or failure" May 19 11:23:18.700: INFO: Pod "pod-23ceb114-99c3-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.732477ms May 19 11:23:20.777: INFO: Pod "pod-23ceb114-99c3-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087746915s May 19 11:23:22.849: INFO: Pod "pod-23ceb114-99c3-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.159724857s STEP: Saw pod success May 19 11:23:22.849: INFO: Pod "pod-23ceb114-99c3-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 11:23:22.851: INFO: Trying to get logs from node hunter-worker2 pod pod-23ceb114-99c3-11ea-abcb-0242ac110018 container test-container: STEP: delete the pod May 19 11:23:22.875: INFO: Waiting for pod pod-23ceb114-99c3-11ea-abcb-0242ac110018 to disappear May 19 11:23:22.885: INFO: Pod pod-23ceb114-99c3-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:23:22.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-g8rjp" for this suite. May 19 11:23:30.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:23:31.040: INFO: namespace: e2e-tests-emptydir-g8rjp, resource: bindings, ignored listing per whitelist May 19 11:23:31.234: INFO: namespace e2e-tests-emptydir-g8rjp deletion completed in 8.345211025s • [SLOW TEST:12.721 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:23:31.234: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 19 11:23:42.986: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 19 11:23:43.096: INFO: Pod pod-with-prestop-exec-hook still exists May 19 11:23:45.097: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 19 11:23:45.101: INFO: Pod pod-with-prestop-exec-hook still exists May 19 11:23:47.097: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 19 11:23:47.101: INFO: Pod pod-with-prestop-exec-hook still exists May 19 11:23:49.097: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 19 11:23:49.100: INFO: Pod pod-with-prestop-exec-hook still exists May 19 11:23:51.096: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 19 11:23:51.100: INFO: Pod pod-with-prestop-exec-hook still exists May 19 11:23:53.096: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 19 11:23:53.101: INFO: Pod pod-with-prestop-exec-hook still exists May 19 11:23:55.097: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 19 11:23:55.101: INFO: Pod pod-with-prestop-exec-hook still exists May 19 11:23:57.097: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 19 11:23:57.101: INFO: Pod pod-with-prestop-exec-hook still exists May 19 11:23:59.097: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 19 11:23:59.101: INFO: Pod 
pod-with-prestop-exec-hook still exists May 19 11:24:01.097: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 19 11:24:01.100: INFO: Pod pod-with-prestop-exec-hook still exists May 19 11:24:03.097: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 19 11:24:03.100: INFO: Pod pod-with-prestop-exec-hook still exists May 19 11:24:05.096: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 19 11:24:05.102: INFO: Pod pod-with-prestop-exec-hook still exists May 19 11:24:07.096: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 19 11:24:07.100: INFO: Pod pod-with-prestop-exec-hook still exists May 19 11:24:09.096: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 19 11:24:09.101: INFO: Pod pod-with-prestop-exec-hook still exists May 19 11:24:11.096: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 19 11:24:11.100: INFO: Pod pod-with-prestop-exec-hook still exists May 19 11:24:13.097: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 19 11:24:13.101: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:24:13.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-sjtxz" for this suite. 
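The roughly thirty seconds of "Pod pod-with-prestop-exec-hook still exists" polling above is the expected behavior of a preStop hook: deletion is not complete until the hook's command finishes (or the grace period expires). A minimal sketch of such a pod — the real test's hook posts to a separate handler pod, so the `sleep` here is an illustrative stand-in:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: pod-with-prestop-exec-hook
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        exec:
          # runs inside the container when deletion begins;
          # the pod stays visible until this command returns
          command: ["sh", "-c", "sleep 15"]
```

After the hook completes, the kubelet sends the termination signal and the pod disappears, matching the final "no longer exists" line in the log.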
May 19 11:24:37.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:24:37.172: INFO: namespace: e2e-tests-container-lifecycle-hook-sjtxz, resource: bindings, ignored listing per whitelist May 19 11:24:37.208: INFO: namespace e2e-tests-container-lifecycle-hook-sjtxz deletion completed in 24.095184555s • [SLOW TEST:65.974 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:24:37.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-cnfq STEP: Creating a pod to test atomic-volume-subpath May 19 11:24:37.516: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-cnfq" in namespace "e2e-tests-subpath-87cb5" to 
be "success or failure" May 19 11:24:37.545: INFO: Pod "pod-subpath-test-configmap-cnfq": Phase="Pending", Reason="", readiness=false. Elapsed: 29.719096ms May 19 11:24:39.652: INFO: Pod "pod-subpath-test-configmap-cnfq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13632727s May 19 11:24:41.656: INFO: Pod "pod-subpath-test-configmap-cnfq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.140179253s May 19 11:24:43.742: INFO: Pod "pod-subpath-test-configmap-cnfq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.226958213s May 19 11:24:45.747: INFO: Pod "pod-subpath-test-configmap-cnfq": Phase="Running", Reason="", readiness=false. Elapsed: 8.231146791s May 19 11:24:47.751: INFO: Pod "pod-subpath-test-configmap-cnfq": Phase="Running", Reason="", readiness=false. Elapsed: 10.235495708s May 19 11:24:49.755: INFO: Pod "pod-subpath-test-configmap-cnfq": Phase="Running", Reason="", readiness=false. Elapsed: 12.239915717s May 19 11:24:51.760: INFO: Pod "pod-subpath-test-configmap-cnfq": Phase="Running", Reason="", readiness=false. Elapsed: 14.244661471s May 19 11:24:53.765: INFO: Pod "pod-subpath-test-configmap-cnfq": Phase="Running", Reason="", readiness=false. Elapsed: 16.249659467s May 19 11:24:55.770: INFO: Pod "pod-subpath-test-configmap-cnfq": Phase="Running", Reason="", readiness=false. Elapsed: 18.254382229s May 19 11:24:57.774: INFO: Pod "pod-subpath-test-configmap-cnfq": Phase="Running", Reason="", readiness=false. Elapsed: 20.25883083s May 19 11:24:59.778: INFO: Pod "pod-subpath-test-configmap-cnfq": Phase="Running", Reason="", readiness=false. Elapsed: 22.262601193s May 19 11:25:01.782: INFO: Pod "pod-subpath-test-configmap-cnfq": Phase="Running", Reason="", readiness=false. Elapsed: 24.266912494s May 19 11:25:03.786: INFO: Pod "pod-subpath-test-configmap-cnfq": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.270662743s STEP: Saw pod success May 19 11:25:03.786: INFO: Pod "pod-subpath-test-configmap-cnfq" satisfied condition "success or failure" May 19 11:25:03.788: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-cnfq container test-container-subpath-configmap-cnfq: STEP: delete the pod May 19 11:25:03.819: INFO: Waiting for pod pod-subpath-test-configmap-cnfq to disappear May 19 11:25:03.827: INFO: Pod pod-subpath-test-configmap-cnfq no longer exists STEP: Deleting pod pod-subpath-test-configmap-cnfq May 19 11:25:03.827: INFO: Deleting pod "pod-subpath-test-configmap-cnfq" in namespace "e2e-tests-subpath-87cb5" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:25:03.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-87cb5" for this suite. May 19 11:25:09.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:25:09.901: INFO: namespace: e2e-tests-subpath-87cb5, resource: bindings, ignored listing per whitelist May 19 11:25:09.917: INFO: namespace e2e-tests-subpath-87cb5 deletion completed in 6.084013689s • [SLOW TEST:32.709 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:25:09.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-6629ec68-99c3-11ea-abcb-0242ac110018 STEP: Creating a pod to test consume secrets May 19 11:25:10.106: INFO: Waiting up to 5m0s for pod "pod-secrets-663a3da4-99c3-11ea-abcb-0242ac110018" in namespace "e2e-tests-secrets-kpwbk" to be "success or failure" May 19 11:25:10.110: INFO: Pod "pod-secrets-663a3da4-99c3-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.914289ms May 19 11:25:12.114: INFO: Pod "pod-secrets-663a3da4-99c3-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007673646s May 19 11:25:14.117: INFO: Pod "pod-secrets-663a3da4-99c3-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011030642s STEP: Saw pod success May 19 11:25:14.117: INFO: Pod "pod-secrets-663a3da4-99c3-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 11:25:14.122: INFO: Trying to get logs from node hunter-worker pod pod-secrets-663a3da4-99c3-11ea-abcb-0242ac110018 container secret-volume-test: STEP: delete the pod May 19 11:25:14.135: INFO: Waiting for pod pod-secrets-663a3da4-99c3-11ea-abcb-0242ac110018 to disappear May 19 11:25:14.154: INFO: Pod pod-secrets-663a3da4-99c3-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:25:14.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-kpwbk" for this suite. May 19 11:25:20.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:25:20.225: INFO: namespace: e2e-tests-secrets-kpwbk, resource: bindings, ignored listing per whitelist May 19 11:25:20.269: INFO: namespace e2e-tests-secrets-kpwbk deletion completed in 6.111534302s STEP: Destroying namespace "e2e-tests-secret-namespace-wnqfw" for this suite. 
May 19 11:25:26.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:25:26.344: INFO: namespace: e2e-tests-secret-namespace-wnqfw, resource: bindings, ignored listing per whitelist May 19 11:25:26.362: INFO: namespace e2e-tests-secret-namespace-wnqfw deletion completed in 6.092802778s • [SLOW TEST:16.444 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:25:26.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:25:30.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-p5zcj" for this suite. 
May 19 11:26:12.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:26:12.557: INFO: namespace: e2e-tests-kubelet-test-p5zcj, resource: bindings, ignored listing per whitelist May 19 11:26:12.576: INFO: namespace e2e-tests-kubelet-test-p5zcj deletion completed in 42.093203515s • [SLOW TEST:46.214 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:26:12.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC May 19 11:26:12.687: INFO: namespace e2e-tests-kubectl-tkvqc May 19 11:26:12.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tkvqc' May 19 11:26:15.450: INFO: stderr: "" May 19 11:26:15.450: INFO: stdout: 
"replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 19 11:26:16.455: INFO: Selector matched 1 pods for map[app:redis] May 19 11:26:16.455: INFO: Found 0 / 1 May 19 11:26:17.500: INFO: Selector matched 1 pods for map[app:redis] May 19 11:26:17.500: INFO: Found 0 / 1 May 19 11:26:18.455: INFO: Selector matched 1 pods for map[app:redis] May 19 11:26:18.455: INFO: Found 0 / 1 May 19 11:26:19.455: INFO: Selector matched 1 pods for map[app:redis] May 19 11:26:19.455: INFO: Found 1 / 1 May 19 11:26:19.455: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 19 11:26:19.460: INFO: Selector matched 1 pods for map[app:redis] May 19 11:26:19.460: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 19 11:26:19.460: INFO: wait on redis-master startup in e2e-tests-kubectl-tkvqc May 19 11:26:19.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-j7c25 redis-master --namespace=e2e-tests-kubectl-tkvqc' May 19 11:26:19.587: INFO: stderr: "" May 19 11:26:19.587: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 19 May 11:26:18.510 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 19 May 11:26:18.510 # Server started, Redis version 3.2.12\n1:M 19 May 11:26:18.510 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 19 May 11:26:18.510 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC May 19 11:26:19.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-tkvqc' May 19 11:26:19.769: INFO: stderr: "" May 19 11:26:19.769: INFO: stdout: "service/rm2 exposed\n" May 19 11:26:19.781: INFO: Service rm2 in namespace e2e-tests-kubectl-tkvqc found. STEP: exposing service May 19 11:26:21.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-tkvqc' May 19 11:26:21.947: INFO: stderr: "" May 19 11:26:21.947: INFO: stdout: "service/rm3 exposed\n" May 19 11:26:21.956: INFO: Service rm3 in namespace e2e-tests-kubectl-tkvqc found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:26:23.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-tkvqc" for this suite. 
May 19 11:26:45.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:26:46.027: INFO: namespace: e2e-tests-kubectl-tkvqc, resource: bindings, ignored listing per whitelist May 19 11:26:46.051: INFO: namespace e2e-tests-kubectl-tkvqc deletion completed in 22.084588763s • [SLOW TEST:33.474 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:26:46.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 19 11:26:50.723: INFO: Successfully updated pod "pod-update-activedeadlineseconds-9f7c6b3c-99c3-11ea-abcb-0242ac110018" May 19 11:26:50.723: INFO: Waiting up to 5m0s for pod 
"pod-update-activedeadlineseconds-9f7c6b3c-99c3-11ea-abcb-0242ac110018" in namespace "e2e-tests-pods-8xxfs" to be "terminated due to deadline exceeded" May 19 11:26:50.744: INFO: Pod "pod-update-activedeadlineseconds-9f7c6b3c-99c3-11ea-abcb-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 20.868923ms May 19 11:26:52.747: INFO: Pod "pod-update-activedeadlineseconds-9f7c6b3c-99c3-11ea-abcb-0242ac110018": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.024708718s May 19 11:26:52.747: INFO: Pod "pod-update-activedeadlineseconds-9f7c6b3c-99c3-11ea-abcb-0242ac110018" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:26:52.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-8xxfs" for this suite. May 19 11:26:58.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:26:58.778: INFO: namespace: e2e-tests-pods-8xxfs, resource: bindings, ignored listing per whitelist May 19 11:26:58.841: INFO: namespace e2e-tests-pods-8xxfs deletion completed in 6.089389983s • [SLOW TEST:12.790 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 
11:26:58.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-l6jlq STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-l6jlq to expose endpoints map[] May 19 11:26:58.987: INFO: Get endpoints failed (3.828387ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 19 11:26:59.992: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-l6jlq exposes endpoints map[] (1.008726814s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-l6jlq STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-l6jlq to expose endpoints map[pod1:[80]] May 19 11:27:04.064: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-l6jlq exposes endpoints map[pod1:[80]] (4.064115803s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-l6jlq STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-l6jlq to expose endpoints map[pod1:[80] pod2:[80]] May 19 11:27:08.153: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-l6jlq exposes endpoints map[pod1:[80] pod2:[80]] (4.084889647s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-l6jlq STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-l6jlq to expose endpoints map[pod2:[80]] May 19 11:27:09.178: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-l6jlq 
exposes endpoints map[pod2:[80]] (1.018795139s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-l6jlq STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-l6jlq to expose endpoints map[] May 19 11:27:10.521: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-l6jlq exposes endpoints map[] (1.338558305s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:27:10.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-l6jlq" for this suite. May 19 11:27:34.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:27:34.720: INFO: namespace: e2e-tests-services-l6jlq, resource: bindings, ignored listing per whitelist May 19 11:27:34.794: INFO: namespace e2e-tests-services-l6jlq deletion completed in 24.175259504s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:35.953 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:27:34.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-cbp9d STEP: creating a selector STEP: Creating the service pods in kubernetes May 19 11:27:34.919: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 19 11:28:03.035: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.183:8080/dial?request=hostName&protocol=http&host=10.244.2.182&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-cbp9d PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 11:28:03.035: INFO: >>> kubeConfig: /root/.kube/config I0519 11:28:03.071429 6 log.go:172] (0xc0012a2370) (0xc000a8f720) Create stream I0519 11:28:03.071458 6 log.go:172] (0xc0012a2370) (0xc000a8f720) Stream added, broadcasting: 1 I0519 11:28:03.073437 6 log.go:172] (0xc0012a2370) Reply frame received for 1 I0519 11:28:03.073494 6 log.go:172] (0xc0012a2370) (0xc00142f540) Create stream I0519 11:28:03.073531 6 log.go:172] (0xc0012a2370) (0xc00142f540) Stream added, broadcasting: 3 I0519 11:28:03.074493 6 log.go:172] (0xc0012a2370) Reply frame received for 3 I0519 11:28:03.074516 6 log.go:172] (0xc0012a2370) (0xc000a8f7c0) Create stream I0519 11:28:03.074530 6 log.go:172] (0xc0012a2370) (0xc000a8f7c0) Stream added, broadcasting: 5 I0519 11:28:03.075310 6 log.go:172] (0xc0012a2370) Reply frame received for 5 I0519 11:28:03.211042 6 log.go:172] (0xc0012a2370) Data frame received for 3 I0519 11:28:03.211148 6 log.go:172] (0xc00142f540) (3) Data frame handling I0519 11:28:03.211198 6 log.go:172] (0xc00142f540) (3) Data frame sent I0519 11:28:03.211269 6 log.go:172] 
(0xc0012a2370) Data frame received for 3 I0519 11:28:03.211290 6 log.go:172] (0xc00142f540) (3) Data frame handling I0519 11:28:03.211764 6 log.go:172] (0xc0012a2370) Data frame received for 5 I0519 11:28:03.211790 6 log.go:172] (0xc000a8f7c0) (5) Data frame handling I0519 11:28:03.214003 6 log.go:172] (0xc0012a2370) Data frame received for 1 I0519 11:28:03.214037 6 log.go:172] (0xc000a8f720) (1) Data frame handling I0519 11:28:03.214069 6 log.go:172] (0xc000a8f720) (1) Data frame sent I0519 11:28:03.214090 6 log.go:172] (0xc0012a2370) (0xc000a8f720) Stream removed, broadcasting: 1 I0519 11:28:03.214116 6 log.go:172] (0xc0012a2370) Go away received I0519 11:28:03.214230 6 log.go:172] (0xc0012a2370) (0xc000a8f720) Stream removed, broadcasting: 1 I0519 11:28:03.214276 6 log.go:172] (0xc0012a2370) (0xc00142f540) Stream removed, broadcasting: 3 I0519 11:28:03.214312 6 log.go:172] (0xc0012a2370) (0xc000a8f7c0) Stream removed, broadcasting: 5 May 19 11:28:03.214: INFO: Waiting for endpoints: map[] May 19 11:28:03.221: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.183:8080/dial?request=hostName&protocol=http&host=10.244.1.158&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-cbp9d PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 11:28:03.221: INFO: >>> kubeConfig: /root/.kube/config I0519 11:28:03.250487 6 log.go:172] (0xc0012a2840) (0xc000a8fc20) Create stream I0519 11:28:03.250510 6 log.go:172] (0xc0012a2840) (0xc000a8fc20) Stream added, broadcasting: 1 I0519 11:28:03.252473 6 log.go:172] (0xc0012a2840) Reply frame received for 1 I0519 11:28:03.252506 6 log.go:172] (0xc0012a2840) (0xc000a8fcc0) Create stream I0519 11:28:03.252517 6 log.go:172] (0xc0012a2840) (0xc000a8fcc0) Stream added, broadcasting: 3 I0519 11:28:03.253575 6 log.go:172] (0xc0012a2840) Reply frame received for 3 I0519 11:28:03.253610 6 log.go:172] (0xc0012a2840) (0xc001e7c500) Create 
stream I0519 11:28:03.253622 6 log.go:172] (0xc0012a2840) (0xc001e7c500) Stream added, broadcasting: 5 I0519 11:28:03.254712 6 log.go:172] (0xc0012a2840) Reply frame received for 5 I0519 11:28:03.318728 6 log.go:172] (0xc0012a2840) Data frame received for 3 I0519 11:28:03.318777 6 log.go:172] (0xc000a8fcc0) (3) Data frame handling I0519 11:28:03.318802 6 log.go:172] (0xc000a8fcc0) (3) Data frame sent I0519 11:28:03.319115 6 log.go:172] (0xc0012a2840) Data frame received for 3 I0519 11:28:03.319138 6 log.go:172] (0xc000a8fcc0) (3) Data frame handling I0519 11:28:03.319276 6 log.go:172] (0xc0012a2840) Data frame received for 5 I0519 11:28:03.319287 6 log.go:172] (0xc001e7c500) (5) Data frame handling I0519 11:28:03.320785 6 log.go:172] (0xc0012a2840) Data frame received for 1 I0519 11:28:03.320801 6 log.go:172] (0xc000a8fc20) (1) Data frame handling I0519 11:28:03.320809 6 log.go:172] (0xc000a8fc20) (1) Data frame sent I0519 11:28:03.320819 6 log.go:172] (0xc0012a2840) (0xc000a8fc20) Stream removed, broadcasting: 1 I0519 11:28:03.320850 6 log.go:172] (0xc0012a2840) Go away received I0519 11:28:03.320905 6 log.go:172] (0xc0012a2840) (0xc000a8fc20) Stream removed, broadcasting: 1 I0519 11:28:03.320917 6 log.go:172] (0xc0012a2840) (0xc000a8fcc0) Stream removed, broadcasting: 3 I0519 11:28:03.320926 6 log.go:172] (0xc0012a2840) (0xc001e7c500) Stream removed, broadcasting: 5 May 19 11:28:03.320: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:28:03.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-cbp9d" for this suite. 
May 19 11:28:17.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:28:17.397: INFO: namespace: e2e-tests-pod-network-test-cbp9d, resource: bindings, ignored listing per whitelist May 19 11:28:17.449: INFO: namespace e2e-tests-pod-network-test-cbp9d deletion completed in 14.123724414s • [SLOW TEST:42.654 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:28:17.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-d5ff6387-99c3-11ea-abcb-0242ac110018 STEP: Creating secret with name s-test-opt-upd-d5ff6452-99c3-11ea-abcb-0242ac110018 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-d5ff6387-99c3-11ea-abcb-0242ac110018 STEP: Updating secret s-test-opt-upd-d5ff6452-99c3-11ea-abcb-0242ac110018 STEP: Creating secret with name 
s-test-opt-create-d5ff648f-99c3-11ea-abcb-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:28:25.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-dqfdl" for this suite. May 19 11:28:47.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:28:47.914: INFO: namespace: e2e-tests-secrets-dqfdl, resource: bindings, ignored listing per whitelist May 19 11:28:47.976: INFO: namespace e2e-tests-secrets-dqfdl deletion completed in 22.10546271s • [SLOW TEST:30.528 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:28:47.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-e82b5a5d-99c3-11ea-abcb-0242ac110018 STEP: Creating a pod to test consume configMaps 
May 19 11:28:48.671: INFO: Waiting up to 5m0s for pod "pod-configmaps-e83be64d-99c3-11ea-abcb-0242ac110018" in namespace "e2e-tests-configmap-dphkl" to be "success or failure" May 19 11:28:48.702: INFO: Pod "pod-configmaps-e83be64d-99c3-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 30.180679ms May 19 11:28:50.706: INFO: Pod "pod-configmaps-e83be64d-99c3-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034474326s May 19 11:28:52.710: INFO: Pod "pod-configmaps-e83be64d-99c3-11ea-abcb-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.038710704s May 19 11:28:54.714: INFO: Pod "pod-configmaps-e83be64d-99c3-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.042139842s STEP: Saw pod success May 19 11:28:54.714: INFO: Pod "pod-configmaps-e83be64d-99c3-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 11:28:54.716: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-e83be64d-99c3-11ea-abcb-0242ac110018 container configmap-volume-test: STEP: delete the pod May 19 11:28:54.740: INFO: Waiting for pod pod-configmaps-e83be64d-99c3-11ea-abcb-0242ac110018 to disappear May 19 11:28:54.742: INFO: Pod pod-configmaps-e83be64d-99c3-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:28:54.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-dphkl" for this suite. 
May 19 11:29:00.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:29:00.803: INFO: namespace: e2e-tests-configmap-dphkl, resource: bindings, ignored listing per whitelist May 19 11:29:00.837: INFO: namespace e2e-tests-configmap-dphkl deletion completed in 6.092620986s • [SLOW TEST:12.860 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:29:00.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode May 19 11:29:00.985: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-d8ssr" to be "success or failure" May 19 11:29:00.996: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.815027ms May 19 11:29:03.000: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015152666s May 19 11:29:05.004: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01837237s May 19 11:29:07.008: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022803767s STEP: Saw pod success May 19 11:29:07.008: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 19 11:29:07.012: INFO: Trying to get logs from node hunter-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 19 11:29:07.051: INFO: Waiting for pod pod-host-path-test to disappear May 19 11:29:07.084: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:29:07.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-d8ssr" for this suite. 
May 19 11:29:13.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:29:13.123: INFO: namespace: e2e-tests-hostpath-d8ssr, resource: bindings, ignored listing per whitelist May 19 11:29:13.186: INFO: namespace e2e-tests-hostpath-d8ssr deletion completed in 6.098442356s • [SLOW TEST:12.348 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:29:13.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token May 19 11:29:13.819: INFO: created pod pod-service-account-defaultsa May 19 11:29:13.819: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 19 11:29:13.854: INFO: created pod pod-service-account-mountsa May 19 11:29:13.854: INFO: pod pod-service-account-mountsa service account token volume mount: true May 19 11:29:13.874: INFO: created pod pod-service-account-nomountsa May 19 11:29:13.874: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 19 11:29:13.900: INFO: created pod 
pod-service-account-defaultsa-mountspec May 19 11:29:13.900: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 19 11:29:13.924: INFO: created pod pod-service-account-mountsa-mountspec May 19 11:29:13.924: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 19 11:29:14.010: INFO: created pod pod-service-account-nomountsa-mountspec May 19 11:29:14.010: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 19 11:29:14.037: INFO: created pod pod-service-account-defaultsa-nomountspec May 19 11:29:14.037: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 19 11:29:14.088: INFO: created pod pod-service-account-mountsa-nomountspec May 19 11:29:14.088: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 19 11:29:14.104: INFO: created pod pod-service-account-nomountsa-nomountspec May 19 11:29:14.104: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:29:14.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-tk6zg" for this suite. 
May 19 11:29:44.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:29:44.312: INFO: namespace: e2e-tests-svcaccounts-tk6zg, resource: bindings, ignored listing per whitelist
May 19 11:29:44.315: INFO: namespace e2e-tests-svcaccounts-tk6zg deletion completed in 30.101182188s
• [SLOW TEST:31.129 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:29:44.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
May 19 11:29:44.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-h46hd'
May 19 11:29:44.631: INFO: stderr: ""
May 19 11:29:44.631: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 19 11:29:44.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-h46hd'
May 19 11:29:44.797: INFO: stderr: ""
May 19 11:29:44.797: INFO: stdout: "update-demo-nautilus-769b7 update-demo-nautilus-xwb6k "
May 19 11:29:44.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-769b7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h46hd'
May 19 11:29:44.897: INFO: stderr: ""
May 19 11:29:44.897: INFO: stdout: ""
May 19 11:29:44.897: INFO: update-demo-nautilus-769b7 is created but not running
May 19 11:29:49.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-h46hd'
May 19 11:29:50.002: INFO: stderr: ""
May 19 11:29:50.002: INFO: stdout: "update-demo-nautilus-769b7 update-demo-nautilus-xwb6k "
May 19 11:29:50.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-769b7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h46hd'
May 19 11:29:50.103: INFO: stderr: ""
May 19 11:29:50.103: INFO: stdout: "true"
May 19 11:29:50.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-769b7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h46hd'
May 19 11:29:50.219: INFO: stderr: ""
May 19 11:29:50.219: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 19 11:29:50.219: INFO: validating pod update-demo-nautilus-769b7
May 19 11:29:50.223: INFO: got data: { "image": "nautilus.jpg" }
May 19 11:29:50.224: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 19 11:29:50.224: INFO: update-demo-nautilus-769b7 is verified up and running
May 19 11:29:50.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xwb6k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h46hd'
May 19 11:29:50.337: INFO: stderr: ""
May 19 11:29:50.337: INFO: stdout: "true"
May 19 11:29:50.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xwb6k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h46hd'
May 19 11:29:50.435: INFO: stderr: ""
May 19 11:29:50.435: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 19 11:29:50.435: INFO: validating pod update-demo-nautilus-xwb6k
May 19 11:29:50.438: INFO: got data: { "image": "nautilus.jpg" }
May 19 11:29:50.438: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 19 11:29:50.438: INFO: update-demo-nautilus-xwb6k is verified up and running
STEP: scaling down the replication controller
May 19 11:29:50.439: INFO: scanned /root for discovery docs:
May 19 11:29:50.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-h46hd'
May 19 11:29:51.578: INFO: stderr: ""
May 19 11:29:51.578: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 19 11:29:51.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-h46hd'
May 19 11:29:51.684: INFO: stderr: ""
May 19 11:29:51.684: INFO: stdout: "update-demo-nautilus-769b7 update-demo-nautilus-xwb6k "
STEP: Replicas for name=update-demo: expected=1 actual=2
May 19 11:29:56.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-h46hd'
May 19 11:29:56.783: INFO: stderr: ""
May 19 11:29:56.783: INFO: stdout: "update-demo-nautilus-769b7 "
May 19 11:29:56.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-769b7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h46hd'
May 19 11:29:56.872: INFO: stderr: ""
May 19 11:29:56.872: INFO: stdout: "true"
May 19 11:29:56.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-769b7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h46hd'
May 19 11:29:56.966: INFO: stderr: ""
May 19 11:29:56.966: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 19 11:29:56.966: INFO: validating pod update-demo-nautilus-769b7
May 19 11:29:56.970: INFO: got data: { "image": "nautilus.jpg" }
May 19 11:29:56.970: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 19 11:29:56.970: INFO: update-demo-nautilus-769b7 is verified up and running
STEP: scaling up the replication controller
May 19 11:29:56.971: INFO: scanned /root for discovery docs:
May 19 11:29:56.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-h46hd'
May 19 11:29:58.104: INFO: stderr: ""
May 19 11:29:58.104: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 19 11:29:58.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-h46hd'
May 19 11:29:58.213: INFO: stderr: ""
May 19 11:29:58.213: INFO: stdout: "update-demo-nautilus-769b7 update-demo-nautilus-lbt2r "
May 19 11:29:58.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-769b7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h46hd'
May 19 11:29:58.306: INFO: stderr: ""
May 19 11:29:58.306: INFO: stdout: "true"
May 19 11:29:58.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-769b7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h46hd'
May 19 11:29:58.416: INFO: stderr: ""
May 19 11:29:58.416: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 19 11:29:58.416: INFO: validating pod update-demo-nautilus-769b7
May 19 11:29:58.420: INFO: got data: { "image": "nautilus.jpg" }
May 19 11:29:58.420: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 19 11:29:58.420: INFO: update-demo-nautilus-769b7 is verified up and running
May 19 11:29:58.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lbt2r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h46hd'
May 19 11:29:58.529: INFO: stderr: ""
May 19 11:29:58.529: INFO: stdout: ""
May 19 11:29:58.529: INFO: update-demo-nautilus-lbt2r is created but not running
May 19 11:30:03.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-h46hd'
May 19 11:30:03.635: INFO: stderr: ""
May 19 11:30:03.636: INFO: stdout: "update-demo-nautilus-769b7 update-demo-nautilus-lbt2r "
May 19 11:30:03.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-769b7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h46hd'
May 19 11:30:03.731: INFO: stderr: ""
May 19 11:30:03.731: INFO: stdout: "true"
May 19 11:30:03.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-769b7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h46hd'
May 19 11:30:03.830: INFO: stderr: ""
May 19 11:30:03.830: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 19 11:30:03.830: INFO: validating pod update-demo-nautilus-769b7
May 19 11:30:03.833: INFO: got data: { "image": "nautilus.jpg" }
May 19 11:30:03.833: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 19 11:30:03.833: INFO: update-demo-nautilus-769b7 is verified up and running
May 19 11:30:03.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lbt2r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h46hd'
May 19 11:30:03.937: INFO: stderr: ""
May 19 11:30:03.937: INFO: stdout: "true"
May 19 11:30:03.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lbt2r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h46hd'
May 19 11:30:04.042: INFO: stderr: ""
May 19 11:30:04.042: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 19 11:30:04.042: INFO: validating pod update-demo-nautilus-lbt2r
May 19 11:30:04.047: INFO: got data: { "image": "nautilus.jpg" }
May 19 11:30:04.047: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 19 11:30:04.047: INFO: update-demo-nautilus-lbt2r is verified up and running
STEP: using delete to clean up resources
May 19 11:30:04.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-h46hd'
May 19 11:30:04.152: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 19 11:30:04.152: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May 19 11:30:04.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-h46hd'
May 19 11:30:04.259: INFO: stderr: "No resources found.\n"
May 19 11:30:04.259: INFO: stdout: ""
May 19 11:30:04.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-h46hd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 19 11:30:04.567: INFO: stderr: ""
May 19 11:30:04.567: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:30:04.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-h46hd" for this suite.
May 19 11:30:26.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:30:26.601: INFO: namespace: e2e-tests-kubectl-h46hd, resource: bindings, ignored listing per whitelist
May 19 11:30:26.669: INFO: namespace e2e-tests-kubectl-h46hd deletion completed in 22.099125899s
• [SLOW TEST:42.354 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:30:26.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-22fc4d1b-99c4-11ea-abcb-0242ac110018
STEP: Creating a pod to test consume secrets
May 19 11:30:26.798: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-22fd7f0f-99c4-11ea-abcb-0242ac110018" in namespace "e2e-tests-projected-fbt7c" to be "success or failure"
May 19 11:30:26.801: INFO: Pod "pod-projected-secrets-22fd7f0f-99c4-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.534184ms
May 19 11:30:28.806: INFO: Pod "pod-projected-secrets-22fd7f0f-99c4-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007974897s
May 19 11:30:30.810: INFO: Pod "pod-projected-secrets-22fd7f0f-99c4-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01252974s
STEP: Saw pod success
May 19 11:30:30.810: INFO: Pod "pod-projected-secrets-22fd7f0f-99c4-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 11:30:30.814: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-22fd7f0f-99c4-11ea-abcb-0242ac110018 container projected-secret-volume-test:
STEP: delete the pod
May 19 11:30:30.848: INFO: Waiting for pod pod-projected-secrets-22fd7f0f-99c4-11ea-abcb-0242ac110018 to disappear
May 19 11:30:30.918: INFO: Pod pod-projected-secrets-22fd7f0f-99c4-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:30:30.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-fbt7c" for this suite.
May 19 11:30:37.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:30:37.069: INFO: namespace: e2e-tests-projected-fbt7c, resource: bindings, ignored listing per whitelist
May 19 11:30:37.103: INFO: namespace e2e-tests-projected-fbt7c deletion completed in 6.181222003s
• [SLOW TEST:10.434 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:30:37.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 19 11:30:37.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:30:41.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-m57t9" for this suite.
May 19 11:31:23.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:31:23.336: INFO: namespace: e2e-tests-pods-m57t9, resource: bindings, ignored listing per whitelist
May 19 11:31:23.350: INFO: namespace e2e-tests-pods-m57t9 deletion completed in 42.106919509s
• [SLOW TEST:46.246 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:31:23.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
May 19 11:31:23.503: INFO: Waiting up to 5m0s for pod "pod-44c8e48f-99c4-11ea-abcb-0242ac110018" in namespace "e2e-tests-emptydir-67lrh" to be "success or failure"
May 19 11:31:23.506: INFO: Pod "pod-44c8e48f-99c4-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.401118ms
May 19 11:31:25.510: INFO: Pod "pod-44c8e48f-99c4-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007296296s
May 19 11:31:27.513: INFO: Pod "pod-44c8e48f-99c4-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010435495s
STEP: Saw pod success
May 19 11:31:27.513: INFO: Pod "pod-44c8e48f-99c4-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 11:31:27.516: INFO: Trying to get logs from node hunter-worker2 pod pod-44c8e48f-99c4-11ea-abcb-0242ac110018 container test-container:
STEP: delete the pod
May 19 11:31:27.567: INFO: Waiting for pod pod-44c8e48f-99c4-11ea-abcb-0242ac110018 to disappear
May 19 11:31:27.592: INFO: Pod pod-44c8e48f-99c4-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:31:27.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-67lrh" for this suite.
May 19 11:31:33.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:31:33.648: INFO: namespace: e2e-tests-emptydir-67lrh, resource: bindings, ignored listing per whitelist
May 19 11:31:33.683: INFO: namespace e2e-tests-emptydir-67lrh deletion completed in 6.08813417s
• [SLOW TEST:10.333 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:31:33.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
May 19 11:31:33.900: INFO: Waiting up to 5m0s for pod "client-containers-4afcc7e0-99c4-11ea-abcb-0242ac110018" in namespace "e2e-tests-containers-86pc6" to be "success or failure"
May 19 11:31:33.923: INFO: Pod "client-containers-4afcc7e0-99c4-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 22.941993ms
May 19 11:31:36.084: INFO: Pod "client-containers-4afcc7e0-99c4-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183381068s
May 19 11:31:38.167: INFO: Pod "client-containers-4afcc7e0-99c4-11ea-abcb-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.266931627s
May 19 11:31:40.171: INFO: Pod "client-containers-4afcc7e0-99c4-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.270616164s
STEP: Saw pod success
May 19 11:31:40.171: INFO: Pod "client-containers-4afcc7e0-99c4-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 11:31:40.174: INFO: Trying to get logs from node hunter-worker2 pod client-containers-4afcc7e0-99c4-11ea-abcb-0242ac110018 container test-container:
STEP: delete the pod
May 19 11:31:40.193: INFO: Waiting for pod client-containers-4afcc7e0-99c4-11ea-abcb-0242ac110018 to disappear
May 19 11:31:40.198: INFO: Pod client-containers-4afcc7e0-99c4-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:31:40.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-86pc6" for this suite.
May 19 11:31:46.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:31:46.250: INFO: namespace: e2e-tests-containers-86pc6, resource: bindings, ignored listing per whitelist
May 19 11:31:46.289: INFO: namespace e2e-tests-containers-86pc6 deletion completed in 6.088638294s
• [SLOW TEST:12.606 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:31:46.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
May 19 11:31:46.446: INFO: Waiting up to 5m0s for pod "pod-526faa48-99c4-11ea-abcb-0242ac110018" in namespace "e2e-tests-emptydir-8vmvv" to be "success or failure"
May 19 11:31:46.478: INFO: Pod "pod-526faa48-99c4-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 32.43474ms
May 19 11:31:48.485: INFO: Pod "pod-526faa48-99c4-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03874911s
May 19 11:31:50.489: INFO: Pod "pod-526faa48-99c4-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043555992s
STEP: Saw pod success
May 19 11:31:50.489: INFO: Pod "pod-526faa48-99c4-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 11:31:50.492: INFO: Trying to get logs from node hunter-worker pod pod-526faa48-99c4-11ea-abcb-0242ac110018 container test-container:
STEP: delete the pod
May 19 11:31:50.508: INFO: Waiting for pod pod-526faa48-99c4-11ea-abcb-0242ac110018 to disappear
May 19 11:31:50.513: INFO: Pod pod-526faa48-99c4-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:31:50.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-8vmvv" for this suite.
May 19 11:31:56.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:31:56.701: INFO: namespace: e2e-tests-emptydir-8vmvv, resource: bindings, ignored listing per whitelist
May 19 11:31:56.760: INFO: namespace e2e-tests-emptydir-8vmvv deletion completed in 6.189784867s
• [SLOW TEST:10.470 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:31:56.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
May 19 11:31:57.173: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:32:07.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-zdvfp" for this suite.
May 19 11:32:29.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:32:29.229: INFO: namespace: e2e-tests-init-container-zdvfp, resource: bindings, ignored listing per whitelist
May 19 11:32:29.286: INFO: namespace e2e-tests-init-container-zdvfp deletion completed in 22.11788178s
• [SLOW TEST:32.526 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:32:29.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
May 19 11:32:29.404: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 19 11:32:29.413: INFO: Waiting for terminating namespaces to be deleted...
May 19 11:32:29.415: INFO: Logging pods the kubelet thinks is on node hunter-worker before test
May 19 11:32:29.419: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded)
May 19 11:32:29.419: INFO: Container kube-proxy ready: true, restart count 0
May 19 11:32:29.419: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 19 11:32:29.419: INFO: Container kindnet-cni ready: true, restart count 0
May 19 11:32:29.419: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
May 19 11:32:29.419: INFO: Container coredns ready: true, restart count 0
May 19 11:32:29.420: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test
May 19 11:32:29.423: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 19 11:32:29.423: INFO: Container kindnet-cni ready: true, restart count 0
May 19 11:32:29.423: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
May 19 11:32:29.423: INFO: Container coredns ready: true, restart count 0
May 19 11:32:29.423: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 19 11:32:29.423: INFO: Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-worker
STEP: verifying the node has the label node hunter-worker2
May 19 11:32:29.498: INFO: Pod coredns-54ff9cd656-4h7lb requesting resource cpu=100m on Node hunter-worker
May 19 11:32:29.498: INFO: Pod coredns-54ff9cd656-8vrkk requesting resource cpu=100m on Node hunter-worker2
May 19 11:32:29.498: INFO: Pod kindnet-54h7m requesting resource cpu=100m on Node hunter-worker
May 19 11:32:29.498: INFO: Pod kindnet-mtqrs requesting resource cpu=100m on Node hunter-worker2
May 19 11:32:29.498: INFO: Pod kube-proxy-s52ll requesting resource cpu=0m on Node hunter-worker2
May 19 11:32:29.498: INFO: Pod kube-proxy-szbng requesting resource cpu=0m on Node hunter-worker
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event:
Type = [Normal], Name = [filler-pod-6c215833-99c4-11ea-abcb-0242ac110018.16106aaa9f470bbe], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-2wtf7/filler-pod-6c215833-99c4-11ea-abcb-0242ac110018 to hunter-worker]
STEP: Considering event:
Type = [Normal], Name = [filler-pod-6c215833-99c4-11ea-abcb-0242ac110018.16106aaaee6a1200], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event:
Type = [Normal], Name = [filler-pod-6c215833-99c4-11ea-abcb-0242ac110018.16106aab3b67c12e], Reason = [Created], Message = [Created container]
STEP: Considering event:
Type = [Normal], Name = [filler-pod-6c215833-99c4-11ea-abcb-0242ac110018.16106aab5634d479], Reason = [Started], Message = [Started container]
STEP: Considering event:
Type = [Normal], Name = [filler-pod-6c2240b0-99c4-11ea-abcb-0242ac110018.16106aaa9fa2644b], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-2wtf7/filler-pod-6c2240b0-99c4-11ea-abcb-0242ac110018 to hunter-worker2]
STEP: Considering event:
Type = [Normal], Name = [filler-pod-6c2240b0-99c4-11ea-abcb-0242ac110018.16106aab2e2bae91], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event:
Type = [Normal], Name = [filler-pod-6c2240b0-99c4-11ea-abcb-0242ac110018.16106aab79f69d9c], Reason = [Created], Message = [Created container]
STEP: Considering event:
Type = [Normal], Name = [filler-pod-6c2240b0-99c4-11ea-abcb-0242ac110018.16106aab8b7ffc6b], Reason = [Started], Message = [Started container]
STEP: Considering event:
Type = [Warning], Name = [additional-pod.16106aac066fbfb4], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node hunter-worker2
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node hunter-worker
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:32:36.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-2wtf7" for this suite.
May 19 11:32:42.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:32:42.747: INFO: namespace: e2e-tests-sched-pred-2wtf7, resource: bindings, ignored listing per whitelist
May 19 11:32:42.800: INFO: namespace e2e-tests-sched-pred-2wtf7 deletion completed in 6.08852533s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:13.514 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should delete old replica sets [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:32:42.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 19 11:32:43.162: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 19 11:32:48.166: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 19 11:32:48.166: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 19 11:32:48.354: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-lbj76,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-lbj76/deployments/test-cleanup-deployment,UID:7742bc0d-99c4-11ea-99e8-0242ac110002,ResourceVersion:11392485,Generation:1,CreationTimestamp:2020-05-19 11:32:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} May 19 11:32:48.387: INFO: New ReplicaSet of Deployment 
"test-cleanup-deployment" is nil. May 19 11:32:48.387: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 19 11:32:48.387: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-lbj76,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-lbj76/replicasets/test-cleanup-controller,UID:743eaa05-99c4-11ea-99e8-0242ac110002,ResourceVersion:11392486,Generation:1,CreationTimestamp:2020-05-19 11:32:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 7742bc0d-99c4-11ea-99e8-0242ac110002 0xc000e3bbd7 0xc000e3bbd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 19 11:32:48.407: INFO: Pod "test-cleanup-controller-sbrq4" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-sbrq4,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-lbj76,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lbj76/pods/test-cleanup-controller-sbrq4,UID:7446ea10-99c4-11ea-99e8-0242ac110002,ResourceVersion:11392480,Generation:0,CreationTimestamp:2020-05-19 11:32:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 743eaa05-99c4-11ea-99e8-0242ac110002 0xc000b66997 0xc000b66998}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-86nkf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-86nkf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-86nkf true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b66a10} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b66a30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:32:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:32:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:32:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:32:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.173,StartTime:2020-05-19 11:32:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-19 11:32:46 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9ddeda2b0e15f9ac0c2492374747c857f9a3a99a5bbc60f1ecc2f7e52a8deef2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:32:48.407: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-lbj76" for this suite. May 19 11:32:57.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:32:57.065: INFO: namespace: e2e-tests-deployment-lbj76, resource: bindings, ignored listing per whitelist May 19 11:32:57.162: INFO: namespace e2e-tests-deployment-lbj76 deletion completed in 8.554487954s • [SLOW TEST:14.362 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:32:57.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium May 19 11:32:57.328: INFO: Waiting up to 5m0s for pod "pod-7cb71c36-99c4-11ea-abcb-0242ac110018" in namespace "e2e-tests-emptydir-7lfv2" to be "success or failure" May 19 11:32:57.362: INFO: Pod "pod-7cb71c36-99c4-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 34.186923ms May 19 11:32:59.366: INFO: Pod "pod-7cb71c36-99c4-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038352895s May 19 11:33:01.371: INFO: Pod "pod-7cb71c36-99c4-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042645501s STEP: Saw pod success May 19 11:33:01.371: INFO: Pod "pod-7cb71c36-99c4-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 11:33:01.374: INFO: Trying to get logs from node hunter-worker2 pod pod-7cb71c36-99c4-11ea-abcb-0242ac110018 container test-container: STEP: delete the pod May 19 11:33:01.549: INFO: Waiting for pod pod-7cb71c36-99c4-11ea-abcb-0242ac110018 to disappear May 19 11:33:01.580: INFO: Pod pod-7cb71c36-99c4-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:33:01.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-7lfv2" for this suite. 
May 19 11:33:07.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:33:07.702: INFO: namespace: e2e-tests-emptydir-7lfv2, resource: bindings, ignored listing per whitelist May 19 11:33:07.702: INFO: namespace e2e-tests-emptydir-7lfv2 deletion completed in 6.117470149s • [SLOW TEST:10.539 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:33:07.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:33:14.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-trzmj" for this suite. 
May 19 11:33:36.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:33:36.909: INFO: namespace: e2e-tests-replication-controller-trzmj, resource: bindings, ignored listing per whitelist May 19 11:33:36.965: INFO: namespace e2e-tests-replication-controller-trzmj deletion completed in 22.085757808s • [SLOW TEST:29.263 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:33:36.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 19 11:33:37.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 19 11:33:37.192: INFO: stderr: "" May 19 11:33:37.192: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: 
version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:33:37.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-2qddp" for this suite. May 19 11:33:43.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:33:43.252: INFO: namespace: e2e-tests-kubectl-2qddp, resource: bindings, ignored listing per whitelist May 19 11:33:43.306: INFO: namespace e2e-tests-kubectl-2qddp deletion completed in 6.101803854s • [SLOW TEST:6.341 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:33:43.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:33:47.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-b5hz5" for this suite. May 19 11:33:53.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:33:53.682: INFO: namespace: e2e-tests-emptydir-wrapper-b5hz5, resource: bindings, ignored listing per whitelist May 19 11:33:53.711: INFO: namespace e2e-tests-emptydir-wrapper-b5hz5 deletion completed in 6.119766321s • [SLOW TEST:10.405 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:33:53.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs May 19 11:33:53.830: INFO: Waiting up 
to 5m0s for pod "pod-9e5de907-99c4-11ea-abcb-0242ac110018" in namespace "e2e-tests-emptydir-r9mfz" to be "success or failure" May 19 11:33:53.837: INFO: Pod "pod-9e5de907-99c4-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 7.169062ms May 19 11:33:55.842: INFO: Pod "pod-9e5de907-99c4-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011963467s May 19 11:33:57.846: INFO: Pod "pod-9e5de907-99c4-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016517166s STEP: Saw pod success May 19 11:33:57.847: INFO: Pod "pod-9e5de907-99c4-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 11:33:57.849: INFO: Trying to get logs from node hunter-worker2 pod pod-9e5de907-99c4-11ea-abcb-0242ac110018 container test-container: STEP: delete the pod May 19 11:33:57.890: INFO: Waiting for pod pod-9e5de907-99c4-11ea-abcb-0242ac110018 to disappear May 19 11:33:57.911: INFO: Pod pod-9e5de907-99c4-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:33:57.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-r9mfz" for this suite. 
May 19 11:34:03.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:34:04.012: INFO: namespace: e2e-tests-emptydir-r9mfz, resource: bindings, ignored listing per whitelist May 19 11:34:04.038: INFO: namespace e2e-tests-emptydir-r9mfz deletion completed in 6.123803996s • [SLOW TEST:10.327 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:34:04.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:34:10.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-ffbnx" for this suite. May 19 11:34:16.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:34:16.384: INFO: namespace: e2e-tests-namespaces-ffbnx, resource: bindings, ignored listing per whitelist May 19 11:34:16.503: INFO: namespace e2e-tests-namespaces-ffbnx deletion completed in 6.153885931s STEP: Destroying namespace "e2e-tests-nsdeletetest-wk7mj" for this suite. May 19 11:34:16.506: INFO: Namespace e2e-tests-nsdeletetest-wk7mj was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-cpz4b" for this suite. May 19 11:34:22.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:34:22.542: INFO: namespace: e2e-tests-nsdeletetest-cpz4b, resource: bindings, ignored listing per whitelist May 19 11:34:22.606: INFO: namespace e2e-tests-nsdeletetest-cpz4b deletion completed in 6.100220943s • [SLOW TEST:18.568 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:34:22.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-cxr2w STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cxr2w to expose endpoints map[] May 19 11:34:22.762: INFO: Get endpoints failed (19.112024ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 19 11:34:23.766: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cxr2w exposes endpoints map[] (1.023175634s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-cxr2w STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cxr2w to expose endpoints map[pod1:[100]] May 19 11:34:27.811: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cxr2w exposes endpoints map[pod1:[100]] (4.036984654s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-cxr2w STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cxr2w to expose endpoints map[pod1:[100] pod2:[101]] May 19 11:34:30.916: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cxr2w exposes endpoints map[pod1:[100] pod2:[101]] (3.100643415s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-cxr2w STEP: waiting up to 3m0s for service multi-endpoint-test 
in namespace e2e-tests-services-cxr2w to expose endpoints map[pod2:[101]] May 19 11:34:31.940: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cxr2w exposes endpoints map[pod2:[101]] (1.018185645s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-cxr2w STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cxr2w to expose endpoints map[] May 19 11:34:32.964: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cxr2w exposes endpoints map[] (1.020015176s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:34:33.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-cxr2w" for this suite. May 19 11:34:55.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:34:55.228: INFO: namespace: e2e-tests-services-cxr2w, resource: bindings, ignored listing per whitelist May 19 11:34:55.263: INFO: namespace e2e-tests-services-cxr2w deletion completed in 22.106939059s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:32.656 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage 
collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:34:55.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0519 11:35:25.917339       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 19 11:35:25.917: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:35:25.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-w4cf9" for this suite.
May 19 11:35:31.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:35:31.976: INFO: namespace: e2e-tests-gc-w4cf9, resource: bindings, ignored listing per whitelist
May 19 11:35:32.017: INFO: namespace e2e-tests-gc-w4cf9 deletion completed in 6.096700985s
• [SLOW TEST:36.753 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:35:32.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 19 11:35:32.411: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d9268965-99c4-11ea-abcb-0242ac110018" in namespace "e2e-tests-downward-api-7tbh6" to be "success or failure" May 19 11:35:32.440: INFO: Pod "downwardapi-volume-d9268965-99c4-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 29.013994ms May 19 11:35:34.445: INFO: Pod "downwardapi-volume-d9268965-99c4-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033671226s May 19 11:35:36.449: INFO: Pod "downwardapi-volume-d9268965-99c4-11ea-abcb-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.037960352s May 19 11:35:38.454: INFO: Pod "downwardapi-volume-d9268965-99c4-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.042428372s STEP: Saw pod success May 19 11:35:38.454: INFO: Pod "downwardapi-volume-d9268965-99c4-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 11:35:38.457: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-d9268965-99c4-11ea-abcb-0242ac110018 container client-container: STEP: delete the pod May 19 11:35:38.475: INFO: Waiting for pod downwardapi-volume-d9268965-99c4-11ea-abcb-0242ac110018 to disappear May 19 11:35:38.512: INFO: Pod downwardapi-volume-d9268965-99c4-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:35:38.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-7tbh6" for this suite. 
May 19 11:35:44.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:35:44.616: INFO: namespace: e2e-tests-downward-api-7tbh6, resource: bindings, ignored listing per whitelist May 19 11:35:44.635: INFO: namespace e2e-tests-downward-api-7tbh6 deletion completed in 6.119615318s • [SLOW TEST:12.618 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:35:44.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 19 11:35:44.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run 
e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-4vvdw' May 19 11:35:44.910: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 19 11:35:44.910: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 May 19 11:35:48.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-4vvdw' May 19 11:35:49.104: INFO: stderr: "" May 19 11:35:49.104: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:35:49.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-4vvdw" for this suite. 
May 19 11:35:55.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:35:55.207: INFO: namespace: e2e-tests-kubectl-4vvdw, resource: bindings, ignored listing per whitelist May 19 11:35:55.245: INFO: namespace e2e-tests-kubectl-4vvdw deletion completed in 6.136867396s • [SLOW TEST:10.609 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:35:55.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace May 19 11:35:59.457: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be 
removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:36:23.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-27z72" for this suite. May 19 11:36:29.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:36:29.551: INFO: namespace: e2e-tests-namespaces-27z72, resource: bindings, ignored listing per whitelist May 19 11:36:29.613: INFO: namespace e2e-tests-namespaces-27z72 deletion completed in 6.081816499s STEP: Destroying namespace "e2e-tests-nsdeletetest-hh69j" for this suite. May 19 11:36:29.615: INFO: Namespace e2e-tests-nsdeletetest-hh69j was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-gz2cj" for this suite. May 19 11:36:35.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:36:35.643: INFO: namespace: e2e-tests-nsdeletetest-gz2cj, resource: bindings, ignored listing per whitelist May 19 11:36:35.712: INFO: namespace e2e-tests-nsdeletetest-gz2cj deletion completed in 6.097003496s • [SLOW TEST:40.467 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:36:35.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 19 11:36:35.832: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fef2ebc0-99c4-11ea-abcb-0242ac110018" in namespace "e2e-tests-downward-api-xf65k" to be "success or failure" May 19 11:36:35.849: INFO: Pod "downwardapi-volume-fef2ebc0-99c4-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 17.100356ms May 19 11:36:37.852: INFO: Pod "downwardapi-volume-fef2ebc0-99c4-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020705507s May 19 11:36:39.856: INFO: Pod "downwardapi-volume-fef2ebc0-99c4-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024512049s STEP: Saw pod success May 19 11:36:39.856: INFO: Pod "downwardapi-volume-fef2ebc0-99c4-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 11:36:39.860: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-fef2ebc0-99c4-11ea-abcb-0242ac110018 container client-container: STEP: delete the pod May 19 11:36:39.896: INFO: Waiting for pod downwardapi-volume-fef2ebc0-99c4-11ea-abcb-0242ac110018 to disappear May 19 11:36:39.900: INFO: Pod downwardapi-volume-fef2ebc0-99c4-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:36:39.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-xf65k" for this suite. May 19 11:36:45.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:36:45.952: INFO: namespace: e2e-tests-downward-api-xf65k, resource: bindings, ignored listing per whitelist May 19 11:36:45.995: INFO: namespace e2e-tests-downward-api-xf65k deletion completed in 6.091064899s • [SLOW TEST:10.282 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:36:45.995: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-0519127d-99c5-11ea-abcb-0242ac110018 STEP: Creating secret with name s-test-opt-upd-051912ec-99c5-11ea-abcb-0242ac110018 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-0519127d-99c5-11ea-abcb-0242ac110018 STEP: Updating secret s-test-opt-upd-051912ec-99c5-11ea-abcb-0242ac110018 STEP: Creating secret with name s-test-opt-create-05191316-99c5-11ea-abcb-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:36:54.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-d2vvm" for this suite. 
May 19 11:37:18.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:37:18.282: INFO: namespace: e2e-tests-projected-d2vvm, resource: bindings, ignored listing per whitelist May 19 11:37:18.354: INFO: namespace e2e-tests-projected-d2vvm deletion completed in 24.097912174s • [SLOW TEST:32.359 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:37:18.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-m75fz [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-m75fz STEP: 
Creating statefulset with conflicting port in namespace e2e-tests-statefulset-m75fz STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-m75fz STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-m75fz May 19 11:37:22.699: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-m75fz, name: ss-0, uid: 1a5a8ca0-99c5-11ea-99e8-0242ac110002, status phase: Pending. Waiting for statefulset controller to delete. May 19 11:37:31.246: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-m75fz, name: ss-0, uid: 1a5a8ca0-99c5-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. May 19 11:37:31.470: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-m75fz, name: ss-0, uid: 1a5a8ca0-99c5-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. May 19 11:37:31.619: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-m75fz STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-m75fz STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-m75fz and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 19 11:37:42.414: INFO: Deleting all statefulset in ns e2e-tests-statefulset-m75fz May 19 11:37:42.418: INFO: Scaling statefulset ss to 0 May 19 11:37:52.440: INFO: Waiting for statefulset status.replicas updated to 0 May 19 11:37:52.443: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:37:52.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-m75fz" for this suite. 
May 19 11:37:58.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:37:58.570: INFO: namespace: e2e-tests-statefulset-m75fz, resource: bindings, ignored listing per whitelist
May 19 11:37:58.574: INFO: namespace e2e-tests-statefulset-m75fz deletion completed in 6.083640214s
• [SLOW TEST:40.220 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:37:58.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
May 19 11:37:58.662: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
May 19 11:37:58.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6zhjn'
May 19 11:38:01.527: INFO: stderr: ""
May 19 11:38:01.527: INFO: stdout: "service/redis-slave created\n"
May 19 11:38:01.527: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
May 19 11:38:01.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6zhjn'
May 19 11:38:01.826: INFO: stderr: ""
May 19 11:38:01.826: INFO: stdout: "service/redis-master created\n"
May 19 11:38:01.826: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
May 19 11:38:01.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6zhjn'
May 19 11:38:02.121: INFO: stderr: ""
May 19 11:38:02.121: INFO: stdout: "service/frontend created\n"
May 19 11:38:02.121: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80
May 19 11:38:02.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6zhjn'
May 19 11:38:02.368: INFO: stderr: ""
May 19 11:38:02.368: INFO: stdout: "deployment.extensions/frontend created\n"
May 19 11:38:02.368: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 19 11:38:02.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6zhjn'
May 19 11:38:02.710: INFO: stderr: ""
May 19 11:38:02.710: INFO: stdout: "deployment.extensions/redis-master created\n"
May 19 11:38:02.710: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
May 19 11:38:02.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6zhjn'
May 19 11:38:02.978: INFO: stderr: ""
May 19 11:38:02.978: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
May 19 11:38:02.978: INFO: Waiting for all frontend pods to be Running.
May 19 11:38:13.030: INFO: Waiting for frontend to serve content.
May 19 11:38:13.058: INFO: Trying to add a new entry to the guestbook.
May 19 11:38:13.075: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources May 19 11:38:13.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-6zhjn' May 19 11:38:13.286: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 11:38:13.286: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources May 19 11:38:13.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-6zhjn' May 19 11:38:13.428: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 11:38:13.428: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 19 11:38:13.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-6zhjn' May 19 11:38:13.585: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 11:38:13.585: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 19 11:38:13.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-6zhjn' May 19 11:38:13.734: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 19 11:38:13.734: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources May 19 11:38:13.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-6zhjn' May 19 11:38:13.848: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 11:38:13.848: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 19 11:38:13.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-6zhjn' May 19 11:38:14.258: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 11:38:14.258: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:38:14.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-6zhjn" for this suite. 
May 19 11:38:52.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:38:52.521: INFO: namespace: e2e-tests-kubectl-6zhjn, resource: bindings, ignored listing per whitelist May 19 11:38:52.550: INFO: namespace e2e-tests-kubectl-6zhjn deletion completed in 38.262199879s • [SLOW TEST:53.976 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:38:52.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 19 11:38:52.624: INFO: Creating deployment "nginx-deployment" May 19 11:38:52.676: INFO: Waiting for observed generation 1 May 19 11:38:54.684: INFO: Waiting for all required pods to come up May 19 11:38:54.690: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running May 19 
11:39:04.997: INFO: Waiting for deployment "nginx-deployment" to complete May 19 11:39:05.001: INFO: Updating deployment "nginx-deployment" with a non-existent image May 19 11:39:05.006: INFO: Updating deployment nginx-deployment May 19 11:39:05.006: INFO: Waiting for observed generation 2 May 19 11:39:07.126: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 19 11:39:07.129: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 19 11:39:07.131: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 19 11:39:07.138: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 19 11:39:07.138: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 19 11:39:07.140: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 19 11:39:07.145: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas May 19 11:39:07.145: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 May 19 11:39:07.150: INFO: Updating deployment nginx-deployment May 19 11:39:07.150: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas May 19 11:39:07.318: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 19 11:39:07.485: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 19 11:39:07.689: INFO: Deployment "nginx-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8fsr6/deployments/nginx-deployment,UID:507df244-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394195,Generation:3,CreationTimestamp:2020-05-19 11:38:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-05-19 11:39:05 +0000 UTC 2020-05-19 11:38:52 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-05-19 11:39:07 +0000 UTC 2020-05-19 11:39:07 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} May 19 11:39:07.771: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8fsr6/replicasets/nginx-deployment-5c98f8fb5,UID:57df3b96-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394231,Generation:3,CreationTimestamp:2020-05-19 11:39:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 507df244-99c5-11ea-99e8-0242ac110002 0xc001b2aa67 0xc001b2aa68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 19 11:39:07.771: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 19 11:39:07.772: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8fsr6/replicasets/nginx-deployment-85ddf47c5d,UID:50880b19-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394227,Generation:3,CreationTimestamp:2020-05-19 11:38:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 507df244-99c5-11ea-99e8-0242ac110002 0xc001b2ab87 0xc001b2ab88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 19 11:39:07.826: INFO: Pod "nginx-deployment-5c98f8fb5-9fxqm" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9fxqm,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-5c98f8fb5-9fxqm,UID:5817eaa6-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394163,Generation:0,CreationTimestamp:2020-05-19 11:39:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 57df3b96-99c5-11ea-99e8-0242ac110002 0xc001f18467 0xc001f18468}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f184f0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001f18510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:05 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-19 11:39:05 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.827: INFO: Pod "nginx-deployment-5c98f8fb5-9vq4n" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9vq4n,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-5c98f8fb5-9vq4n,UID:595d4dfc-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394222,Generation:0,CreationTimestamp:2020-05-19 11:39:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 57df3b96-99c5-11ea-99e8-0242ac110002 0xc001f18687 0xc001f18688}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f18700} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f18720}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.827: INFO: Pod "nginx-deployment-5c98f8fb5-bw88g" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bw88g,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-5c98f8fb5-bw88g,UID:5963f80a-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394228,Generation:0,CreationTimestamp:2020-05-19 11:39:07 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 57df3b96-99c5-11ea-99e8-0242ac110002 0xc001f18797 0xc001f18798}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f18810} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f18830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:07 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.827: INFO: Pod "nginx-deployment-5c98f8fb5-dj8ph" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dj8ph,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-5c98f8fb5-dj8ph,UID:5812cc89-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394164,Generation:0,CreationTimestamp:2020-05-19 11:39:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 57df3b96-99c5-11ea-99e8-0242ac110002 0xc001f188a7 0xc001f188a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f18920} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f18940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:05 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-19 11:39:05 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.827: INFO: Pod "nginx-deployment-5c98f8fb5-fcr8b" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-fcr8b,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-5c98f8fb5-fcr8b,UID:593ff6ff-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394233,Generation:0,CreationTimestamp:2020-05-19 11:39:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 57df3b96-99c5-11ea-99e8-0242ac110002 0xc001f18a07 0xc001f18a08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f18a80} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001f18aa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:07 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-19 11:39:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.827: INFO: Pod "nginx-deployment-5c98f8fb5-hzrww" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hzrww,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-5c98f8fb5-hzrww,UID:57e83f29-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394140,Generation:0,CreationTimestamp:2020-05-19 11:39:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 57df3b96-99c5-11ea-99e8-0242ac110002 0xc001f18b67 0xc001f18b68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f18be0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f18c00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:05 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-19 11:39:05 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.827: INFO: Pod "nginx-deployment-5c98f8fb5-l2c7d" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-l2c7d,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-5c98f8fb5-l2c7d,UID:595d5d22-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394221,Generation:0,CreationTimestamp:2020-05-19 11:39:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 57df3b96-99c5-11ea-99e8-0242ac110002 0xc001f18cc7 0xc001f18cc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f18d40} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f18d60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.828: INFO: Pod "nginx-deployment-5c98f8fb5-mpp9t" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mpp9t,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-5c98f8fb5-mpp9t,UID:57e8512b-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394149,Generation:0,CreationTimestamp:2020-05-19 11:39:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 57df3b96-99c5-11ea-99e8-0242ac110002 0xc001f18dd7 0xc001f18dd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f18e50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f18e70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:05 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-19 11:39:05 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.828: INFO: Pod "nginx-deployment-5c98f8fb5-n22jj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-n22jj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-5c98f8fb5-n22jj,UID:595d30e1-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394215,Generation:0,CreationTimestamp:2020-05-19 11:39:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 57df3b96-99c5-11ea-99e8-0242ac110002 0xc001f18f37 0xc001f18f38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f18fb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f18fd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.828: INFO: Pod "nginx-deployment-5c98f8fb5-ncnlt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ncnlt,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-5c98f8fb5-ncnlt,UID:57e47ea2-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394134,Generation:0,CreationTimestamp:2020-05-19 11:39:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 57df3b96-99c5-11ea-99e8-0242ac110002 0xc001f190a7 0xc001f190a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f19120} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f19140}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:05 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-19 11:39:05 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.828: INFO: Pod "nginx-deployment-5c98f8fb5-ssxlr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ssxlr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-5c98f8fb5-ssxlr,UID:595d3ce0-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394224,Generation:0,CreationTimestamp:2020-05-19 11:39:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 57df3b96-99c5-11ea-99e8-0242ac110002 0xc001f19207 0xc001f19208}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f19280} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f192a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.828: INFO: Pod "nginx-deployment-5c98f8fb5-vqwxq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vqwxq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-5c98f8fb5-vqwxq,UID:5959b331-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394211,Generation:0,CreationTimestamp:2020-05-19 11:39:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 57df3b96-99c5-11ea-99e8-0242ac110002 0xc001f19317 0xc001f19318}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f19390} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f193b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.828: INFO: Pod "nginx-deployment-5c98f8fb5-wxt55" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wxt55,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-5c98f8fb5-wxt55,UID:5959b071-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394208,Generation:0,CreationTimestamp:2020-05-19 11:39:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 57df3b96-99c5-11ea-99e8-0242ac110002 0xc001f19427 0xc001f19428}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f194a0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001f194c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.829: INFO: Pod "nginx-deployment-85ddf47c5d-29sbw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-29sbw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-85ddf47c5d-29sbw,UID:595d6571-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394218,Generation:0,CreationTimestamp:2020-05-19 11:39:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50880b19-99c5-11ea-99e8-0242ac110002 0xc001f19537 0xc001f19538}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f195b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f195e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.829: INFO: Pod "nginx-deployment-85ddf47c5d-2htnc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2htnc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-85ddf47c5d-2htnc,UID:595d5ec3-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394223,Generation:0,CreationTimestamp:2020-05-19 11:39:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50880b19-99c5-11ea-99e8-0242ac110002 0xc001f19657 0xc001f19658}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f196d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f196f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.829: INFO: Pod "nginx-deployment-85ddf47c5d-2nzd4" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2nzd4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-85ddf47c5d-2nzd4,UID:5959a910-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394210,Generation:0,CreationTimestamp:2020-05-19 11:39:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50880b19-99c5-11ea-99e8-0242ac110002 0xc001f19767 0xc001f19768}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc001f197e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f19800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.829: INFO: Pod "nginx-deployment-85ddf47c5d-777w2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-777w2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-85ddf47c5d-777w2,UID:508ca53e-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394051,Generation:0,CreationTimestamp:2020-05-19 11:38:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50880b19-99c5-11ea-99e8-0242ac110002 0xc001f19877 0xc001f19878}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f19ba0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f19bc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:38:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:38:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:38:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:38:52 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.185,StartTime:2020-05-19 11:38:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-19 11:38:56 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b06b1bc5280c5daf974318df49eb921726a4d2e680dedc368557806d26a06dac}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.829: INFO: Pod "nginx-deployment-85ddf47c5d-8949j" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8949j,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-85ddf47c5d-8949j,UID:595d5ba5-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394220,Generation:0,CreationTimestamp:2020-05-19 11:39:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50880b19-99c5-11ea-99e8-0242ac110002 0xc001f19c87 0xc001f19c88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc001f19d40} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f19d60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.830: INFO: Pod "nginx-deployment-85ddf47c5d-972k6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-972k6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-85ddf47c5d-972k6,UID:5959b11e-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394212,Generation:0,CreationTimestamp:2020-05-19 11:39:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50880b19-99c5-11ea-99e8-0242ac110002 0xc001f19dd7 0xc001f19dd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f19e80} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f19ea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.830: INFO: Pod "nginx-deployment-85ddf47c5d-9m9g9" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9m9g9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-85ddf47c5d-9m9g9,UID:508fa310-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394071,Generation:0,CreationTimestamp:2020-05-19 11:38:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50880b19-99c5-11ea-99e8-0242ac110002 0xc001f19f17 0xc001f19f18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00236c030} {node.kubernetes.io/unreachable Exists NoExecute 0xc00236c050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:38:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:00 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:38:52 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.205,StartTime:2020-05-19 11:38:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-19 11:38:59 +0000 UTC,} nil} {nil 
nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://64323b7fc778703db1929e2d5d04440e711777196215739a7b37c0c60a5f2b00}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.830: INFO: Pod "nginx-deployment-85ddf47c5d-bhj6n" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bhj6n,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-85ddf47c5d-bhj6n,UID:50921f0a-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394092,Generation:0,CreationTimestamp:2020-05-19 11:38:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50880b19-99c5-11ea-99e8-0242ac110002 0xc00236c117 0xc00236c118}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00236c190} {node.kubernetes.io/unreachable Exists NoExecute 0xc00236c1b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:38:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:38:52 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.206,StartTime:2020-05-19 11:38:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-19 11:39:00 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c346f19b72fce6075bab165d55a1a31f94a3cf77829b4f3a3f72076628fee284}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.830: INFO: Pod "nginx-deployment-85ddf47c5d-kwgjv" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kwgjv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-85ddf47c5d-kwgjv,UID:59402359-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394194,Generation:0,CreationTimestamp:2020-05-19 11:39:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50880b19-99c5-11ea-99e8-0242ac110002 0xc00236c2f7 0xc00236c2f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc00236c370} {node.kubernetes.io/unreachable Exists NoExecute 0xc00236c390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.830: INFO: Pod "nginx-deployment-85ddf47c5d-njhng" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-njhng,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-85ddf47c5d-njhng,UID:595d3e74-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394217,Generation:0,CreationTimestamp:2020-05-19 11:39:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50880b19-99c5-11ea-99e8-0242ac110002 0xc00236c407 0xc00236c408}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00236c480} {node.kubernetes.io/unreachable Exists NoExecute 0xc00236c4a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.830: INFO: Pod "nginx-deployment-85ddf47c5d-pk7sn" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pk7sn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-85ddf47c5d-pk7sn,UID:508f97ae-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394082,Generation:0,CreationTimestamp:2020-05-19 11:38:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50880b19-99c5-11ea-99e8-0242ac110002 0xc00236c517 0xc00236c518}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00236c590} {node.kubernetes.io/unreachable Exists NoExecute 0xc00236c5b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:38:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:01 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:38:52 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.187,StartTime:2020-05-19 11:38:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-19 11:39:00 +0000 UTC,} nil} {nil 
nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://4828c7064b44fb85e6bc5202d6b342620c1047f7b9657e13ba84345fc68c08ca}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.830: INFO: Pod "nginx-deployment-85ddf47c5d-qcbmt" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qcbmt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-85ddf47c5d-qcbmt,UID:50922e1b-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394089,Generation:0,CreationTimestamp:2020-05-19 11:38:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50880b19-99c5-11ea-99e8-0242ac110002 0xc00236c6f7 0xc00236c6f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00236c770} {node.kubernetes.io/unreachable Exists NoExecute 0xc00236c790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:38:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:38:52 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.207,StartTime:2020-05-19 11:38:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-19 11:39:01 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://73f92c479377169ec7f694d9e478496f1e4df95a762b8ca2015ef5ab5d24426d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.831: INFO: Pod "nginx-deployment-85ddf47c5d-s5txf" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-s5txf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-85ddf47c5d-s5txf,UID:508e6c94-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394105,Generation:0,CreationTimestamp:2020-05-19 11:38:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50880b19-99c5-11ea-99e8-0242ac110002 0xc00236c857 0xc00236c858}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc00236cab0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00236cad0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:38:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:03 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:38:52 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.208,StartTime:2020-05-19 11:38:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-19 11:39:02 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://018dc172870bcfeac9762ac65a539e5d3494d13896f393933b54f143d0f58929}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.831: INFO: Pod "nginx-deployment-85ddf47c5d-sg6zr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sg6zr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-85ddf47c5d-sg6zr,UID:5959af9e-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394209,Generation:0,CreationTimestamp:2020-05-19 11:39:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50880b19-99c5-11ea-99e8-0242ac110002 0xc00236cbb7 0xc00236cbb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00236ccc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00236cce0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.831: INFO: Pod "nginx-deployment-85ddf47c5d-sh9m6" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sh9m6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-85ddf47c5d-sh9m6,UID:50922f37-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394096,Generation:0,CreationTimestamp:2020-05-19 11:38:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50880b19-99c5-11ea-99e8-0242ac110002 0xc00236cd57 0xc00236cd58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc00236cfd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00236cff0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:38:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:38:52 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.189,StartTime:2020-05-19 11:38:53 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-19 11:39:02 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://3ab7f54bfba6b6e9ed7d3b15cb1f98dc180c7bd177598f2780517a2007a2627f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.831: INFO: Pod "nginx-deployment-85ddf47c5d-svkjq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-svkjq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-85ddf47c5d-svkjq,UID:592b3118-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394235,Generation:0,CreationTimestamp:2020-05-19 11:39:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50880b19-99c5-11ea-99e8-0242ac110002 0xc00236d0b7 0xc00236d0b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00236d170} {node.kubernetes.io/unreachable Exists NoExecute 0xc00236d200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:07 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-19 11:39:07 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.831: INFO: Pod "nginx-deployment-85ddf47c5d-vt24d" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vt24d,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-85ddf47c5d-vt24d,UID:59400960-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394193,Generation:0,CreationTimestamp:2020-05-19 11:39:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50880b19-99c5-11ea-99e8-0242ac110002 0xc00236d457 0xc00236d458}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00236d700} {node.kubernetes.io/unreachable Exists NoExecute 0xc00236d720}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.832: INFO: Pod "nginx-deployment-85ddf47c5d-wcrfl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wcrfl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-85ddf47c5d-wcrfl,UID:595d3de1-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394216,Generation:0,CreationTimestamp:2020-05-19 11:39:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50880b19-99c5-11ea-99e8-0242ac110002 0xc00236d7f7 0xc00236d7f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023c01c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023c01e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.832: INFO: Pod "nginx-deployment-85ddf47c5d-xw5dt" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xw5dt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-85ddf47c5d-xw5dt,UID:5959af5b-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394197,Generation:0,CreationTimestamp:2020-05-19 11:39:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50880b19-99c5-11ea-99e8-0242ac110002 0xc0023c0257 0xc0023c0258}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0023c0650} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023c0670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:39:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 11:39:07.832: INFO: Pod "nginx-deployment-85ddf47c5d-zmm87" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zmm87,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8fsr6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8fsr6/pods/nginx-deployment-85ddf47c5d-zmm87,UID:508f9718-99c5-11ea-99e8-0242ac110002,ResourceVersion:11394065,Generation:0,CreationTimestamp:2020-05-19 11:38:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50880b19-99c5-11ea-99e8-0242ac110002 0xc0023c06e7 0xc0023c06e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9nfnm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9nfnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9nfnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023c0760} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023c07c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:38:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:38:59 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:38:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 11:38:52 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.186,StartTime:2020-05-19 11:38:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-19 11:38:59 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c3fb55f6cecf4f998b115680f8ad2e990d63b6f2b133209a2bfe3b29d00b1793}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:39:07.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-8fsr6" for this suite. 
May 19 11:39:30.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:39:30.076: INFO: namespace: e2e-tests-deployment-8fsr6, resource: bindings, ignored listing per whitelist May 19 11:39:30.128: INFO: namespace e2e-tests-deployment-8fsr6 deletion completed in 22.199629584s • [SLOW TEST:37.577 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:39:30.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:39:30.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-zm4qm" for this suite. 
May 19 11:39:52.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:39:52.817: INFO: namespace: e2e-tests-pods-zm4qm, resource: bindings, ignored listing per whitelist May 19 11:39:52.829: INFO: namespace e2e-tests-pods-zm4qm deletion completed in 22.335734272s • [SLOW TEST:22.701 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:39:52.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium May 19 11:39:52.948: INFO: Waiting up to 5m0s for pod "pod-74719e7d-99c5-11ea-abcb-0242ac110018" in namespace "e2e-tests-emptydir-rlp49" to be "success or failure" May 19 11:39:52.963: INFO: Pod "pod-74719e7d-99c5-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.877943ms May 19 11:39:54.966: INFO: Pod "pod-74719e7d-99c5-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018660406s May 19 11:39:56.970: INFO: Pod "pod-74719e7d-99c5-11ea-abcb-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.022699709s May 19 11:39:58.975: INFO: Pod "pod-74719e7d-99c5-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02717607s STEP: Saw pod success May 19 11:39:58.975: INFO: Pod "pod-74719e7d-99c5-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 11:39:58.979: INFO: Trying to get logs from node hunter-worker2 pod pod-74719e7d-99c5-11ea-abcb-0242ac110018 container test-container: STEP: delete the pod May 19 11:39:58.999: INFO: Waiting for pod pod-74719e7d-99c5-11ea-abcb-0242ac110018 to disappear May 19 11:39:59.003: INFO: Pod pod-74719e7d-99c5-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:39:59.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-rlp49" for this suite. 
May 19 11:40:05.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:40:05.028: INFO: namespace: e2e-tests-emptydir-rlp49, resource: bindings, ignored listing per whitelist May 19 11:40:05.085: INFO: namespace e2e-tests-emptydir-rlp49 deletion completed in 6.078016961s • [SLOW TEST:12.255 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:40:05.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 19 11:40:05.221: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 
11:40:11.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-4fk8r" for this suite. May 19 11:40:17.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:40:17.108: INFO: namespace: e2e-tests-init-container-4fk8r, resource: bindings, ignored listing per whitelist May 19 11:40:17.128: INFO: namespace e2e-tests-init-container-4fk8r deletion completed in 6.095463026s • [SLOW TEST:12.042 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:40:17.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-zkxf STEP: Creating a pod to test atomic-volume-subpath May 19 11:40:17.312: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-zkxf" in 
namespace "e2e-tests-subpath-kmh47" to be "success or failure" May 19 11:40:17.317: INFO: Pod "pod-subpath-test-downwardapi-zkxf": Phase="Pending", Reason="", readiness=false. Elapsed: 5.727518ms May 19 11:40:19.345: INFO: Pod "pod-subpath-test-downwardapi-zkxf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03348955s May 19 11:40:21.349: INFO: Pod "pod-subpath-test-downwardapi-zkxf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037405278s May 19 11:40:23.353: INFO: Pod "pod-subpath-test-downwardapi-zkxf": Phase="Running", Reason="", readiness=false. Elapsed: 6.041726445s May 19 11:40:25.357: INFO: Pod "pod-subpath-test-downwardapi-zkxf": Phase="Running", Reason="", readiness=false. Elapsed: 8.045305504s May 19 11:40:27.360: INFO: Pod "pod-subpath-test-downwardapi-zkxf": Phase="Running", Reason="", readiness=false. Elapsed: 10.048648555s May 19 11:40:29.365: INFO: Pod "pod-subpath-test-downwardapi-zkxf": Phase="Running", Reason="", readiness=false. Elapsed: 12.053241955s May 19 11:40:31.369: INFO: Pod "pod-subpath-test-downwardapi-zkxf": Phase="Running", Reason="", readiness=false. Elapsed: 14.057411485s May 19 11:40:33.373: INFO: Pod "pod-subpath-test-downwardapi-zkxf": Phase="Running", Reason="", readiness=false. Elapsed: 16.06172886s May 19 11:40:35.376: INFO: Pod "pod-subpath-test-downwardapi-zkxf": Phase="Running", Reason="", readiness=false. Elapsed: 18.064782619s May 19 11:40:37.381: INFO: Pod "pod-subpath-test-downwardapi-zkxf": Phase="Running", Reason="", readiness=false. Elapsed: 20.069401592s May 19 11:40:39.385: INFO: Pod "pod-subpath-test-downwardapi-zkxf": Phase="Running", Reason="", readiness=false. Elapsed: 22.073768913s May 19 11:40:41.445: INFO: Pod "pod-subpath-test-downwardapi-zkxf": Phase="Running", Reason="", readiness=false. Elapsed: 24.133443853s May 19 11:40:43.449: INFO: Pod "pod-subpath-test-downwardapi-zkxf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.13730145s STEP: Saw pod success May 19 11:40:43.449: INFO: Pod "pod-subpath-test-downwardapi-zkxf" satisfied condition "success or failure" May 19 11:40:43.452: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-zkxf container test-container-subpath-downwardapi-zkxf: STEP: delete the pod May 19 11:40:43.515: INFO: Waiting for pod pod-subpath-test-downwardapi-zkxf to disappear May 19 11:40:43.519: INFO: Pod pod-subpath-test-downwardapi-zkxf no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-zkxf May 19 11:40:43.519: INFO: Deleting pod "pod-subpath-test-downwardapi-zkxf" in namespace "e2e-tests-subpath-kmh47" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:40:43.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-kmh47" for this suite. May 19 11:40:49.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:40:49.590: INFO: namespace: e2e-tests-subpath-kmh47, resource: bindings, ignored listing per whitelist May 19 11:40:49.613: INFO: namespace e2e-tests-subpath-kmh47 deletion completed in 6.08982139s • [SLOW TEST:32.486 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:40:49.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 19 11:40:49.717: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 5.10773ms) May 19 11:40:49.721: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 4.041802ms) May 19 11:40:49.725: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.840128ms) May 19 11:40:49.728: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.728537ms) May 19 11:40:49.731: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.110358ms) May 19 11:40:49.734: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.992973ms) May 19 11:40:49.736: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.456203ms) May 19 11:40:49.739: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.881706ms) May 19 11:40:49.743: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.438721ms) May 19 11:40:49.746: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.173733ms) May 19 11:40:49.775: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 28.706797ms) May 19 11:40:49.779: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 4.507664ms) May 19 11:40:49.783: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.96875ms) May 19 11:40:49.786: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.26031ms) May 19 11:40:49.790: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.004748ms) May 19 11:40:49.793: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.135894ms) May 19 11:40:49.796: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.240086ms) May 19 11:40:49.800: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.609115ms) May 19 11:40:49.803: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.204989ms) May 19 11:40:49.806: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.212496ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:40:49.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-bhqml" for this suite. May 19 11:40:55.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:40:55.895: INFO: namespace: e2e-tests-proxy-bhqml, resource: bindings, ignored listing per whitelist May 19 11:40:55.936: INFO: namespace e2e-tests-proxy-bhqml deletion completed in 6.12629676s • [SLOW TEST:6.322 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:40:55.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs May 19 11:40:56.074: INFO: Waiting up to 5m0s for pod "pod-9a0d2dd1-99c5-11ea-abcb-0242ac110018" in 
namespace "e2e-tests-emptydir-fl89z" to be "success or failure" May 19 11:40:56.089: INFO: Pod "pod-9a0d2dd1-99c5-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 15.017874ms May 19 11:40:58.093: INFO: Pod "pod-9a0d2dd1-99c5-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018936128s May 19 11:41:00.096: INFO: Pod "pod-9a0d2dd1-99c5-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022856574s STEP: Saw pod success May 19 11:41:00.097: INFO: Pod "pod-9a0d2dd1-99c5-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 11:41:00.100: INFO: Trying to get logs from node hunter-worker pod pod-9a0d2dd1-99c5-11ea-abcb-0242ac110018 container test-container: STEP: delete the pod May 19 11:41:00.152: INFO: Waiting for pod pod-9a0d2dd1-99c5-11ea-abcb-0242ac110018 to disappear May 19 11:41:00.167: INFO: Pod pod-9a0d2dd1-99c5-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:41:00.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-fl89z" for this suite. 
May 19 11:41:06.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:41:06.253: INFO: namespace: e2e-tests-emptydir-fl89z, resource: bindings, ignored listing per whitelist May 19 11:41:06.260: INFO: namespace e2e-tests-emptydir-fl89z deletion completed in 6.088891545s • [SLOW TEST:10.324 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:41:06.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-8pvl8 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet May 19 11:41:06.434: INFO: Found 0 stateful pods, waiting 
for 3 May 19 11:41:16.440: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 19 11:41:16.440: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 19 11:41:16.440: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 19 11:41:26.438: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 19 11:41:26.438: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 19 11:41:26.438: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 19 11:41:26.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8pvl8 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 19 11:41:26.728: INFO: stderr: "I0519 11:41:26.588142 2114 log.go:172] (0xc000138160) (0xc0008a8500) Create stream\nI0519 11:41:26.588210 2114 log.go:172] (0xc000138160) (0xc0008a8500) Stream added, broadcasting: 1\nI0519 11:41:26.591025 2114 log.go:172] (0xc000138160) Reply frame received for 1\nI0519 11:41:26.591083 2114 log.go:172] (0xc000138160) (0xc0006d6000) Create stream\nI0519 11:41:26.591106 2114 log.go:172] (0xc000138160) (0xc0006d6000) Stream added, broadcasting: 3\nI0519 11:41:26.592054 2114 log.go:172] (0xc000138160) Reply frame received for 3\nI0519 11:41:26.592085 2114 log.go:172] (0xc000138160) (0xc000532c80) Create stream\nI0519 11:41:26.592101 2114 log.go:172] (0xc000138160) (0xc000532c80) Stream added, broadcasting: 5\nI0519 11:41:26.593056 2114 log.go:172] (0xc000138160) Reply frame received for 5\nI0519 11:41:26.718374 2114 log.go:172] (0xc000138160) Data frame received for 3\nI0519 11:41:26.718412 2114 log.go:172] (0xc0006d6000) (3) Data frame handling\nI0519 11:41:26.718426 2114 log.go:172] (0xc0006d6000) (3) Data frame sent\nI0519 11:41:26.718434 2114 log.go:172] 
(0xc000138160) Data frame received for 3\nI0519 11:41:26.718440 2114 log.go:172] (0xc0006d6000) (3) Data frame handling\nI0519 11:41:26.718529 2114 log.go:172] (0xc000138160) Data frame received for 5\nI0519 11:41:26.718542 2114 log.go:172] (0xc000532c80) (5) Data frame handling\nI0519 11:41:26.720449 2114 log.go:172] (0xc000138160) Data frame received for 1\nI0519 11:41:26.720468 2114 log.go:172] (0xc0008a8500) (1) Data frame handling\nI0519 11:41:26.720479 2114 log.go:172] (0xc0008a8500) (1) Data frame sent\nI0519 11:41:26.720645 2114 log.go:172] (0xc000138160) (0xc0008a8500) Stream removed, broadcasting: 1\nI0519 11:41:26.720838 2114 log.go:172] (0xc000138160) Go away received\nI0519 11:41:26.720933 2114 log.go:172] (0xc000138160) (0xc0008a8500) Stream removed, broadcasting: 1\nI0519 11:41:26.720968 2114 log.go:172] (0xc000138160) (0xc0006d6000) Stream removed, broadcasting: 3\nI0519 11:41:26.720988 2114 log.go:172] (0xc000138160) (0xc000532c80) Stream removed, broadcasting: 5\n" May 19 11:41:26.728: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 19 11:41:26.728: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 19 11:41:36.771: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 19 11:41:46.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8pvl8 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 19 11:41:47.146: INFO: stderr: "I0519 11:41:47.054091 2137 log.go:172] (0xc0008622c0) (0xc00077a640) Create stream\nI0519 11:41:47.054159 2137 log.go:172] (0xc0008622c0) (0xc00077a640) Stream added, broadcasting: 1\nI0519 11:41:47.056428 2137 log.go:172] (0xc0008622c0) Reply 
frame received for 1\nI0519 11:41:47.056468 2137 log.go:172] (0xc0008622c0) (0xc000688f00) Create stream\nI0519 11:41:47.056477 2137 log.go:172] (0xc0008622c0) (0xc000688f00) Stream added, broadcasting: 3\nI0519 11:41:47.058011 2137 log.go:172] (0xc0008622c0) Reply frame received for 3\nI0519 11:41:47.058054 2137 log.go:172] (0xc0008622c0) (0xc000686000) Create stream\nI0519 11:41:47.058065 2137 log.go:172] (0xc0008622c0) (0xc000686000) Stream added, broadcasting: 5\nI0519 11:41:47.059047 2137 log.go:172] (0xc0008622c0) Reply frame received for 5\nI0519 11:41:47.138482 2137 log.go:172] (0xc0008622c0) Data frame received for 3\nI0519 11:41:47.138512 2137 log.go:172] (0xc000688f00) (3) Data frame handling\nI0519 11:41:47.138531 2137 log.go:172] (0xc000688f00) (3) Data frame sent\nI0519 11:41:47.138818 2137 log.go:172] (0xc0008622c0) Data frame received for 3\nI0519 11:41:47.138832 2137 log.go:172] (0xc000688f00) (3) Data frame handling\nI0519 11:41:47.138870 2137 log.go:172] (0xc0008622c0) Data frame received for 5\nI0519 11:41:47.138891 2137 log.go:172] (0xc000686000) (5) Data frame handling\nI0519 11:41:47.141001 2137 log.go:172] (0xc0008622c0) Data frame received for 1\nI0519 11:41:47.141023 2137 log.go:172] (0xc00077a640) (1) Data frame handling\nI0519 11:41:47.141043 2137 log.go:172] (0xc00077a640) (1) Data frame sent\nI0519 11:41:47.141069 2137 log.go:172] (0xc0008622c0) (0xc00077a640) Stream removed, broadcasting: 1\nI0519 11:41:47.141089 2137 log.go:172] (0xc0008622c0) Go away received\nI0519 11:41:47.141504 2137 log.go:172] (0xc0008622c0) (0xc00077a640) Stream removed, broadcasting: 1\nI0519 11:41:47.141526 2137 log.go:172] (0xc0008622c0) (0xc000688f00) Stream removed, broadcasting: 3\nI0519 11:41:47.141535 2137 log.go:172] (0xc0008622c0) (0xc000686000) Stream removed, broadcasting: 5\n" May 19 11:41:47.147: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 19 11:41:47.147: INFO: stdout of mv -v /tmp/index.html 
/usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 19 11:41:57.167: INFO: Waiting for StatefulSet e2e-tests-statefulset-8pvl8/ss2 to complete update May 19 11:41:57.167: INFO: Waiting for Pod e2e-tests-statefulset-8pvl8/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 19 11:41:57.167: INFO: Waiting for Pod e2e-tests-statefulset-8pvl8/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 19 11:41:57.167: INFO: Waiting for Pod e2e-tests-statefulset-8pvl8/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 19 11:42:07.176: INFO: Waiting for StatefulSet e2e-tests-statefulset-8pvl8/ss2 to complete update May 19 11:42:07.176: INFO: Waiting for Pod e2e-tests-statefulset-8pvl8/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 19 11:42:07.176: INFO: Waiting for Pod e2e-tests-statefulset-8pvl8/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 19 11:42:17.176: INFO: Waiting for StatefulSet e2e-tests-statefulset-8pvl8/ss2 to complete update May 19 11:42:17.176: INFO: Waiting for Pod e2e-tests-statefulset-8pvl8/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision May 19 11:42:27.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8pvl8 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 19 11:42:27.539: INFO: stderr: "I0519 11:42:27.304939 2159 log.go:172] (0xc000138580) (0xc0005a7220) Create stream\nI0519 11:42:27.305722 2159 log.go:172] (0xc000138580) (0xc0005a7220) Stream added, broadcasting: 1\nI0519 11:42:27.308262 2159 log.go:172] (0xc000138580) Reply frame received for 1\nI0519 11:42:27.308322 2159 log.go:172] (0xc000138580) (0xc0006e2000) Create stream\nI0519 11:42:27.308340 2159 log.go:172] (0xc000138580) (0xc0006e2000) Stream added, broadcasting: 3\nI0519 
11:42:27.309425 2159 log.go:172] (0xc000138580) Reply frame received for 3\nI0519 11:42:27.309481 2159 log.go:172] (0xc000138580) (0xc0005a72c0) Create stream\nI0519 11:42:27.309503 2159 log.go:172] (0xc000138580) (0xc0005a72c0) Stream added, broadcasting: 5\nI0519 11:42:27.310412 2159 log.go:172] (0xc000138580) Reply frame received for 5\nI0519 11:42:27.532227 2159 log.go:172] (0xc000138580) Data frame received for 3\nI0519 11:42:27.532251 2159 log.go:172] (0xc0006e2000) (3) Data frame handling\nI0519 11:42:27.532259 2159 log.go:172] (0xc0006e2000) (3) Data frame sent\nI0519 11:42:27.532263 2159 log.go:172] (0xc000138580) Data frame received for 3\nI0519 11:42:27.532267 2159 log.go:172] (0xc0006e2000) (3) Data frame handling\nI0519 11:42:27.532285 2159 log.go:172] (0xc000138580) Data frame received for 5\nI0519 11:42:27.532289 2159 log.go:172] (0xc0005a72c0) (5) Data frame handling\nI0519 11:42:27.533964 2159 log.go:172] (0xc000138580) Data frame received for 1\nI0519 11:42:27.533981 2159 log.go:172] (0xc0005a7220) (1) Data frame handling\nI0519 11:42:27.533989 2159 log.go:172] (0xc0005a7220) (1) Data frame sent\nI0519 11:42:27.534055 2159 log.go:172] (0xc000138580) (0xc0005a7220) Stream removed, broadcasting: 1\nI0519 11:42:27.534095 2159 log.go:172] (0xc000138580) Go away received\nI0519 11:42:27.534326 2159 log.go:172] (0xc000138580) (0xc0005a7220) Stream removed, broadcasting: 1\nI0519 11:42:27.534360 2159 log.go:172] (0xc000138580) (0xc0006e2000) Stream removed, broadcasting: 3\nI0519 11:42:27.534385 2159 log.go:172] (0xc000138580) (0xc0005a72c0) Stream removed, broadcasting: 5\n" May 19 11:42:27.539: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 19 11:42:27.539: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 19 11:42:37.592: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 19 11:42:47.659: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8pvl8 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 19 11:42:47.850: INFO: stderr: "I0519 11:42:47.782643 2181 log.go:172] (0xc000138630) (0xc00065d9a0) Create stream\nI0519 11:42:47.782702 2181 log.go:172] (0xc000138630) (0xc00065d9a0) Stream added, broadcasting: 1\nI0519 11:42:47.785382 2181 log.go:172] (0xc000138630) Reply frame received for 1\nI0519 11:42:47.785424 2181 log.go:172] (0xc000138630) (0xc0003ba640) Create stream\nI0519 11:42:47.785434 2181 log.go:172] (0xc000138630) (0xc0003ba640) Stream added, broadcasting: 3\nI0519 11:42:47.786262 2181 log.go:172] (0xc000138630) Reply frame received for 3\nI0519 11:42:47.786310 2181 log.go:172] (0xc000138630) (0xc000530000) Create stream\nI0519 11:42:47.786324 2181 log.go:172] (0xc000138630) (0xc000530000) Stream added, broadcasting: 5\nI0519 11:42:47.787157 2181 log.go:172] (0xc000138630) Reply frame received for 5\nI0519 11:42:47.844115 2181 log.go:172] (0xc000138630) Data frame received for 5\nI0519 11:42:47.844159 2181 log.go:172] (0xc000530000) (5) Data frame handling\nI0519 11:42:47.844192 2181 log.go:172] (0xc000138630) Data frame received for 3\nI0519 11:42:47.844216 2181 log.go:172] (0xc0003ba640) (3) Data frame handling\nI0519 11:42:47.844226 2181 log.go:172] (0xc0003ba640) (3) Data frame sent\nI0519 11:42:47.844250 2181 log.go:172] (0xc000138630) Data frame received for 3\nI0519 11:42:47.844257 2181 log.go:172] (0xc0003ba640) (3) Data frame handling\nI0519 11:42:47.845442 2181 log.go:172] (0xc000138630) Data frame received for 1\nI0519 11:42:47.845470 2181 log.go:172] (0xc00065d9a0) (1) Data frame handling\nI0519 11:42:47.845497 2181 log.go:172] (0xc00065d9a0) (1) Data frame sent\nI0519 11:42:47.845514 2181 log.go:172] (0xc000138630) (0xc00065d9a0) Stream removed, broadcasting: 1\nI0519 11:42:47.845538 2181 log.go:172] (0xc000138630) Go away received\nI0519 
11:42:47.845710 2181 log.go:172] (0xc000138630) (0xc00065d9a0) Stream removed, broadcasting: 1\nI0519 11:42:47.845731 2181 log.go:172] (0xc000138630) (0xc0003ba640) Stream removed, broadcasting: 3\nI0519 11:42:47.845750 2181 log.go:172] (0xc000138630) (0xc000530000) Stream removed, broadcasting: 5\n" May 19 11:42:47.850: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 19 11:42:47.850: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 19 11:43:07.871: INFO: Waiting for StatefulSet e2e-tests-statefulset-8pvl8/ss2 to complete update May 19 11:43:07.871: INFO: Waiting for Pod e2e-tests-statefulset-8pvl8/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 19 11:43:17.880: INFO: Deleting all statefulset in ns e2e-tests-statefulset-8pvl8 May 19 11:43:17.883: INFO: Scaling statefulset ss2 to 0 May 19 11:43:47.905: INFO: Waiting for statefulset status.replicas updated to 0 May 19 11:43:47.908: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:43:47.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-8pvl8" for this suite. 
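The rolling-update-and-rollback sequence above exercises the StatefulSet `RollingUpdate` strategy: pods are replaced in reverse ordinal order (ss2-2 first, ss2-0 last), and the image is flipped between the two tags shown in the log. A minimal sketch of the `ss2` object under test, with the names and image tags taken from the log and everything else assumed, might look like:

```yaml
# Hypothetical reconstruction of the "ss2" StatefulSet from this test.
# The framework deploys nginx:1.14-alpine, patches the template to
# 1.15-alpine (creating revision ss2-7c9b54fd4c), then rolls back to
# the prior revision; pods update in reverse ordinal order.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test          # headless Service created in BeforeEach
  replicas: 3
  selector:
    matchLabels:
      app: ss2               # label assumed
  updateStrategy:
    type: RollingUpdate      # partition defaults to 0: update all pods
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

Setting `spec.updateStrategy.rollingUpdate.partition` to an ordinal N would instead hold pods with ordinal < N at the old revision, which is the mechanism behind the canary/phased-update variant of this test.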
May 19 11:43:57.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:43:58.023: INFO: namespace: e2e-tests-statefulset-8pvl8, resource: bindings, ignored listing per whitelist May 19 11:43:58.121: INFO: namespace e2e-tests-statefulset-8pvl8 deletion completed in 10.196871878s • [SLOW TEST:171.861 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:43:58.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 19 11:44:20.345: INFO: Container started at 2020-05-19 11:44:02 +0000 UTC, pod became ready at 2020-05-19 11:44:18 +0000 UTC [AfterEach] 
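The readiness-probe test above passes because the container started at 11:44:02 but the pod only became Ready at 11:44:18, i.e. readiness lagged start by roughly the probe's initial delay, and no restart occurred. A hedged sketch of such a pod (image and probe values assumed; they are not printed in the log):

```yaml
# Hypothetical pod matching "readiness probe should not be ready before
# initial delay and never restart": the probe's initialDelaySeconds keeps
# the pod NotReady for a window after container start, and a readiness
# probe (unlike a liveness probe) never restarts the container.
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver
spec:
  containers:
  - name: test-webserver
    image: gcr.io/kubernetes-e2e-test-images/test-webserver:1.0  # assumed
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15   # assumed; actual value not in the log
      periodSeconds: 5          # assumed
```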
[k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:44:20.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-sdxbw" for this suite. May 19 11:44:42.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:44:42.425: INFO: namespace: e2e-tests-container-probe-sdxbw, resource: bindings, ignored listing per whitelist May 19 11:44:42.445: INFO: namespace e2e-tests-container-probe-sdxbw deletion completed in 22.098090237s • [SLOW TEST:44.324 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:44:42.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:44:46.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-fxd9c" for this suite. May 19 11:45:36.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:45:36.673: INFO: namespace: e2e-tests-kubelet-test-fxd9c, resource: bindings, ignored listing per whitelist May 19 11:45:36.715: INFO: namespace e2e-tests-kubelet-test-fxd9c deletion completed in 50.090560709s • [SLOW TEST:54.269 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:45:36.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 19 11:45:44.987: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 19 11:45:44.994: INFO: Pod pod-with-prestop-http-hook still exists May 19 11:45:46.994: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 19 11:45:46.997: INFO: Pod pod-with-prestop-http-hook still exists May 19 11:45:48.994: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 19 11:45:48.998: INFO: Pod pod-with-prestop-http-hook still exists May 19 11:45:50.994: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 19 11:45:50.998: INFO: Pod pod-with-prestop-http-hook still exists May 19 11:45:52.994: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 19 11:45:52.998: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:45:53.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-4x5qm" for this suite. 
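The lifecycle-hook test above deletes `pod-with-prestop-http-hook` and then polls until it disappears, verifying along the way that the kubelet delivered the preStop HTTP GET to the handler pod created in BeforeEach. A minimal sketch of the hooked pod (pod name from the log; image, path, and port are assumptions):

```yaml
# Hypothetical pod for "should execute prestop http hook properly".
# On deletion, the kubelet issues the preStop httpGet before sending
# SIGTERM to the container; the test's "check prestop hook" step then
# confirms the handler pod received the request.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: docker.io/library/nginx:1.14-alpine   # image assumed
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop-hook   # path assumed
          port: 8080                     # port assumed
```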
May 19 11:46:15.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:46:15.167: INFO: namespace: e2e-tests-container-lifecycle-hook-4x5qm, resource: bindings, ignored listing per whitelist May 19 11:46:15.167: INFO: namespace e2e-tests-container-lifecycle-hook-4x5qm deletion completed in 22.155068533s • [SLOW TEST:38.451 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:46:15.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 19 11:46:15.261: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-5851c41a-99c6-11ea-abcb-0242ac110018" in namespace "e2e-tests-projected-77cvv" to be "success or failure" May 19 11:46:15.278: INFO: Pod "downwardapi-volume-5851c41a-99c6-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 17.101027ms May 19 11:46:17.282: INFO: Pod "downwardapi-volume-5851c41a-99c6-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021215806s May 19 11:46:19.426: INFO: Pod "downwardapi-volume-5851c41a-99c6-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.165494317s May 19 11:46:21.430: INFO: Pod "downwardapi-volume-5851c41a-99c6-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.169202938s STEP: Saw pod success May 19 11:46:21.430: INFO: Pod "downwardapi-volume-5851c41a-99c6-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 11:46:21.433: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-5851c41a-99c6-11ea-abcb-0242ac110018 container client-container: STEP: delete the pod May 19 11:46:21.486: INFO: Waiting for pod downwardapi-volume-5851c41a-99c6-11ea-abcb-0242ac110018 to disappear May 19 11:46:21.504: INFO: Pod downwardapi-volume-5851c41a-99c6-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:46:21.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-77cvv" for this suite. 
May 19 11:46:27.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:46:27.538: INFO: namespace: e2e-tests-projected-77cvv, resource: bindings, ignored listing per whitelist May 19 11:46:27.583: INFO: namespace e2e-tests-projected-77cvv deletion completed in 6.076082091s • [SLOW TEST:12.416 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:46:27.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-5fc41606-99c6-11ea-abcb-0242ac110018 STEP: Creating a pod to test consume configMaps May 19 11:46:27.767: INFO: Waiting up to 5m0s for pod "pod-configmaps-5fc5eb12-99c6-11ea-abcb-0242ac110018" in namespace "e2e-tests-configmap-qvknz" to be "success or failure" May 19 11:46:27.772: INFO: Pod "pod-configmaps-5fc5eb12-99c6-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.299318ms May 19 11:46:29.776: INFO: Pod "pod-configmaps-5fc5eb12-99c6-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008304275s May 19 11:46:31.791: INFO: Pod "pod-configmaps-5fc5eb12-99c6-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023926174s STEP: Saw pod success May 19 11:46:31.791: INFO: Pod "pod-configmaps-5fc5eb12-99c6-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 11:46:31.795: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-5fc5eb12-99c6-11ea-abcb-0242ac110018 container configmap-volume-test: STEP: delete the pod May 19 11:46:31.815: INFO: Waiting for pod pod-configmaps-5fc5eb12-99c6-11ea-abcb-0242ac110018 to disappear May 19 11:46:31.832: INFO: Pod pod-configmaps-5fc5eb12-99c6-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:46:31.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-qvknz" for this suite. 
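The ConfigMap-volume test above follows the same Pending → Succeeded pattern: a ConfigMap is created, mounted as a volume, and a short-lived container prints a key's contents so the test can verify them from the logs. A minimal sketch (ConfigMap name pattern from the log; data and image are assumptions):

```yaml
# Hypothetical objects for "should be consumable from pods in volume":
# each ConfigMap key becomes a file under the volume's mountPath.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume
data:
  data-1: value-1                # contents assumed
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed
    args: ["--file_content=/etc/configmap-volume/data-1"]    # assumed
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
```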
May 19 11:46:37.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:46:37.905: INFO: namespace: e2e-tests-configmap-qvknz, resource: bindings, ignored listing per whitelist May 19 11:46:37.926: INFO: namespace e2e-tests-configmap-qvknz deletion completed in 6.091328332s • [SLOW TEST:10.343 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:46:37.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 19 11:46:38.201: INFO: Waiting up to 5m0s for pod "downwardapi-volume-65f04d10-99c6-11ea-abcb-0242ac110018" in namespace "e2e-tests-downward-api-77j6v" to be "success or failure" May 19 11:46:38.284: INFO: Pod "downwardapi-volume-65f04d10-99c6-11ea-abcb-0242ac110018": 
Phase="Pending", Reason="", readiness=false. Elapsed: 82.256805ms May 19 11:46:40.408: INFO: Pod "downwardapi-volume-65f04d10-99c6-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206386156s May 19 11:46:42.412: INFO: Pod "downwardapi-volume-65f04d10-99c6-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.210825354s STEP: Saw pod success May 19 11:46:42.412: INFO: Pod "downwardapi-volume-65f04d10-99c6-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 11:46:42.415: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-65f04d10-99c6-11ea-abcb-0242ac110018 container client-container: STEP: delete the pod May 19 11:46:42.852: INFO: Waiting for pod downwardapi-volume-65f04d10-99c6-11ea-abcb-0242ac110018 to disappear May 19 11:46:42.911: INFO: Pod downwardapi-volume-65f04d10-99c6-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:46:42.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-77j6v" for this suite. 
May 19 11:46:48.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:46:48.976: INFO: namespace: e2e-tests-downward-api-77j6v, resource: bindings, ignored listing per whitelist May 19 11:46:49.014: INFO: namespace e2e-tests-downward-api-77j6v deletion completed in 6.099887984s • [SLOW TEST:11.087 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:46:49.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller May 19 11:46:49.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8dlvc' May 19 11:46:49.802: INFO: stderr: "" May 19 11:46:49.802: 
INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 19 11:46:49.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8dlvc' May 19 11:46:50.203: INFO: stderr: "" May 19 11:46:50.203: INFO: stdout: "update-demo-nautilus-8nfnt update-demo-nautilus-h79sk " May 19 11:46:50.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nfnt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8dlvc' May 19 11:46:50.386: INFO: stderr: "" May 19 11:46:50.386: INFO: stdout: "" May 19 11:46:50.386: INFO: update-demo-nautilus-8nfnt is created but not running May 19 11:46:55.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8dlvc' May 19 11:46:55.530: INFO: stderr: "" May 19 11:46:55.530: INFO: stdout: "update-demo-nautilus-8nfnt update-demo-nautilus-h79sk " May 19 11:46:55.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nfnt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8dlvc' May 19 11:46:55.646: INFO: stderr: "" May 19 11:46:55.646: INFO: stdout: "true" May 19 11:46:55.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nfnt -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8dlvc' May 19 11:46:55.746: INFO: stderr: "" May 19 11:46:55.746: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 19 11:46:55.746: INFO: validating pod update-demo-nautilus-8nfnt May 19 11:46:55.750: INFO: got data: { "image": "nautilus.jpg" } May 19 11:46:55.750: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 19 11:46:55.750: INFO: update-demo-nautilus-8nfnt is verified up and running May 19 11:46:55.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h79sk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8dlvc' May 19 11:46:55.856: INFO: stderr: "" May 19 11:46:55.856: INFO: stdout: "true" May 19 11:46:55.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h79sk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8dlvc' May 19 11:46:55.951: INFO: stderr: "" May 19 11:46:55.951: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 19 11:46:55.951: INFO: validating pod update-demo-nautilus-h79sk May 19 11:46:55.954: INFO: got data: { "image": "nautilus.jpg" } May 19 11:46:55.954: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 19 11:46:55.954: INFO: update-demo-nautilus-h79sk is verified up and running STEP: using delete to clean up resources May 19 11:46:55.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8dlvc' May 19 11:46:56.060: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 11:46:56.060: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 19 11:46:56.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-8dlvc' May 19 11:46:56.446: INFO: stderr: "No resources found.\n" May 19 11:46:56.446: INFO: stdout: "" May 19 11:46:56.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-8dlvc -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 19 11:46:56.820: INFO: stderr: "" May 19 11:46:56.820: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:46:56.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8dlvc" for this suite. 
May 19 11:47:21.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:47:21.291: INFO: namespace: e2e-tests-kubectl-8dlvc, resource: bindings, ignored listing per whitelist May 19 11:47:21.291: INFO: namespace e2e-tests-kubectl-8dlvc deletion completed in 24.457931661s • [SLOW TEST:32.276 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:47:21.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 19 11:47:21.405: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7fbe1529-99c6-11ea-abcb-0242ac110018" in namespace "e2e-tests-projected-ldg7b" to be "success or failure" May 19 11:47:21.410: INFO: Pod 
"downwardapi-volume-7fbe1529-99c6-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163574ms May 19 11:47:23.413: INFO: Pod "downwardapi-volume-7fbe1529-99c6-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008039848s May 19 11:47:25.516: INFO: Pod "downwardapi-volume-7fbe1529-99c6-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.110175418s STEP: Saw pod success May 19 11:47:25.516: INFO: Pod "downwardapi-volume-7fbe1529-99c6-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 11:47:25.518: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-7fbe1529-99c6-11ea-abcb-0242ac110018 container client-container: STEP: delete the pod May 19 11:47:25.648: INFO: Waiting for pod downwardapi-volume-7fbe1529-99c6-11ea-abcb-0242ac110018 to disappear May 19 11:47:25.665: INFO: Pod downwardapi-volume-7fbe1529-99c6-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:47:25.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-ldg7b" for this suite. 
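The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` / `Elapsed: ...` lines above come from a simple poll loop in the test framework. A minimal sketch of that pattern (function and parameter names are mine, not the framework's):

```python
import time

def wait_for_phase(get_phase, want=("Succeeded", "Failed"),
                   timeout=300.0, interval=2.0, sleep=time.sleep):
    """Poll a pod's phase until it reaches a terminal phase or the
    timeout (5m0s in the log) elapses."""
    elapsed = 0.0
    while True:
        phase = get_phase()
        if phase in want:
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.0f}s")
        sleep(interval)
        elapsed += interval

# Simulated phase sequence matching the log: Pending, Pending, Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_phase(lambda: next(phases), sleep=lambda s: None)
```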
May 19 11:47:31.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:47:31.812: INFO: namespace: e2e-tests-projected-ldg7b, resource: bindings, ignored listing per whitelist May 19 11:47:31.818: INFO: namespace e2e-tests-projected-ldg7b deletion completed in 6.151198245s • [SLOW TEST:10.527 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:47:31.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: Gathering metrics W0519 11:47:33.007556 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 19 11:47:33.007: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:47:33.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-8knll" for this suite. 
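The garbage collector test above deletes a Deployment without orphaning and then waits for its ReplicaSet and Pods to be cascaded away. A toy model of that owner-reference cascade (the UIDs and object shapes below are illustrative, not the real API objects):

```python
def collect_garbage(objects, deleted_owner_uid):
    """Sketch of non-orphaning deletion: any object all of whose owners
    have been deleted is itself deleted, cascading
    Deployment -> ReplicaSet -> Pods as in the test above."""
    deleted = {deleted_owner_uid}
    changed = True
    while changed:
        changed = False
        for obj in objects:
            owners = obj.get("ownerReferences", [])
            if obj["uid"] not in deleted and owners and all(
                    o in deleted for o in owners):
                deleted.add(obj["uid"])
                changed = True
    return deleted - {deleted_owner_uid}

# Hypothetical graph: a Deployment owning an RS owning two Pods.
objs = [
    {"uid": "rs-1", "ownerReferences": ["deploy-1"]},
    {"uid": "pod-1", "ownerReferences": ["rs-1"]},
    {"uid": "pod-2", "ownerReferences": ["rs-1"]},
]
garbage = collect_garbage(objs, "deploy-1")
```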
May 19 11:47:41.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:47:41.239: INFO: namespace: e2e-tests-gc-8knll, resource: bindings, ignored listing per whitelist May 19 11:47:41.258: INFO: namespace e2e-tests-gc-8knll deletion completed in 8.247490931s • [SLOW TEST:9.439 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:47:41.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-kr8pt.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-kr8pt.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-kr8pt.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-kr8pt.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-kr8pt.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-kr8pt.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 19 11:47:52.534: INFO: DNS probes using e2e-tests-dns-kr8pt/dns-test-8c1c8b3f-99c6-11ea-abcb-0242ac110018 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:47:52.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-kr8pt" for this suite. 
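In the probe scripts above, the `hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4"..."}'` pipeline derives the pod's own DNS A record from its IPv4 address by turning dots into dashes. An equivalent sketch (namespace value taken from the log; cluster domain assumed to be the default):

```python
def pod_a_record(pod_ip, namespace, cluster_domain="cluster.local"):
    """Build a pod's DNS A record name: dashes replace the dots of the
    IPv4 address, followed by <namespace>.pod.<cluster-domain>."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"

rec = pod_a_record("10.244.2.237", "e2e-tests-dns-kr8pt")
```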
May 19 11:48:01.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:48:01.089: INFO: namespace: e2e-tests-dns-kr8pt, resource: bindings, ignored listing per whitelist May 19 11:48:01.113: INFO: namespace e2e-tests-dns-kr8pt deletion completed in 8.147940337s • [SLOW TEST:19.856 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:48:01.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-977aa00a-99c6-11ea-abcb-0242ac110018 STEP: Creating a pod to test consume secrets May 19 11:48:01.250: INFO: Waiting up to 5m0s for pod "pod-secrets-977b2dfa-99c6-11ea-abcb-0242ac110018" in namespace "e2e-tests-secrets-57clf" to be "success or failure" May 19 11:48:01.265: INFO: Pod "pod-secrets-977b2dfa-99c6-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 15.050182ms May 19 11:48:03.268: INFO: Pod "pod-secrets-977b2dfa-99c6-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.018636494s May 19 11:48:05.361: INFO: Pod "pod-secrets-977b2dfa-99c6-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111442638s May 19 11:48:07.366: INFO: Pod "pod-secrets-977b2dfa-99c6-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.115750153s STEP: Saw pod success May 19 11:48:07.366: INFO: Pod "pod-secrets-977b2dfa-99c6-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 11:48:07.368: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-977b2dfa-99c6-11ea-abcb-0242ac110018 container secret-env-test: STEP: delete the pod May 19 11:48:07.459: INFO: Waiting for pod pod-secrets-977b2dfa-99c6-11ea-abcb-0242ac110018 to disappear May 19 11:48:07.485: INFO: Pod pod-secrets-977b2dfa-99c6-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:48:07.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-57clf" for this suite. 
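The secrets-in-env-vars test above relies on Secret values being stored base64-encoded in `data` and handed to the container decoded. A sketch of that decode step (the key and value below are hypothetical; the test's actual payload is not logged):

```python
import base64

def secret_env(secret_data):
    """Decode a Secret's base64-encoded `data` map into the plain
    strings a container would see as environment variable values."""
    return {k: base64.b64decode(v).decode() for k, v in secret_data.items()}

env = secret_env({"SECRET_DATA": base64.b64encode(b"value-1").decode()})
```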
May 19 11:48:13.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:48:13.516: INFO: namespace: e2e-tests-secrets-57clf, resource: bindings, ignored listing per whitelist May 19 11:48:13.569: INFO: namespace e2e-tests-secrets-57clf deletion completed in 6.081458231s • [SLOW TEST:12.456 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:48:13.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 19 11:48:13.879: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9ef4b707-99c6-11ea-abcb-0242ac110018" in namespace "e2e-tests-projected-cx5wc" to be "success or failure" May 19 11:48:13.896: INFO: Pod 
"downwardapi-volume-9ef4b707-99c6-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.631857ms May 19 11:48:16.158: INFO: Pod "downwardapi-volume-9ef4b707-99c6-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.278487231s May 19 11:48:18.162: INFO: Pod "downwardapi-volume-9ef4b707-99c6-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.282714179s May 19 11:48:20.166: INFO: Pod "downwardapi-volume-9ef4b707-99c6-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.286303285s STEP: Saw pod success May 19 11:48:20.166: INFO: Pod "downwardapi-volume-9ef4b707-99c6-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 11:48:20.168: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-9ef4b707-99c6-11ea-abcb-0242ac110018 container client-container: STEP: delete the pod May 19 11:48:20.228: INFO: Waiting for pod downwardapi-volume-9ef4b707-99c6-11ea-abcb-0242ac110018 to disappear May 19 11:48:20.266: INFO: Pod downwardapi-volume-9ef4b707-99c6-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:48:20.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-cx5wc" for this suite. 
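The test above checks the downward API's defaulting behavior for `limits.memory`: when the container sets no memory limit, the node's allocatable memory is reported instead. That rule reduces to a one-line fallback (values below are made up):

```python
def effective_memory_limit(container_limit, node_allocatable):
    """limits.memory as the downward API reports it: the container's own
    limit if set, otherwise the node's allocatable memory."""
    return container_limit if container_limit is not None else node_allocatable

node_alloc = 4 * 1024**3          # hypothetical 4 GiB allocatable
limit = effective_memory_limit(None, node_alloc)  # no limit set
```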
May 19 11:48:26.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:48:26.331: INFO: namespace: e2e-tests-projected-cx5wc, resource: bindings, ignored listing per whitelist May 19 11:48:26.376: INFO: namespace e2e-tests-projected-cx5wc deletion completed in 6.105575641s • [SLOW TEST:12.806 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:48:26.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-8p7vg STEP: creating a selector STEP: Creating the service pods in kubernetes May 19 11:48:26.530: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 19 11:48:52.767: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.237:8080/hostName | grep -v 
'^\s*$'] Namespace:e2e-tests-pod-network-test-8p7vg PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 11:48:52.767: INFO: >>> kubeConfig: /root/.kube/config I0519 11:48:52.806792 6 log.go:172] (0xc001962420) (0xc001ef70e0) Create stream I0519 11:48:52.806977 6 log.go:172] (0xc001962420) (0xc001ef70e0) Stream added, broadcasting: 1 I0519 11:48:52.808736 6 log.go:172] (0xc001962420) Reply frame received for 1 I0519 11:48:52.808777 6 log.go:172] (0xc001962420) (0xc002897900) Create stream I0519 11:48:52.808787 6 log.go:172] (0xc001962420) (0xc002897900) Stream added, broadcasting: 3 I0519 11:48:52.809757 6 log.go:172] (0xc001962420) Reply frame received for 3 I0519 11:48:52.809791 6 log.go:172] (0xc001962420) (0xc0028b2960) Create stream I0519 11:48:52.809802 6 log.go:172] (0xc001962420) (0xc0028b2960) Stream added, broadcasting: 5 I0519 11:48:52.810547 6 log.go:172] (0xc001962420) Reply frame received for 5 I0519 11:48:52.879783 6 log.go:172] (0xc001962420) Data frame received for 3 I0519 11:48:52.879827 6 log.go:172] (0xc002897900) (3) Data frame handling I0519 11:48:52.879845 6 log.go:172] (0xc002897900) (3) Data frame sent I0519 11:48:52.879853 6 log.go:172] (0xc001962420) Data frame received for 3 I0519 11:48:52.879858 6 log.go:172] (0xc002897900) (3) Data frame handling I0519 11:48:52.880121 6 log.go:172] (0xc001962420) Data frame received for 5 I0519 11:48:52.880136 6 log.go:172] (0xc0028b2960) (5) Data frame handling I0519 11:48:52.881765 6 log.go:172] (0xc001962420) Data frame received for 1 I0519 11:48:52.881809 6 log.go:172] (0xc001ef70e0) (1) Data frame handling I0519 11:48:52.881828 6 log.go:172] (0xc001ef70e0) (1) Data frame sent I0519 11:48:52.881846 6 log.go:172] (0xc001962420) (0xc001ef70e0) Stream removed, broadcasting: 1 I0519 11:48:52.881882 6 log.go:172] (0xc001962420) Go away received I0519 11:48:52.882033 6 log.go:172] (0xc001962420) (0xc001ef70e0) Stream removed, 
broadcasting: 1 I0519 11:48:52.882075 6 log.go:172] (0xc001962420) (0xc002897900) Stream removed, broadcasting: 3 I0519 11:48:52.882087 6 log.go:172] (0xc001962420) (0xc0028b2960) Stream removed, broadcasting: 5 May 19 11:48:52.882: INFO: Found all expected endpoints: [netserver-0] May 19 11:48:52.885: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.217:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-8p7vg PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 11:48:52.885: INFO: >>> kubeConfig: /root/.kube/config I0519 11:48:52.913603 6 log.go:172] (0xc0000ebd90) (0xc0015940a0) Create stream I0519 11:48:52.913639 6 log.go:172] (0xc0000ebd90) (0xc0015940a0) Stream added, broadcasting: 1 I0519 11:48:52.915213 6 log.go:172] (0xc0000ebd90) Reply frame received for 1 I0519 11:48:52.915262 6 log.go:172] (0xc0000ebd90) (0xc002412000) Create stream I0519 11:48:52.915275 6 log.go:172] (0xc0000ebd90) (0xc002412000) Stream added, broadcasting: 3 I0519 11:48:52.916119 6 log.go:172] (0xc0000ebd90) Reply frame received for 3 I0519 11:48:52.916141 6 log.go:172] (0xc0000ebd90) (0xc001594140) Create stream I0519 11:48:52.916151 6 log.go:172] (0xc0000ebd90) (0xc001594140) Stream added, broadcasting: 5 I0519 11:48:52.916925 6 log.go:172] (0xc0000ebd90) Reply frame received for 5 I0519 11:48:52.982318 6 log.go:172] (0xc0000ebd90) Data frame received for 3 I0519 11:48:52.982360 6 log.go:172] (0xc002412000) (3) Data frame handling I0519 11:48:52.982391 6 log.go:172] (0xc002412000) (3) Data frame sent I0519 11:48:52.982637 6 log.go:172] (0xc0000ebd90) Data frame received for 3 I0519 11:48:52.982683 6 log.go:172] (0xc002412000) (3) Data frame handling I0519 11:48:52.982725 6 log.go:172] (0xc0000ebd90) Data frame received for 5 I0519 11:48:52.982771 6 log.go:172] (0xc001594140) (5) Data frame handling I0519 11:48:52.984299 6 
log.go:172] (0xc0000ebd90) Data frame received for 1 I0519 11:48:52.984331 6 log.go:172] (0xc0015940a0) (1) Data frame handling I0519 11:48:52.984378 6 log.go:172] (0xc0015940a0) (1) Data frame sent I0519 11:48:52.984407 6 log.go:172] (0xc0000ebd90) (0xc0015940a0) Stream removed, broadcasting: 1 I0519 11:48:52.984435 6 log.go:172] (0xc0000ebd90) Go away received I0519 11:48:52.984571 6 log.go:172] (0xc0000ebd90) (0xc0015940a0) Stream removed, broadcasting: 1 I0519 11:48:52.984604 6 log.go:172] (0xc0000ebd90) (0xc002412000) Stream removed, broadcasting: 3 I0519 11:48:52.984630 6 log.go:172] (0xc0000ebd90) (0xc001594140) Stream removed, broadcasting: 5 May 19 11:48:52.984: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:48:52.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-8p7vg" for this suite. 
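The connectivity check above pipes `curl .../hostName` through `grep -v '^\s*$'`, so an empty or whitespace-only response produces no output and the endpoint is not counted. A Python equivalent of that blank-line filter:

```python
import re

def non_blank_lines(text):
    """Equivalent of `grep -v '^\\s*$'`: keep only lines containing
    non-whitespace, so an empty hostName reply yields nothing."""
    return [ln for ln in text.splitlines() if not re.fullmatch(r"\s*", ln)]

hits = non_blank_lines("netserver-0\n   \n\n")
```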
May 19 11:49:15.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:49:15.045: INFO: namespace: e2e-tests-pod-network-test-8p7vg, resource: bindings, ignored listing per whitelist May 19 11:49:15.062: INFO: namespace e2e-tests-pod-network-test-8p7vg deletion completed in 22.072752597s • [SLOW TEST:48.686 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:49:15.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 19 11:49:15.244: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-hzwvb' May 19 11:49:19.302: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 19 11:49:19.302: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created May 19 11:49:19.319: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 19 11:49:19.402: INFO: scanned /root for discovery docs: May 19 11:49:19.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-hzwvb' May 19 11:49:36.620: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 19 11:49:36.620: INFO: stdout: "Created e2e-test-nginx-rc-dedc80563a1c4f6991b2afcb648aa4d0\nScaling up e2e-test-nginx-rc-dedc80563a1c4f6991b2afcb648aa4d0 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-dedc80563a1c4f6991b2afcb648aa4d0 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-dedc80563a1c4f6991b2afcb648aa4d0 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" May 19 11:49:36.621: INFO: stdout: "Created e2e-test-nginx-rc-dedc80563a1c4f6991b2afcb648aa4d0\nScaling up e2e-test-nginx-rc-dedc80563a1c4f6991b2afcb648aa4d0 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-dedc80563a1c4f6991b2afcb648aa4d0 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-dedc80563a1c4f6991b2afcb648aa4d0 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. May 19 11:49:36.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-hzwvb' May 19 11:49:36.729: INFO: stderr: "" May 19 11:49:36.729: INFO: stdout: "e2e-test-nginx-rc-dedc80563a1c4f6991b2afcb648aa4d0-twzn5 " May 19 11:49:36.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-dedc80563a1c4f6991b2afcb648aa4d0-twzn5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hzwvb' May 19 11:49:36.822: INFO: stderr: "" May 19 11:49:36.822: INFO: stdout: "true" May 19 11:49:36.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-dedc80563a1c4f6991b2afcb648aa4d0-twzn5 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hzwvb' May 19 11:49:36.918: INFO: stderr: "" May 19 11:49:36.918: INFO: stdout: "docker.io/library/nginx:1.14-alpine" May 19 11:49:36.918: INFO: e2e-test-nginx-rc-dedc80563a1c4f6991b2afcb648aa4d0-twzn5 is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 May 19 11:49:36.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-hzwvb' May 19 11:49:37.038: INFO: stderr: "" May 19 11:49:37.038: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:49:37.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-hzwvb" for this suite. 
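The rolling-update stdout above spells out its schedule: scale the new controller from 0 to 1 and the old from 1 to 0, keeping 1 pod available and never exceeding 2. A toy replay of that schedule (a simplified model, not kubectl's actual algorithm):

```python
def rolling_update(old, new, desired=1, max_total=2, min_available=1):
    """Step the new controller up and the old one down, one pod at a
    time, respecting the availability and surge bounds from the log."""
    steps = []
    while new < desired or old > 0:
        if new < desired and old + new < max_total:
            new += 1                      # scale new controller up
        elif old > 0 and old + new - 1 >= min_available:
            old -= 1                      # scale old controller down
        steps.append((old, new))
    return steps

history = rolling_update(old=1, new=0)
```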
May 19 11:49:43.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:49:43.163: INFO: namespace: e2e-tests-kubectl-hzwvb, resource: bindings, ignored listing per whitelist May 19 11:49:43.206: INFO: namespace e2e-tests-kubectl-hzwvb deletion completed in 6.094162013s • [SLOW TEST:28.143 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:49:43.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info May 19 11:49:43.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 19 11:49:43.407: INFO: stderr: "" May 19 11:49:43.407: INFO: stdout: "\x1b[0;32mKubernetes 
master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:49:43.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-gpxdg" for this suite. May 19 11:49:49.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:49:49.505: INFO: namespace: e2e-tests-kubectl-gpxdg, resource: bindings, ignored listing per whitelist May 19 11:49:49.550: INFO: namespace e2e-tests-kubectl-gpxdg deletion completed in 6.13953077s • [SLOW TEST:6.344 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:49:49.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service 
account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 19 11:49:49.732: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-g9x5x,SelfLink:/api/v1/namespaces/e2e-tests-watch-g9x5x/configmaps/e2e-watch-test-label-changed,UID:d8241407-99c6-11ea-99e8-0242ac110002,ResourceVersion:11396679,Generation:0,CreationTimestamp:2020-05-19 11:49:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 19 11:49:49.732: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-g9x5x,SelfLink:/api/v1/namespaces/e2e-tests-watch-g9x5x/configmaps/e2e-watch-test-label-changed,UID:d8241407-99c6-11ea-99e8-0242ac110002,ResourceVersion:11396680,Generation:0,CreationTimestamp:2020-05-19 11:49:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 19 11:49:49.732: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-g9x5x,SelfLink:/api/v1/namespaces/e2e-tests-watch-g9x5x/configmaps/e2e-watch-test-label-changed,UID:d8241407-99c6-11ea-99e8-0242ac110002,ResourceVersion:11396681,Generation:0,CreationTimestamp:2020-05-19 11:49:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 19 11:49:59.790: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-g9x5x,SelfLink:/api/v1/namespaces/e2e-tests-watch-g9x5x/configmaps/e2e-watch-test-label-changed,UID:d8241407-99c6-11ea-99e8-0242ac110002,ResourceVersion:11396702,Generation:0,CreationTimestamp:2020-05-19 11:49:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 19 11:49:59.790: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-g9x5x,SelfLink:/api/v1/namespaces/e2e-tests-watch-g9x5x/configmaps/e2e-watch-test-label-changed,UID:d8241407-99c6-11ea-99e8-0242ac110002,ResourceVersion:11396703,Generation:0,CreationTimestamp:2020-05-19 11:49:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 19 11:49:59.790: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-g9x5x,SelfLink:/api/v1/namespaces/e2e-tests-watch-g9x5x/configmaps/e2e-watch-test-label-changed,UID:d8241407-99c6-11ea-99e8-0242ac110002,ResourceVersion:11396704,Generation:0,CreationTimestamp:2020-05-19 11:49:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:49:59.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-g9x5x" for this suite. 
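The event sequence above exercises the label-selector watch contract: when the configmap's `watch-this-configmap` label is changed away from the selected value, the watch reports DELETED even though the object still exists, and when the label is restored the watch reports ADDED. A minimal sketch of that classification logic (not the e2e framework's code; the function name is illustrative):

```python
def selector_event(selector, old_labels, new_labels):
    """Classify the event a label-selector watch emits for a label change.

    A watch filtered by a selector reports DELETED when an object stops
    matching (the object itself is only modified) and ADDED when it
    matches again -- the behavior this Watchers conformance test verifies.
    """
    old_match = all(old_labels.get(k) == v for k, v in selector.items())
    new_match = all(new_labels.get(k) == v for k, v in selector.items())
    if old_match and not new_match:
        return "DELETED"   # object left the watch's view
    if new_match and not old_match:
        return "ADDED"     # object re-entered the watch's view
    if old_match and new_match:
        return "MODIFIED"
    return None            # never visible to this watch; no event


selector = {"watch-this-configmap": "label-changed-and-restored"}
# Changing the label away from the selected value surfaces as DELETED:
print(selector_event(selector, dict(selector), {"watch-this-configmap": "other"}))
```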
May 19 11:50:05.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:50:05.829: INFO: namespace: e2e-tests-watch-g9x5x, resource: bindings, ignored listing per whitelist May 19 11:50:05.880: INFO: namespace e2e-tests-watch-g9x5x deletion completed in 6.079570885s • [SLOW TEST:16.330 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:50:05.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 19 11:50:06.053: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 11:50:06.056: INFO: Number of nodes with available pods: 0 May 19 11:50:06.056: INFO: Node hunter-worker is running more than one daemon pod May 19 11:50:07.060: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 11:50:07.063: INFO: Number of nodes with available pods: 0 May 19 11:50:07.063: INFO: Node hunter-worker is running more than one daemon pod May 19 11:50:08.768: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 11:50:08.771: INFO: Number of nodes with available pods: 0 May 19 11:50:08.771: INFO: Node hunter-worker is running more than one daemon pod May 19 11:50:09.077: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 11:50:09.080: INFO: Number of nodes with available pods: 0 May 19 11:50:09.080: INFO: Node hunter-worker is running more than one daemon pod May 19 11:50:10.062: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 11:50:10.066: INFO: Number of nodes with available pods: 0 May 19 11:50:10.066: INFO: Node hunter-worker is running more than one daemon pod May 19 11:50:11.061: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 11:50:11.064: INFO: Number of nodes with available pods: 1 May 19 11:50:11.064: 
INFO: Node hunter-worker2 is running more than one daemon pod May 19 11:50:12.149: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 11:50:12.152: INFO: Number of nodes with available pods: 2 May 19 11:50:12.152: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 19 11:50:12.237: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 11:50:12.242: INFO: Number of nodes with available pods: 1 May 19 11:50:12.242: INFO: Node hunter-worker is running more than one daemon pod May 19 11:50:13.304: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 11:50:13.465: INFO: Number of nodes with available pods: 1 May 19 11:50:13.465: INFO: Node hunter-worker is running more than one daemon pod May 19 11:50:14.317: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 11:50:14.321: INFO: Number of nodes with available pods: 1 May 19 11:50:14.321: INFO: Node hunter-worker is running more than one daemon pod May 19 11:50:15.248: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 11:50:15.251: INFO: Number of nodes with available pods: 1 May 19 11:50:15.251: INFO: Node hunter-worker is running more than one daemon pod May 19 11:50:16.247: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node May 19 11:50:16.251: INFO: Number of nodes with available pods: 1 May 19 11:50:16.251: INFO: Node hunter-worker is running more than one daemon pod May 19 11:50:17.247: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 11:50:17.250: INFO: Number of nodes with available pods: 1 May 19 11:50:17.250: INFO: Node hunter-worker is running more than one daemon pod May 19 11:50:18.248: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 11:50:18.251: INFO: Number of nodes with available pods: 1 May 19 11:50:18.251: INFO: Node hunter-worker is running more than one daemon pod May 19 11:50:19.248: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 11:50:19.257: INFO: Number of nodes with available pods: 1 May 19 11:50:19.257: INFO: Node hunter-worker is running more than one daemon pod May 19 11:50:20.248: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 11:50:20.252: INFO: Number of nodes with available pods: 1 May 19 11:50:20.252: INFO: Node hunter-worker is running more than one daemon pod May 19 11:50:21.246: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 11:50:21.250: INFO: Number of nodes with available pods: 1 May 19 11:50:21.250: INFO: Node hunter-worker is running more than one daemon pod May 19 11:50:22.247: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 11:50:22.251: INFO: Number of nodes with available pods: 1 May 19 11:50:22.251: INFO: Node hunter-worker is running more than one daemon pod May 19 11:50:23.614: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 11:50:23.673: INFO: Number of nodes with available pods: 1 May 19 11:50:23.673: INFO: Node hunter-worker is running more than one daemon pod May 19 11:50:24.286: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 11:50:24.289: INFO: Number of nodes with available pods: 1 May 19 11:50:24.289: INFO: Node hunter-worker is running more than one daemon pod May 19 11:50:25.246: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 11:50:25.250: INFO: Number of nodes with available pods: 1 May 19 11:50:25.250: INFO: Node hunter-worker is running more than one daemon pod May 19 11:50:26.247: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 11:50:26.250: INFO: Number of nodes with available pods: 2 May 19 11:50:26.250: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-88pzm, will wait for the garbage collector to delete the pods May 19 11:50:26.312: INFO: Deleting DaemonSet.extensions 
daemon-set took: 6.59202ms May 19 11:50:26.412: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.235562ms May 19 11:50:41.316: INFO: Number of nodes with available pods: 0 May 19 11:50:41.316: INFO: Number of running nodes: 0, number of available pods: 0 May 19 11:50:41.339: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-88pzm/daemonsets","resourceVersion":"11396843"},"items":null} May 19 11:50:41.343: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-88pzm/pods","resourceVersion":"11396843"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:50:41.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-88pzm" for this suite. May 19 11:50:47.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:50:47.432: INFO: namespace: e2e-tests-daemonsets-88pzm, resource: bindings, ignored listing per whitelist May 19 11:50:47.445: INFO: namespace e2e-tests-daemonsets-88pzm deletion completed in 6.089590568s • [SLOW TEST:41.565 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client May 19 11:50:47.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-lx5lf STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-lx5lf STEP: Deleting pre-stop pod May 19 11:51:00.612: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:51:00.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-lx5lf" for this suite. 
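The `"prestop": 1` counter in the tester's report above confirms the hook fired before the pod was torn down. The termination ordering the test relies on can be sketched as follows (a simplified model of the kubelet's documented behavior, not its actual code; names are illustrative):

```python
def terminate(container, grace_period_seconds=30):
    """Simplified ordering when a pod with a preStop hook is deleted:
    the preStop hook runs to completion (or until the grace period
    expires) before SIGTERM is delivered to the container process."""
    steps = ["pod marked Terminating"]
    if container.get("preStop"):
        # In this test, the hook makes an HTTP request to the server pod,
        # which is how the server observes "prestop": 1 before deletion.
        steps.append("preStop hook runs")
    steps.append("SIGTERM sent to container process")
    steps.append(f"SIGKILL after {grace_period_seconds}s grace period")
    return steps
```

The hook-before-SIGTERM ordering is the whole point: a handler that ran after SIGTERM could never reliably reach the server.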
May 19 11:51:38.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:51:38.683: INFO: namespace: e2e-tests-prestop-lx5lf, resource: bindings, ignored listing per whitelist May 19 11:51:38.717: INFO: namespace e2e-tests-prestop-lx5lf deletion completed in 38.089471466s • [SLOW TEST:51.272 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:51:38.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs May 19 11:51:38.862: INFO: Waiting up to 5m0s for pod "pod-193381c6-99c7-11ea-abcb-0242ac110018" in namespace "e2e-tests-emptydir-5kt29" to be "success or failure" May 19 11:51:38.878: INFO: Pod "pod-193381c6-99c7-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 15.542881ms May 19 11:51:40.882: INFO: Pod "pod-193381c6-99c7-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.019853015s May 19 11:51:42.886: INFO: Pod "pod-193381c6-99c7-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024411795s STEP: Saw pod success May 19 11:51:42.886: INFO: Pod "pod-193381c6-99c7-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 11:51:42.889: INFO: Trying to get logs from node hunter-worker pod pod-193381c6-99c7-11ea-abcb-0242ac110018 container test-container: STEP: delete the pod May 19 11:51:43.039: INFO: Waiting for pod pod-193381c6-99c7-11ea-abcb-0242ac110018 to disappear May 19 11:51:43.072: INFO: Pod pod-193381c6-99c7-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:51:43.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-5kt29" for this suite. May 19 11:51:49.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:51:49.119: INFO: namespace: e2e-tests-emptydir-5kt29, resource: bindings, ignored listing per whitelist May 19 11:51:49.154: INFO: namespace e2e-tests-emptydir-5kt29 deletion completed in 6.077156997s • [SLOW TEST:10.436 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client May 19 11:51:49.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 19 11:51:49.478: INFO: Pod name pod-release: Found 0 pods out of 1 May 19 11:51:54.483: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:51:55.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-xm5vg" for this suite. May 19 11:52:01.613: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:52:01.627: INFO: namespace: e2e-tests-replication-controller-xm5vg, resource: bindings, ignored listing per whitelist May 19 11:52:01.690: INFO: namespace e2e-tests-replication-controller-xm5vg deletion completed in 6.157573391s • [SLOW TEST:12.536 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:52:01.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 19 11:52:03.170: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 May 19 11:52:03.246: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-mvcss/daemonsets","resourceVersion":"11397141"},"items":null} May 19 11:52:03.249: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-mvcss/pods","resourceVersion":"11397141"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:52:03.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-mvcss" for this suite. 
May 19 11:52:09.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:52:09.419: INFO: namespace: e2e-tests-daemonsets-mvcss, resource: bindings, ignored listing per whitelist May 19 11:52:09.438: INFO: namespace e2e-tests-daemonsets-mvcss deletion completed in 6.177752047s S [SKIPPING] [7.748 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 19 11:52:03.170: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:52:09.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-2b7b51dc-99c7-11ea-abcb-0242ac110018 STEP: Creating a pod to test consume secrets May 19 11:52:09.554: INFO: Waiting up to 5m0s for pod "pod-secrets-2b7bfa9b-99c7-11ea-abcb-0242ac110018" in namespace "e2e-tests-secrets-mr55j" to be "success or failure" May 19 11:52:09.598: INFO: Pod 
"pod-secrets-2b7bfa9b-99c7-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 44.149906ms May 19 11:52:11.601: INFO: Pod "pod-secrets-2b7bfa9b-99c7-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047176144s May 19 11:52:13.606: INFO: Pod "pod-secrets-2b7bfa9b-99c7-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05172103s May 19 11:52:15.610: INFO: Pod "pod-secrets-2b7bfa9b-99c7-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.056191186s STEP: Saw pod success May 19 11:52:15.610: INFO: Pod "pod-secrets-2b7bfa9b-99c7-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 11:52:15.613: INFO: Trying to get logs from node hunter-worker pod pod-secrets-2b7bfa9b-99c7-11ea-abcb-0242ac110018 container secret-volume-test: STEP: delete the pod May 19 11:52:15.652: INFO: Waiting for pod pod-secrets-2b7bfa9b-99c7-11ea-abcb-0242ac110018 to disappear May 19 11:52:15.658: INFO: Pod pod-secrets-2b7bfa9b-99c7-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:52:15.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-mr55j" for this suite. 
May 19 11:52:21.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:52:21.736: INFO: namespace: e2e-tests-secrets-mr55j, resource: bindings, ignored listing per whitelist May 19 11:52:21.745: INFO: namespace e2e-tests-secrets-mr55j deletion completed in 6.083306s • [SLOW TEST:12.307 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:52:21.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command May 19 11:52:21.907: INFO: Waiting up to 5m0s for pod "client-containers-32d94afc-99c7-11ea-abcb-0242ac110018" in namespace "e2e-tests-containers-nnx89" to be "success or failure" May 19 11:52:21.982: INFO: Pod "client-containers-32d94afc-99c7-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 74.744864ms May 19 11:52:23.985: INFO: Pod "client-containers-32d94afc-99c7-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0784193s May 19 11:52:25.989: INFO: Pod "client-containers-32d94afc-99c7-11ea-abcb-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.082329526s May 19 11:52:27.994: INFO: Pod "client-containers-32d94afc-99c7-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.087172123s STEP: Saw pod success May 19 11:52:27.994: INFO: Pod "client-containers-32d94afc-99c7-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 11:52:27.997: INFO: Trying to get logs from node hunter-worker pod client-containers-32d94afc-99c7-11ea-abcb-0242ac110018 container test-container: STEP: delete the pod May 19 11:52:28.017: INFO: Waiting for pod client-containers-32d94afc-99c7-11ea-abcb-0242ac110018 to disappear May 19 11:52:28.022: INFO: Pod client-containers-32d94afc-99c7-11ea-abcb-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:52:28.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-nnx89" for this suite. 
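The Docker Containers test passes a pod-spec `command` to replace the image's ENTRYPOINT. Kubernetes' documented override rules (pod `command` replaces ENTRYPOINT, pod `args` replaces CMD, and setting `command` alone also suppresses the image CMD) can be sketched as a small resolver; the function name is illustrative:

```python
def effective_argv(image_entrypoint, image_cmd, command=None, args=None):
    """Resolve the argv a container actually runs, per Kubernetes'
    documented command/args override rules:
      - neither set:        ENTRYPOINT + CMD
      - command only:       command (image CMD is ignored too)
      - args only:          ENTRYPOINT + args
      - both set:           command + args
    """
    if command:
        return list(command) + list(args or [])
    if args:
        return list(image_entrypoint or []) + list(args)
    return list(image_entrypoint or []) + list(image_cmd or [])
```

The second rule is the one this test exercises: supplying only `command` overrides the docker entrypoint and drops the image's default CMD entirely.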
May 19 11:52:34.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:52:34.252: INFO: namespace: e2e-tests-containers-nnx89, resource: bindings, ignored listing per whitelist May 19 11:52:34.277: INFO: namespace e2e-tests-containers-nnx89 deletion completed in 6.251656684s • [SLOW TEST:12.532 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:52:34.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 19 11:52:46.516: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-kncvf PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 11:52:46.517: 
INFO: >>> kubeConfig: /root/.kube/config I0519 11:52:46.552210 6 log.go:172] (0xc00214e2c0) (0xc001a1ff40) Create stream I0519 11:52:46.552247 6 log.go:172] (0xc00214e2c0) (0xc001a1ff40) Stream added, broadcasting: 1 I0519 11:52:46.555082 6 log.go:172] (0xc00214e2c0) Reply frame received for 1 I0519 11:52:46.555143 6 log.go:172] (0xc00214e2c0) (0xc001b8e460) Create stream I0519 11:52:46.555162 6 log.go:172] (0xc00214e2c0) (0xc001b8e460) Stream added, broadcasting: 3 I0519 11:52:46.556103 6 log.go:172] (0xc00214e2c0) Reply frame received for 3 I0519 11:52:46.556130 6 log.go:172] (0xc00214e2c0) (0xc000f27400) Create stream I0519 11:52:46.556141 6 log.go:172] (0xc00214e2c0) (0xc000f27400) Stream added, broadcasting: 5 I0519 11:52:46.557031 6 log.go:172] (0xc00214e2c0) Reply frame received for 5 I0519 11:52:46.633772 6 log.go:172] (0xc00214e2c0) Data frame received for 5 I0519 11:52:46.633795 6 log.go:172] (0xc000f27400) (5) Data frame handling I0519 11:52:46.633834 6 log.go:172] (0xc00214e2c0) Data frame received for 3 I0519 11:52:46.633879 6 log.go:172] (0xc001b8e460) (3) Data frame handling I0519 11:52:46.633926 6 log.go:172] (0xc001b8e460) (3) Data frame sent I0519 11:52:46.633979 6 log.go:172] (0xc00214e2c0) Data frame received for 3 I0519 11:52:46.634012 6 log.go:172] (0xc001b8e460) (3) Data frame handling I0519 11:52:46.635189 6 log.go:172] (0xc00214e2c0) Data frame received for 1 I0519 11:52:46.635226 6 log.go:172] (0xc001a1ff40) (1) Data frame handling I0519 11:52:46.635262 6 log.go:172] (0xc001a1ff40) (1) Data frame sent I0519 11:52:46.635282 6 log.go:172] (0xc00214e2c0) (0xc001a1ff40) Stream removed, broadcasting: 1 I0519 11:52:46.635353 6 log.go:172] (0xc00214e2c0) Go away received I0519 11:52:46.635479 6 log.go:172] (0xc00214e2c0) (0xc001a1ff40) Stream removed, broadcasting: 1 I0519 11:52:46.635512 6 log.go:172] (0xc00214e2c0) (0xc001b8e460) Stream removed, broadcasting: 3 I0519 11:52:46.635529 6 log.go:172] (0xc00214e2c0) (0xc000f27400) Stream removed, 
broadcasting: 5 May 19 11:52:46.635: INFO: Exec stderr: "" May 19 11:52:46.635: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-kncvf PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 11:52:46.635: INFO: >>> kubeConfig: /root/.kube/config I0519 11:52:46.666019 6 log.go:172] (0xc00214e790) (0xc0020a61e0) Create stream I0519 11:52:46.666052 6 log.go:172] (0xc00214e790) (0xc0020a61e0) Stream added, broadcasting: 1 I0519 11:52:46.670488 6 log.go:172] (0xc00214e790) Reply frame received for 1 I0519 11:52:46.670541 6 log.go:172] (0xc00214e790) (0xc001ef7900) Create stream I0519 11:52:46.670557 6 log.go:172] (0xc00214e790) (0xc001ef7900) Stream added, broadcasting: 3 I0519 11:52:46.671523 6 log.go:172] (0xc00214e790) Reply frame received for 3 I0519 11:52:46.671559 6 log.go:172] (0xc00214e790) (0xc001f03400) Create stream I0519 11:52:46.671573 6 log.go:172] (0xc00214e790) (0xc001f03400) Stream added, broadcasting: 5 I0519 11:52:46.672524 6 log.go:172] (0xc00214e790) Reply frame received for 5 I0519 11:52:46.733449 6 log.go:172] (0xc00214e790) Data frame received for 3 I0519 11:52:46.733491 6 log.go:172] (0xc001ef7900) (3) Data frame handling I0519 11:52:46.733518 6 log.go:172] (0xc001ef7900) (3) Data frame sent I0519 11:52:46.733646 6 log.go:172] (0xc00214e790) Data frame received for 5 I0519 11:52:46.733668 6 log.go:172] (0xc001f03400) (5) Data frame handling I0519 11:52:46.733709 6 log.go:172] (0xc00214e790) Data frame received for 3 I0519 11:52:46.733729 6 log.go:172] (0xc001ef7900) (3) Data frame handling I0519 11:52:46.735280 6 log.go:172] (0xc00214e790) Data frame received for 1 I0519 11:52:46.735301 6 log.go:172] (0xc0020a61e0) (1) Data frame handling I0519 11:52:46.735309 6 log.go:172] (0xc0020a61e0) (1) Data frame sent I0519 11:52:46.735319 6 log.go:172] (0xc00214e790) (0xc0020a61e0) Stream removed, broadcasting: 1 I0519 11:52:46.735387 6 
log.go:172] (0xc00214e790) Go away received I0519 11:52:46.735469 6 log.go:172] (0xc00214e790) (0xc0020a61e0) Stream removed, broadcasting: 1 I0519 11:52:46.735495 6 log.go:172] (0xc00214e790) (0xc001ef7900) Stream removed, broadcasting: 3 I0519 11:52:46.735508 6 log.go:172] (0xc00214e790) (0xc001f03400) Stream removed, broadcasting: 5 May 19 11:52:46.735: INFO: Exec stderr: "" May 19 11:52:46.735: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-kncvf PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 11:52:46.735: INFO: >>> kubeConfig: /root/.kube/config I0519 11:52:46.778258 6 log.go:172] (0xc001c9a0b0) (0xc001b8e5a0) Create stream I0519 11:52:46.778294 6 log.go:172] (0xc001c9a0b0) (0xc001b8e5a0) Stream added, broadcasting: 1 I0519 11:52:46.780578 6 log.go:172] (0xc001c9a0b0) Reply frame received for 1 I0519 11:52:46.780654 6 log.go:172] (0xc001c9a0b0) (0xc001b8e640) Create stream I0519 11:52:46.780679 6 log.go:172] (0xc001c9a0b0) (0xc001b8e640) Stream added, broadcasting: 3 I0519 11:52:46.781805 6 log.go:172] (0xc001c9a0b0) Reply frame received for 3 I0519 11:52:46.781841 6 log.go:172] (0xc001c9a0b0) (0xc001ef79a0) Create stream I0519 11:52:46.781854 6 log.go:172] (0xc001c9a0b0) (0xc001ef79a0) Stream added, broadcasting: 5 I0519 11:52:46.782740 6 log.go:172] (0xc001c9a0b0) Reply frame received for 5 I0519 11:52:46.858200 6 log.go:172] (0xc001c9a0b0) Data frame received for 5 I0519 11:52:46.858261 6 log.go:172] (0xc001ef79a0) (5) Data frame handling I0519 11:52:46.858307 6 log.go:172] (0xc001c9a0b0) Data frame received for 3 I0519 11:52:46.858336 6 log.go:172] (0xc001b8e640) (3) Data frame handling I0519 11:52:46.858362 6 log.go:172] (0xc001b8e640) (3) Data frame sent I0519 11:52:46.858376 6 log.go:172] (0xc001c9a0b0) Data frame received for 3 I0519 11:52:46.858385 6 log.go:172] (0xc001b8e640) (3) Data frame handling I0519 11:52:46.859789 6 log.go:172] 
(0xc001c9a0b0) Data frame received for 1 I0519 11:52:46.859817 6 log.go:172] (0xc001b8e5a0) (1) Data frame handling I0519 11:52:46.859833 6 log.go:172] (0xc001b8e5a0) (1) Data frame sent I0519 11:52:46.859853 6 log.go:172] (0xc001c9a0b0) (0xc001b8e5a0) Stream removed, broadcasting: 1 I0519 11:52:46.859880 6 log.go:172] (0xc001c9a0b0) Go away received I0519 11:52:46.860004 6 log.go:172] (0xc001c9a0b0) (0xc001b8e5a0) Stream removed, broadcasting: 1 I0519 11:52:46.860030 6 log.go:172] (0xc001c9a0b0) (0xc001b8e640) Stream removed, broadcasting: 3 I0519 11:52:46.860046 6 log.go:172] (0xc001c9a0b0) (0xc001ef79a0) Stream removed, broadcasting: 5 May 19 11:52:46.860: INFO: Exec stderr: "" May 19 11:52:46.860: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-kncvf PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 11:52:46.860: INFO: >>> kubeConfig: /root/.kube/config I0519 11:52:46.883969 6 log.go:172] (0xc0025e62c0) (0xc001ef7d60) Create stream I0519 11:52:46.883993 6 log.go:172] (0xc0025e62c0) (0xc001ef7d60) Stream added, broadcasting: 1 I0519 11:52:46.886117 6 log.go:172] (0xc0025e62c0) Reply frame received for 1 I0519 11:52:46.886150 6 log.go:172] (0xc0025e62c0) (0xc001ef7e00) Create stream I0519 11:52:46.886162 6 log.go:172] (0xc0025e62c0) (0xc001ef7e00) Stream added, broadcasting: 3 I0519 11:52:46.886991 6 log.go:172] (0xc0025e62c0) Reply frame received for 3 I0519 11:52:46.887039 6 log.go:172] (0xc0025e62c0) (0xc000f274a0) Create stream I0519 11:52:46.887057 6 log.go:172] (0xc0025e62c0) (0xc000f274a0) Stream added, broadcasting: 5 I0519 11:52:46.888005 6 log.go:172] (0xc0025e62c0) Reply frame received for 5 I0519 11:52:46.939896 6 log.go:172] (0xc0025e62c0) Data frame received for 5 I0519 11:52:46.939931 6 log.go:172] (0xc000f274a0) (5) Data frame handling I0519 11:52:46.939953 6 log.go:172] (0xc0025e62c0) Data frame received for 3 I0519 
11:52:46.939971 6 log.go:172] (0xc001ef7e00) (3) Data frame handling I0519 11:52:46.939981 6 log.go:172] (0xc001ef7e00) (3) Data frame sent I0519 11:52:46.939989 6 log.go:172] (0xc0025e62c0) Data frame received for 3 I0519 11:52:46.939995 6 log.go:172] (0xc001ef7e00) (3) Data frame handling I0519 11:52:46.941673 6 log.go:172] (0xc0025e62c0) Data frame received for 1 I0519 11:52:46.941689 6 log.go:172] (0xc001ef7d60) (1) Data frame handling I0519 11:52:46.941697 6 log.go:172] (0xc001ef7d60) (1) Data frame sent I0519 11:52:46.941717 6 log.go:172] (0xc0025e62c0) (0xc001ef7d60) Stream removed, broadcasting: 1 I0519 11:52:46.941750 6 log.go:172] (0xc0025e62c0) Go away received I0519 11:52:46.941857 6 log.go:172] (0xc0025e62c0) (0xc001ef7d60) Stream removed, broadcasting: 1 I0519 11:52:46.941883 6 log.go:172] (0xc0025e62c0) (0xc001ef7e00) Stream removed, broadcasting: 3 I0519 11:52:46.941906 6 log.go:172] (0xc0025e62c0) (0xc000f274a0) Stream removed, broadcasting: 5 May 19 11:52:46.941: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 19 11:52:46.941: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-kncvf PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 11:52:46.942: INFO: >>> kubeConfig: /root/.kube/config I0519 11:52:46.972424 6 log.go:172] (0xc000deb6b0) (0xc000f27860) Create stream I0519 11:52:46.972459 6 log.go:172] (0xc000deb6b0) (0xc000f27860) Stream added, broadcasting: 1 I0519 11:52:46.975803 6 log.go:172] (0xc000deb6b0) Reply frame received for 1 I0519 11:52:46.975847 6 log.go:172] (0xc000deb6b0) (0xc001f034a0) Create stream I0519 11:52:46.975862 6 log.go:172] (0xc000deb6b0) (0xc001f034a0) Stream added, broadcasting: 3 I0519 11:52:46.976667 6 log.go:172] (0xc000deb6b0) Reply frame received for 3 I0519 11:52:46.976711 6 log.go:172] (0xc000deb6b0) (0xc0020a63c0) Create 
stream I0519 11:52:46.976726 6 log.go:172] (0xc000deb6b0) (0xc0020a63c0) Stream added, broadcasting: 5 I0519 11:52:46.978230 6 log.go:172] (0xc000deb6b0) Reply frame received for 5 I0519 11:52:47.049911 6 log.go:172] (0xc000deb6b0) Data frame received for 5 I0519 11:52:47.049955 6 log.go:172] (0xc0020a63c0) (5) Data frame handling I0519 11:52:47.050022 6 log.go:172] (0xc000deb6b0) Data frame received for 3 I0519 11:52:47.050072 6 log.go:172] (0xc001f034a0) (3) Data frame handling I0519 11:52:47.050101 6 log.go:172] (0xc001f034a0) (3) Data frame sent I0519 11:52:47.050117 6 log.go:172] (0xc000deb6b0) Data frame received for 3 I0519 11:52:47.050129 6 log.go:172] (0xc001f034a0) (3) Data frame handling I0519 11:52:47.051182 6 log.go:172] (0xc000deb6b0) Data frame received for 1 I0519 11:52:47.051199 6 log.go:172] (0xc000f27860) (1) Data frame handling I0519 11:52:47.051209 6 log.go:172] (0xc000f27860) (1) Data frame sent I0519 11:52:47.051220 6 log.go:172] (0xc000deb6b0) (0xc000f27860) Stream removed, broadcasting: 1 I0519 11:52:47.051231 6 log.go:172] (0xc000deb6b0) Go away received I0519 11:52:47.051463 6 log.go:172] (0xc000deb6b0) (0xc000f27860) Stream removed, broadcasting: 1 I0519 11:52:47.051484 6 log.go:172] (0xc000deb6b0) (0xc001f034a0) Stream removed, broadcasting: 3 I0519 11:52:47.051503 6 log.go:172] (0xc000deb6b0) (0xc0020a63c0) Stream removed, broadcasting: 5 May 19 11:52:47.051: INFO: Exec stderr: "" May 19 11:52:47.051: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-kncvf PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 11:52:47.051: INFO: >>> kubeConfig: /root/.kube/config I0519 11:52:47.110325 6 log.go:172] (0xc0025e6790) (0xc0021201e0) Create stream I0519 11:52:47.110360 6 log.go:172] (0xc0025e6790) (0xc0021201e0) Stream added, broadcasting: 1 I0519 11:52:47.112436 6 log.go:172] (0xc0025e6790) Reply frame received for 1 I0519 
11:52:47.112471 6 log.go:172] (0xc0025e6790) (0xc000f279a0) Create stream I0519 11:52:47.112485 6 log.go:172] (0xc0025e6790) (0xc000f279a0) Stream added, broadcasting: 3 I0519 11:52:47.113587 6 log.go:172] (0xc0025e6790) Reply frame received for 3 I0519 11:52:47.113617 6 log.go:172] (0xc0025e6790) (0xc0021203c0) Create stream I0519 11:52:47.113626 6 log.go:172] (0xc0025e6790) (0xc0021203c0) Stream added, broadcasting: 5 I0519 11:52:47.114669 6 log.go:172] (0xc0025e6790) Reply frame received for 5 I0519 11:52:47.172656 6 log.go:172] (0xc0025e6790) Data frame received for 5 I0519 11:52:47.172711 6 log.go:172] (0xc0021203c0) (5) Data frame handling I0519 11:52:47.172769 6 log.go:172] (0xc0025e6790) Data frame received for 3 I0519 11:52:47.172794 6 log.go:172] (0xc000f279a0) (3) Data frame handling I0519 11:52:47.172823 6 log.go:172] (0xc000f279a0) (3) Data frame sent I0519 11:52:47.172844 6 log.go:172] (0xc0025e6790) Data frame received for 3 I0519 11:52:47.172855 6 log.go:172] (0xc000f279a0) (3) Data frame handling I0519 11:52:47.174743 6 log.go:172] (0xc0025e6790) Data frame received for 1 I0519 11:52:47.174770 6 log.go:172] (0xc0021201e0) (1) Data frame handling I0519 11:52:47.174786 6 log.go:172] (0xc0021201e0) (1) Data frame sent I0519 11:52:47.174849 6 log.go:172] (0xc0025e6790) (0xc0021201e0) Stream removed, broadcasting: 1 I0519 11:52:47.174965 6 log.go:172] (0xc0025e6790) (0xc0021201e0) Stream removed, broadcasting: 1 I0519 11:52:47.174982 6 log.go:172] (0xc0025e6790) (0xc000f279a0) Stream removed, broadcasting: 3 I0519 11:52:47.175133 6 log.go:172] (0xc0025e6790) (0xc0021203c0) Stream removed, broadcasting: 5 May 19 11:52:47.175: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 19 11:52:47.175: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-kncvf PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false} May 19 11:52:47.175: INFO: >>> kubeConfig: /root/.kube/config I0519 11:52:47.177554 6 log.go:172] (0xc0025e6790) Go away received I0519 11:52:47.206399 6 log.go:172] (0xc000debb80) (0xc000f27e00) Create stream I0519 11:52:47.206426 6 log.go:172] (0xc000debb80) (0xc000f27e00) Stream added, broadcasting: 1 I0519 11:52:47.208695 6 log.go:172] (0xc000debb80) Reply frame received for 1 I0519 11:52:47.208724 6 log.go:172] (0xc000debb80) (0xc001f03540) Create stream I0519 11:52:47.208732 6 log.go:172] (0xc000debb80) (0xc001f03540) Stream added, broadcasting: 3 I0519 11:52:47.210347 6 log.go:172] (0xc000debb80) Reply frame received for 3 I0519 11:52:47.210411 6 log.go:172] (0xc000debb80) (0xc0020a6460) Create stream I0519 11:52:47.210427 6 log.go:172] (0xc000debb80) (0xc0020a6460) Stream added, broadcasting: 5 I0519 11:52:47.211382 6 log.go:172] (0xc000debb80) Reply frame received for 5 I0519 11:52:47.264840 6 log.go:172] (0xc000debb80) Data frame received for 5 I0519 11:52:47.264863 6 log.go:172] (0xc0020a6460) (5) Data frame handling I0519 11:52:47.264891 6 log.go:172] (0xc000debb80) Data frame received for 3 I0519 11:52:47.264915 6 log.go:172] (0xc001f03540) (3) Data frame handling I0519 11:52:47.264942 6 log.go:172] (0xc001f03540) (3) Data frame sent I0519 11:52:47.264961 6 log.go:172] (0xc000debb80) Data frame received for 3 I0519 11:52:47.264982 6 log.go:172] (0xc001f03540) (3) Data frame handling I0519 11:52:47.267074 6 log.go:172] (0xc000debb80) Data frame received for 1 I0519 11:52:47.267108 6 log.go:172] (0xc000f27e00) (1) Data frame handling I0519 11:52:47.267135 6 log.go:172] (0xc000f27e00) (1) Data frame sent I0519 11:52:47.267170 6 log.go:172] (0xc000debb80) (0xc000f27e00) Stream removed, broadcasting: 1 I0519 11:52:47.267208 6 log.go:172] (0xc000debb80) Go away received I0519 11:52:47.267269 6 log.go:172] (0xc000debb80) (0xc000f27e00) Stream removed, broadcasting: 1 I0519 11:52:47.267284 6 log.go:172] 
(0xc000debb80) (0xc001f03540) Stream removed, broadcasting: 3 I0519 11:52:47.267293 6 log.go:172] (0xc000debb80) (0xc0020a6460) Stream removed, broadcasting: 5 May 19 11:52:47.267: INFO: Exec stderr: "" May 19 11:52:47.267: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-kncvf PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 11:52:47.267: INFO: >>> kubeConfig: /root/.kube/config I0519 11:52:47.294463 6 log.go:172] (0xc00214edc0) (0xc0020a6820) Create stream I0519 11:52:47.294487 6 log.go:172] (0xc00214edc0) (0xc0020a6820) Stream added, broadcasting: 1 I0519 11:52:47.297955 6 log.go:172] (0xc00214edc0) Reply frame received for 1 I0519 11:52:47.298000 6 log.go:172] (0xc00214edc0) (0xc001b8e6e0) Create stream I0519 11:52:47.298015 6 log.go:172] (0xc00214edc0) (0xc001b8e6e0) Stream added, broadcasting: 3 I0519 11:52:47.299000 6 log.go:172] (0xc00214edc0) Reply frame received for 3 I0519 11:52:47.299035 6 log.go:172] (0xc00214edc0) (0xc001f035e0) Create stream I0519 11:52:47.299046 6 log.go:172] (0xc00214edc0) (0xc001f035e0) Stream added, broadcasting: 5 I0519 11:52:47.299933 6 log.go:172] (0xc00214edc0) Reply frame received for 5 I0519 11:52:47.424080 6 log.go:172] (0xc00214edc0) Data frame received for 5 I0519 11:52:47.424107 6 log.go:172] (0xc001f035e0) (5) Data frame handling I0519 11:52:47.424134 6 log.go:172] (0xc00214edc0) Data frame received for 3 I0519 11:52:47.424150 6 log.go:172] (0xc001b8e6e0) (3) Data frame handling I0519 11:52:47.424164 6 log.go:172] (0xc001b8e6e0) (3) Data frame sent I0519 11:52:47.424189 6 log.go:172] (0xc00214edc0) Data frame received for 3 I0519 11:52:47.424204 6 log.go:172] (0xc001b8e6e0) (3) Data frame handling I0519 11:52:47.425584 6 log.go:172] (0xc00214edc0) Data frame received for 1 I0519 11:52:47.425620 6 log.go:172] (0xc0020a6820) (1) Data frame handling I0519 11:52:47.425638 6 log.go:172] 
(0xc0020a6820) (1) Data frame sent I0519 11:52:47.425655 6 log.go:172] (0xc00214edc0) (0xc0020a6820) Stream removed, broadcasting: 1 I0519 11:52:47.425677 6 log.go:172] (0xc00214edc0) Go away received I0519 11:52:47.425822 6 log.go:172] (0xc00214edc0) (0xc0020a6820) Stream removed, broadcasting: 1 I0519 11:52:47.425838 6 log.go:172] (0xc00214edc0) (0xc001b8e6e0) Stream removed, broadcasting: 3 I0519 11:52:47.425844 6 log.go:172] (0xc00214edc0) (0xc001f035e0) Stream removed, broadcasting: 5 May 19 11:52:47.425: INFO: Exec stderr: "" May 19 11:52:47.425: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-kncvf PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 11:52:47.425: INFO: >>> kubeConfig: /root/.kube/config I0519 11:52:47.457522 6 log.go:172] (0xc0025e6c60) (0xc0021208c0) Create stream I0519 11:52:47.457566 6 log.go:172] (0xc0025e6c60) (0xc0021208c0) Stream added, broadcasting: 1 I0519 11:52:47.473449 6 log.go:172] (0xc0025e6c60) Reply frame received for 1 I0519 11:52:47.473491 6 log.go:172] (0xc0025e6c60) (0xc00185c000) Create stream I0519 11:52:47.473502 6 log.go:172] (0xc0025e6c60) (0xc00185c000) Stream added, broadcasting: 3 I0519 11:52:47.474209 6 log.go:172] (0xc0025e6c60) Reply frame received for 3 I0519 11:52:47.474248 6 log.go:172] (0xc0025e6c60) (0xc001a1e0a0) Create stream I0519 11:52:47.474259 6 log.go:172] (0xc0025e6c60) (0xc001a1e0a0) Stream added, broadcasting: 5 I0519 11:52:47.474768 6 log.go:172] (0xc0025e6c60) Reply frame received for 5 I0519 11:52:47.537736 6 log.go:172] (0xc0025e6c60) Data frame received for 5 I0519 11:52:47.537788 6 log.go:172] (0xc001a1e0a0) (5) Data frame handling I0519 11:52:47.537837 6 log.go:172] (0xc0025e6c60) Data frame received for 3 I0519 11:52:47.537864 6 log.go:172] (0xc00185c000) (3) Data frame handling I0519 11:52:47.537890 6 log.go:172] (0xc00185c000) (3) Data frame sent I0519 
11:52:47.537913 6 log.go:172] (0xc0025e6c60) Data frame received for 3 I0519 11:52:47.537933 6 log.go:172] (0xc00185c000) (3) Data frame handling I0519 11:52:47.538989 6 log.go:172] (0xc0025e6c60) Data frame received for 1 I0519 11:52:47.539018 6 log.go:172] (0xc0021208c0) (1) Data frame handling I0519 11:52:47.539039 6 log.go:172] (0xc0021208c0) (1) Data frame sent I0519 11:52:47.539052 6 log.go:172] (0xc0025e6c60) (0xc0021208c0) Stream removed, broadcasting: 1 I0519 11:52:47.539069 6 log.go:172] (0xc0025e6c60) Go away received I0519 11:52:47.539222 6 log.go:172] (0xc0025e6c60) (0xc0021208c0) Stream removed, broadcasting: 1 I0519 11:52:47.539247 6 log.go:172] (0xc0025e6c60) (0xc00185c000) Stream removed, broadcasting: 3 I0519 11:52:47.539259 6 log.go:172] (0xc0025e6c60) (0xc001a1e0a0) Stream removed, broadcasting: 5 May 19 11:52:47.539: INFO: Exec stderr: "" May 19 11:52:47.539: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-kncvf PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 11:52:47.539: INFO: >>> kubeConfig: /root/.kube/config I0519 11:52:47.566557 6 log.go:172] (0xc000deb550) (0xc00185c320) Create stream I0519 11:52:47.566578 6 log.go:172] (0xc000deb550) (0xc00185c320) Stream added, broadcasting: 1 I0519 11:52:47.567978 6 log.go:172] (0xc000deb550) Reply frame received for 1 I0519 11:52:47.568028 6 log.go:172] (0xc000deb550) (0xc001a1e140) Create stream I0519 11:52:47.568056 6 log.go:172] (0xc000deb550) (0xc001a1e140) Stream added, broadcasting: 3 I0519 11:52:47.569068 6 log.go:172] (0xc000deb550) Reply frame received for 3 I0519 11:52:47.569247 6 log.go:172] (0xc000deb550) (0xc00114e000) Create stream I0519 11:52:47.569272 6 log.go:172] (0xc000deb550) (0xc00114e000) Stream added, broadcasting: 5 I0519 11:52:47.570096 6 log.go:172] (0xc000deb550) Reply frame received for 5 I0519 11:52:47.624540 6 log.go:172] (0xc000deb550) 
Data frame received for 3 I0519 11:52:47.624563 6 log.go:172] (0xc001a1e140) (3) Data frame handling I0519 11:52:47.624597 6 log.go:172] (0xc000deb550) Data frame received for 5 I0519 11:52:47.624650 6 log.go:172] (0xc00114e000) (5) Data frame handling I0519 11:52:47.624690 6 log.go:172] (0xc001a1e140) (3) Data frame sent I0519 11:52:47.624709 6 log.go:172] (0xc000deb550) Data frame received for 3 I0519 11:52:47.624720 6 log.go:172] (0xc001a1e140) (3) Data frame handling I0519 11:52:47.625803 6 log.go:172] (0xc000deb550) Data frame received for 1 I0519 11:52:47.625819 6 log.go:172] (0xc00185c320) (1) Data frame handling I0519 11:52:47.625848 6 log.go:172] (0xc00185c320) (1) Data frame sent I0519 11:52:47.625865 6 log.go:172] (0xc000deb550) (0xc00185c320) Stream removed, broadcasting: 1 I0519 11:52:47.625880 6 log.go:172] (0xc000deb550) Go away received I0519 11:52:47.626029 6 log.go:172] (0xc000deb550) (0xc00185c320) Stream removed, broadcasting: 1 I0519 11:52:47.626056 6 log.go:172] (0xc000deb550) (0xc001a1e140) Stream removed, broadcasting: 3 I0519 11:52:47.626075 6 log.go:172] (0xc000deb550) (0xc00114e000) Stream removed, broadcasting: 5 May 19 11:52:47.626: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:52:47.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-kncvf" for this suite. 
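The exec sequence above checks the rule that the kubelet injects a managed `/etc/hosts` into each container unless the pod runs with `hostNetwork=true` or the container mounts its own file over `/etc/hosts` (hence `busybox-3` and the `test-host-network-pod` containers see an unmanaged file). A hedged sketch of that decision, with field names mirroring the pod spec — this helper is illustrative, not actual kubelet code:

```python
# Illustrative predicate for when the kubelet manages a container's
# /etc/hosts: not when the pod shares the host network namespace, and
# not when the container mounts its own file at /etc/hosts.
def kubelet_manages_etc_hosts(host_network: bool, container_mounts: list) -> bool:
    if host_network:
        return False  # pod uses the node's /etc/hosts directly
    return all(m.get("mountPath") != "/etc/hosts" for m in container_mounts)

print(kubelet_manages_etc_hosts(False, []))                             # managed (busybox-1/2 in test-pod)
print(kubelet_manages_etc_hosts(False, [{"mountPath": "/etc/hosts"}]))  # unmanaged (busybox-3)
print(kubelet_manages_etc_hosts(True, []))                              # unmanaged (host-network pod)
```

This is why the test compares `cat /etc/hosts` against `cat /etc/hosts-original` in each container: a kubelet-managed file differs from the image's original, an unmanaged one does not.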
May 19 11:53:33.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:53:33.755: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-kncvf, resource: bindings, ignored listing per whitelist May 19 11:53:33.850: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-kncvf deletion completed in 46.220675067s • [SLOW TEST:59.573 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:53:33.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command May 19 11:53:33.997: INFO: Waiting up to 5m0s for pod "var-expansion-5dcb3ce8-99c7-11ea-abcb-0242ac110018" in namespace "e2e-tests-var-expansion-zdq6m" to be "success or failure" May 19 11:53:34.007: INFO: Pod "var-expansion-5dcb3ce8-99c7-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.298439ms May 19 11:53:36.011: INFO: Pod "var-expansion-5dcb3ce8-99c7-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014392175s May 19 11:53:38.015: INFO: Pod "var-expansion-5dcb3ce8-99c7-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018029166s May 19 11:53:40.019: INFO: Pod "var-expansion-5dcb3ce8-99c7-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022553039s STEP: Saw pod success May 19 11:53:40.019: INFO: Pod "var-expansion-5dcb3ce8-99c7-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 11:53:40.023: INFO: Trying to get logs from node hunter-worker pod var-expansion-5dcb3ce8-99c7-11ea-abcb-0242ac110018 container dapi-container: STEP: delete the pod May 19 11:53:40.043: INFO: Waiting for pod var-expansion-5dcb3ce8-99c7-11ea-abcb-0242ac110018 to disappear May 19 11:53:40.047: INFO: Pod var-expansion-5dcb3ce8-99c7-11ea-abcb-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:53:40.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-zdq6m" for this suite. 
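The Variable Expansion test above exercises Kubernetes-style `$(VAR_NAME)` substitution in a container's `command`/`args` using the pod's environment variables. A simplified sketch of that substitution — the real expansion also supports the `$$(VAR)` escape, which this helper omits, and the env var name below is hypothetical:

```python
import re

# Simplified Kubernetes-style $(VAR_NAME) expansion: known variables
# are substituted from the environment map; unknown references are
# left untouched, matching the API's documented behavior.
def expand(arg: str, env: dict) -> str:
    return re.sub(
        r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)",
        lambda m: env.get(m.group(1), m.group(0)),
        arg,
    )

env = {"POD_NAME": "dapi-test-pod"}  # hypothetical env var
print(expand("echo $(POD_NAME)", env))   # echo dapi-test-pod
print(expand("echo $(MISSING)", env))    # echo $(MISSING)
```

The test creates a pod whose command contains such a reference, waits for `Succeeded`, and verifies the expanded value appears in the container's logs.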
May 19 11:53:46.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:53:46.118: INFO: namespace: e2e-tests-var-expansion-zdq6m, resource: bindings, ignored listing per whitelist May 19 11:53:46.145: INFO: namespace e2e-tests-var-expansion-zdq6m deletion completed in 6.09433165s • [SLOW TEST:12.295 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:53:46.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs May 19 11:53:46.248: INFO: Waiting up to 5m0s for pod "pod-6520edf8-99c7-11ea-abcb-0242ac110018" in namespace "e2e-tests-emptydir-fzxlv" to be "success or failure" May 19 11:53:46.270: INFO: Pod "pod-6520edf8-99c7-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 22.198173ms May 19 11:53:48.324: INFO: Pod "pod-6520edf8-99c7-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.076187527s May 19 11:53:50.328: INFO: Pod "pod-6520edf8-99c7-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.080389502s STEP: Saw pod success May 19 11:53:50.328: INFO: Pod "pod-6520edf8-99c7-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 11:53:50.330: INFO: Trying to get logs from node hunter-worker2 pod pod-6520edf8-99c7-11ea-abcb-0242ac110018 container test-container: STEP: delete the pod May 19 11:53:50.476: INFO: Waiting for pod pod-6520edf8-99c7-11ea-abcb-0242ac110018 to disappear May 19 11:53:50.558: INFO: Pod pod-6520edf8-99c7-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:53:50.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-fzxlv" for this suite. May 19 11:53:56.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:53:56.626: INFO: namespace: e2e-tests-emptydir-fzxlv, resource: bindings, ignored listing per whitelist May 19 11:53:56.647: INFO: namespace e2e-tests-emptydir-fzxlv deletion completed in 6.085619788s • [SLOW TEST:10.502 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:53:56.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 19 11:53:56.907: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6b76b0c4-99c7-11ea-abcb-0242ac110018" in namespace "e2e-tests-downward-api-wdnhq" to be "success or failure"
May 19 11:53:56.912: INFO: Pod "downwardapi-volume-6b76b0c4-99c7-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.612932ms
May 19 11:53:58.916: INFO: Pod "downwardapi-volume-6b76b0c4-99c7-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008814337s
May 19 11:54:00.919: INFO: Pod "downwardapi-volume-6b76b0c4-99c7-11ea-abcb-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.012178532s
May 19 11:54:02.924: INFO: Pod "downwardapi-volume-6b76b0c4-99c7-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016788023s
STEP: Saw pod success
May 19 11:54:02.924: INFO: Pod "downwardapi-volume-6b76b0c4-99c7-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 11:54:02.927: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-6b76b0c4-99c7-11ea-abcb-0242ac110018 container client-container: 
STEP: delete the pod
May 19 11:54:02.949: INFO: Waiting for pod downwardapi-volume-6b76b0c4-99c7-11ea-abcb-0242ac110018 to disappear
May 19 11:54:02.954: INFO: Pod downwardapi-volume-6b76b0c4-99c7-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:54:02.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-wdnhq" for this suite.
May 19 11:54:08.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:54:09.039: INFO: namespace: e2e-tests-downward-api-wdnhq, resource: bindings, ignored listing per whitelist
May 19 11:54:09.070: INFO: namespace e2e-tests-downward-api-wdnhq deletion completed in 6.112884276s
• [SLOW TEST:12.423 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:54:09.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 19 11:54:23.734: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 19 11:54:23.954: INFO: Pod pod-with-poststart-http-hook still exists
May 19 11:54:25.954: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 19 11:54:25.995: INFO: Pod pod-with-poststart-http-hook still exists
May 19 11:54:27.954: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 19 11:54:27.958: INFO: Pod pod-with-poststart-http-hook still exists
May 19 11:54:29.954: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 19 11:54:29.959: INFO: Pod pod-with-poststart-http-hook still exists
May 19 11:54:31.954: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 19 11:54:31.959: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:54:31.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-8wtvs" for this suite.
May 19 11:54:53.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:54:54.038: INFO: namespace: e2e-tests-container-lifecycle-hook-8wtvs, resource: bindings, ignored listing per whitelist May 19 11:54:54.054: INFO: namespace e2e-tests-container-lifecycle-hook-8wtvs deletion completed in 22.091249028s • [SLOW TEST:44.983 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:54:54.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-9jcqg I0519 11:54:54.199872 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-9jcqg, replica count: 1 I0519 11:54:55.250291 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 
unknown, 0 runningButNotReady I0519 11:54:56.250515 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 11:54:57.250706 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 11:54:58.250962 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 11:54:59.251100 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 19 11:54:59.555: INFO: Created: latency-svc-dvbxw May 19 11:54:59.664: INFO: Got endpoints: latency-svc-dvbxw [313.288511ms] May 19 11:54:59.944: INFO: Created: latency-svc-t4rv9 May 19 11:54:59.979: INFO: Got endpoints: latency-svc-t4rv9 [314.531376ms] May 19 11:55:00.095: INFO: Created: latency-svc-j8ncm May 19 11:55:00.117: INFO: Got endpoints: latency-svc-j8ncm [453.173495ms] May 19 11:55:00.159: INFO: Created: latency-svc-zgw2x May 19 11:55:00.474: INFO: Got endpoints: latency-svc-zgw2x [810.202616ms] May 19 11:55:00.702: INFO: Created: latency-svc-xxx76 May 19 11:55:00.716: INFO: Got endpoints: latency-svc-xxx76 [1.051486805s] May 19 11:55:00.772: INFO: Created: latency-svc-bg558 May 19 11:55:00.782: INFO: Got endpoints: latency-svc-bg558 [1.117844636s] May 19 11:55:01.446: INFO: Created: latency-svc-5cw26 May 19 11:55:01.449: INFO: Got endpoints: latency-svc-5cw26 [1.784181684s] May 19 11:55:02.123: INFO: Created: latency-svc-l2brz May 19 11:55:02.157: INFO: Got endpoints: latency-svc-l2brz [2.492744434s] May 19 11:55:02.170: INFO: Created: latency-svc-h8v5t May 19 11:55:02.185: INFO: Got endpoints: latency-svc-h8v5t [2.520503904s] May 19 11:55:02.214: INFO: Created: latency-svc-bqf9z May 19 11:55:02.244: INFO: Got endpoints: latency-svc-bqf9z 
[2.579249688s] May 19 11:55:02.302: INFO: Created: latency-svc-rvdkf May 19 11:55:02.304: INFO: Got endpoints: latency-svc-rvdkf [2.639642577s] May 19 11:55:02.835: INFO: Created: latency-svc-pcdwb May 19 11:55:02.906: INFO: Got endpoints: latency-svc-pcdwb [3.241603065s] May 19 11:55:03.354: INFO: Created: latency-svc-cjtmg May 19 11:55:03.406: INFO: Got endpoints: latency-svc-cjtmg [3.741271988s] May 19 11:55:03.413: INFO: Created: latency-svc-shsdf May 19 11:55:03.419: INFO: Got endpoints: latency-svc-shsdf [3.754983062s] May 19 11:55:03.959: INFO: Created: latency-svc-sqd4t May 19 11:55:03.965: INFO: Got endpoints: latency-svc-sqd4t [4.300468716s] May 19 11:55:04.477: INFO: Created: latency-svc-w6bc5 May 19 11:55:04.480: INFO: Got endpoints: latency-svc-w6bc5 [4.815861141s] May 19 11:55:04.546: INFO: Created: latency-svc-clq8t May 19 11:55:04.576: INFO: Got endpoints: latency-svc-clq8t [4.597359047s] May 19 11:55:04.576: INFO: Created: latency-svc-78sbg May 19 11:55:04.594: INFO: Got endpoints: latency-svc-78sbg [4.476227701s] May 19 11:55:05.247: INFO: Created: latency-svc-7l2nd May 19 11:55:05.291: INFO: Got endpoints: latency-svc-7l2nd [4.816555627s] May 19 11:55:05.428: INFO: Created: latency-svc-vvbvz May 19 11:55:05.431: INFO: Got endpoints: latency-svc-vvbvz [4.715172294s] May 19 11:55:06.111: INFO: Created: latency-svc-m8ntt May 19 11:55:06.385: INFO: Got endpoints: latency-svc-m8ntt [5.602714623s] May 19 11:55:06.787: INFO: Created: latency-svc-gl5k2 May 19 11:55:06.791: INFO: Got endpoints: latency-svc-gl5k2 [5.342196931s] May 19 11:55:07.487: INFO: Created: latency-svc-kqbpk May 19 11:55:07.562: INFO: Got endpoints: latency-svc-kqbpk [5.404876089s] May 19 11:55:08.078: INFO: Created: latency-svc-6xj75 May 19 11:55:08.114: INFO: Got endpoints: latency-svc-6xj75 [5.929368452s] May 19 11:55:08.146: INFO: Created: latency-svc-9m5rd May 19 11:55:08.283: INFO: Got endpoints: latency-svc-9m5rd [6.038975172s] May 19 11:55:08.735: INFO: Created: 
latency-svc-dnz5t May 19 11:55:08.785: INFO: Got endpoints: latency-svc-dnz5t [6.481174842s] May 19 11:55:09.101: INFO: Created: latency-svc-mtzm5 May 19 11:55:09.127: INFO: Got endpoints: latency-svc-mtzm5 [6.221190026s] May 19 11:55:09.658: INFO: Created: latency-svc-drkbl May 19 11:55:09.666: INFO: Got endpoints: latency-svc-drkbl [6.259763897s] May 19 11:55:10.192: INFO: Created: latency-svc-7mgf7 May 19 11:55:10.222: INFO: Created: latency-svc-jbkt2 May 19 11:55:10.249: INFO: Got endpoints: latency-svc-7mgf7 [6.829139937s] May 19 11:55:10.250: INFO: Created: latency-svc-s8fjc May 19 11:55:10.433: INFO: Got endpoints: latency-svc-s8fjc [5.952540384s] May 19 11:55:10.774: INFO: Got endpoints: latency-svc-jbkt2 [6.809000976s] May 19 11:55:10.851: INFO: Created: latency-svc-mmll8 May 19 11:55:10.894: INFO: Got endpoints: latency-svc-mmll8 [6.317647038s] May 19 11:55:11.408: INFO: Created: latency-svc-mk7rc May 19 11:55:11.416: INFO: Got endpoints: latency-svc-mk7rc [6.82253152s] May 19 11:55:11.444: INFO: Created: latency-svc-4mr26 May 19 11:55:11.452: INFO: Got endpoints: latency-svc-4mr26 [6.161115269s] May 19 11:55:12.036: INFO: Created: latency-svc-fr8cv May 19 11:55:12.064: INFO: Got endpoints: latency-svc-fr8cv [6.632847768s] May 19 11:55:12.582: INFO: Created: latency-svc-dwrqq May 19 11:55:12.643: INFO: Got endpoints: latency-svc-dwrqq [6.257731437s] May 19 11:55:13.122: INFO: Created: latency-svc-tf2sp May 19 11:55:13.135: INFO: Got endpoints: latency-svc-tf2sp [6.344021164s] May 19 11:55:13.194: INFO: Created: latency-svc-7s66h May 19 11:55:13.201: INFO: Got endpoints: latency-svc-7s66h [5.638674718s] May 19 11:55:13.771: INFO: Created: latency-svc-p6jgd May 19 11:55:13.848: INFO: Got endpoints: latency-svc-p6jgd [5.733261566s] May 19 11:55:13.891: INFO: Created: latency-svc-5nmmk May 19 11:55:13.914: INFO: Got endpoints: latency-svc-5nmmk [5.631486196s] May 19 11:55:14.063: INFO: Created: latency-svc-xl8gn May 19 11:55:14.119: INFO: Got endpoints: 
latency-svc-xl8gn [5.333645609s] May 19 11:55:14.225: INFO: Created: latency-svc-xmfzn May 19 11:55:14.225: INFO: Got endpoints: latency-svc-xmfzn [5.098030134s] May 19 11:55:14.828: INFO: Created: latency-svc-vb5s8 May 19 11:55:14.831: INFO: Got endpoints: latency-svc-vb5s8 [5.165611495s] May 19 11:55:15.806: INFO: Created: latency-svc-vg9dv May 19 11:55:15.809: INFO: Got endpoints: latency-svc-vg9dv [5.560534217s] May 19 11:55:16.470: INFO: Created: latency-svc-7kk7s May 19 11:55:16.478: INFO: Got endpoints: latency-svc-7kk7s [6.0450267s] May 19 11:55:17.086: INFO: Created: latency-svc-chgl9 May 19 11:55:17.143: INFO: Got endpoints: latency-svc-chgl9 [6.368731975s] May 19 11:55:17.823: INFO: Created: latency-svc-ls2vp May 19 11:55:17.826: INFO: Got endpoints: latency-svc-ls2vp [6.931707867s] May 19 11:55:18.002: INFO: Created: latency-svc-9mtq7 May 19 11:55:18.005: INFO: Got endpoints: latency-svc-9mtq7 [6.588786063s] May 19 11:55:18.572: INFO: Created: latency-svc-9kb8k May 19 11:55:18.575: INFO: Got endpoints: latency-svc-9kb8k [7.122835427s] May 19 11:55:18.635: INFO: Created: latency-svc-hwr4h May 19 11:55:18.649: INFO: Got endpoints: latency-svc-hwr4h [6.584373732s] May 19 11:55:18.726: INFO: Created: latency-svc-cscf8 May 19 11:55:18.726: INFO: Got endpoints: latency-svc-cscf8 [6.083220669s] May 19 11:55:18.754: INFO: Created: latency-svc-5bm8n May 19 11:55:18.769: INFO: Got endpoints: latency-svc-5bm8n [5.634160755s] May 19 11:55:19.278: INFO: Created: latency-svc-lc9j9 May 19 11:55:19.291: INFO: Got endpoints: latency-svc-lc9j9 [6.090459763s] May 19 11:55:19.316: INFO: Created: latency-svc-972kn May 19 11:55:19.335: INFO: Got endpoints: latency-svc-972kn [5.486915231s] May 19 11:55:19.391: INFO: Created: latency-svc-kj726 May 19 11:55:19.405: INFO: Got endpoints: latency-svc-kj726 [5.490655557s] May 19 11:55:20.008: INFO: Created: latency-svc-jvbvb May 19 11:55:20.030: INFO: Got endpoints: latency-svc-jvbvb [5.911492397s] May 19 11:55:20.075: INFO: 
Created: latency-svc-glpcz May 19 11:55:20.182: INFO: Got endpoints: latency-svc-glpcz [5.956498088s] May 19 11:55:20.183: INFO: Created: latency-svc-qjppd May 19 11:55:20.202: INFO: Got endpoints: latency-svc-qjppd [5.371181605s] May 19 11:55:20.265: INFO: Created: latency-svc-kxqzp May 19 11:55:20.373: INFO: Got endpoints: latency-svc-kxqzp [4.56347622s] May 19 11:55:20.849: INFO: Created: latency-svc-g5b4b May 19 11:55:20.948: INFO: Created: latency-svc-hndqd May 19 11:55:20.971: INFO: Got endpoints: latency-svc-g5b4b [4.492621252s] May 19 11:55:20.971: INFO: Got endpoints: latency-svc-hndqd [3.828004129s] May 19 11:55:20.997: INFO: Created: latency-svc-8z2mb May 19 11:55:21.012: INFO: Got endpoints: latency-svc-8z2mb [3.186507553s] May 19 11:55:21.643: INFO: Created: latency-svc-gg55c May 19 11:55:21.648: INFO: Got endpoints: latency-svc-gg55c [3.642824267s] May 19 11:55:22.228: INFO: Created: latency-svc-67vnk May 19 11:55:22.259: INFO: Got endpoints: latency-svc-67vnk [3.683431651s] May 19 11:55:22.289: INFO: Created: latency-svc-lsl9n May 19 11:55:22.492: INFO: Got endpoints: latency-svc-lsl9n [3.843663274s] May 19 11:55:22.906: INFO: Created: latency-svc-cqt5j May 19 11:55:22.911: INFO: Got endpoints: latency-svc-cqt5j [4.184969903s] May 19 11:55:22.979: INFO: Created: latency-svc-h5bhd May 19 11:55:23.103: INFO: Got endpoints: latency-svc-h5bhd [4.333896403s] May 19 11:55:23.914: INFO: Created: latency-svc-l4q4k May 19 11:55:23.923: INFO: Got endpoints: latency-svc-l4q4k [4.631494684s] May 19 11:55:24.579: INFO: Created: latency-svc-m58xc May 19 11:55:24.586: INFO: Got endpoints: latency-svc-m58xc [5.250984417s] May 19 11:55:24.614: INFO: Created: latency-svc-nw8gc May 19 11:55:24.631: INFO: Got endpoints: latency-svc-nw8gc [5.226408073s] May 19 11:55:24.653: INFO: Created: latency-svc-jvp7j May 19 11:55:24.668: INFO: Got endpoints: latency-svc-jvp7j [4.637036979s] May 19 11:55:25.131: INFO: Created: latency-svc-vhcd2 May 19 11:55:25.149: INFO: Got 
endpoints: latency-svc-vhcd2 [4.967117632s] May 19 11:55:25.173: INFO: Created: latency-svc-bqzhh May 19 11:55:25.547: INFO: Got endpoints: latency-svc-bqzhh [5.34444112s] May 19 11:55:26.337: INFO: Created: latency-svc-swg9n May 19 11:55:26.388: INFO: Got endpoints: latency-svc-swg9n [6.014864843s] May 19 11:55:26.643: INFO: Created: latency-svc-c6hth May 19 11:55:26.846: INFO: Got endpoints: latency-svc-c6hth [5.875354432s] May 19 11:55:26.898: INFO: Created: latency-svc-cf2fr May 19 11:55:26.934: INFO: Got endpoints: latency-svc-cf2fr [5.963328734s] May 19 11:55:27.026: INFO: Created: latency-svc-rnn8x May 19 11:55:27.028: INFO: Got endpoints: latency-svc-rnn8x [6.015721407s] May 19 11:55:27.647: INFO: Created: latency-svc-dnwzj May 19 11:55:27.655: INFO: Got endpoints: latency-svc-dnwzj [6.006740392s] May 19 11:55:27.710: INFO: Created: latency-svc-2zhdx May 19 11:55:27.721: INFO: Got endpoints: latency-svc-2zhdx [5.462559998s] May 19 11:55:28.252: INFO: Created: latency-svc-n9jmr May 19 11:55:28.266: INFO: Got endpoints: latency-svc-n9jmr [5.773425264s] May 19 11:55:28.289: INFO: Created: latency-svc-zm9z8 May 19 11:55:28.302: INFO: Got endpoints: latency-svc-zm9z8 [5.391277571s] May 19 11:55:28.846: INFO: Created: latency-svc-57l8z May 19 11:55:28.925: INFO: Got endpoints: latency-svc-57l8z [5.82124639s] May 19 11:55:28.927: INFO: Created: latency-svc-4dd7r May 19 11:55:28.950: INFO: Got endpoints: latency-svc-4dd7r [5.026947171s] May 19 11:55:29.548: INFO: Created: latency-svc-pvw9b May 19 11:55:29.618: INFO: Got endpoints: latency-svc-pvw9b [5.032351311s] May 19 11:55:30.335: INFO: Created: latency-svc-cj28k May 19 11:55:30.346: INFO: Got endpoints: latency-svc-cj28k [5.714649126s] May 19 11:55:30.887: INFO: Created: latency-svc-lbb68 May 19 11:55:30.891: INFO: Got endpoints: latency-svc-lbb68 [6.223755338s] May 19 11:55:30.960: INFO: Created: latency-svc-f5f9b May 19 11:55:31.019: INFO: Created: latency-svc-wv94r May 19 11:55:31.058: INFO: Got endpoints: 
latency-svc-f5f9b [5.909011429s] May 19 11:55:31.059: INFO: Created: latency-svc-cvbvz May 19 11:55:31.128: INFO: Got endpoints: latency-svc-cvbvz [4.739813035s] May 19 11:55:31.128: INFO: Got endpoints: latency-svc-wv94r [5.580573015s] May 19 11:55:31.211: INFO: Created: latency-svc-qnn5d May 19 11:55:31.559: INFO: Got endpoints: latency-svc-qnn5d [4.712549506s] May 19 11:55:31.563: INFO: Created: latency-svc-tzhd4 May 19 11:55:31.577: INFO: Got endpoints: latency-svc-tzhd4 [4.643196975s] May 19 11:55:32.120: INFO: Created: latency-svc-nww2m May 19 11:55:32.145: INFO: Got endpoints: latency-svc-nww2m [5.117158934s] May 19 11:55:32.794: INFO: Created: latency-svc-q6d7r May 19 11:55:32.822: INFO: Got endpoints: latency-svc-q6d7r [5.167642642s] May 19 11:55:33.375: INFO: Created: latency-svc-vgv2c May 19 11:55:33.403: INFO: Got endpoints: latency-svc-vgv2c [5.681394243s] May 19 11:55:33.433: INFO: Created: latency-svc-pcvwd May 19 11:55:33.474: INFO: Got endpoints: latency-svc-pcvwd [5.208264186s] May 19 11:55:33.489: INFO: Created: latency-svc-n4dxc May 19 11:55:33.506: INFO: Got endpoints: latency-svc-n4dxc [5.20340466s] May 19 11:55:33.544: INFO: Created: latency-svc-2scwd May 19 11:55:33.560: INFO: Got endpoints: latency-svc-2scwd [4.635128234s] May 19 11:55:33.631: INFO: Created: latency-svc-h7g9m May 19 11:55:33.635: INFO: Got endpoints: latency-svc-h7g9m [4.684716082s] May 19 11:55:33.711: INFO: Created: latency-svc-d8fxp May 19 11:55:33.728: INFO: Got endpoints: latency-svc-d8fxp [4.110406322s] May 19 11:55:33.816: INFO: Created: latency-svc-q9zdd May 19 11:55:33.889: INFO: Got endpoints: latency-svc-q9zdd [3.543116134s] May 19 11:55:34.472: INFO: Created: latency-svc-p4426 May 19 11:55:34.484: INFO: Got endpoints: latency-svc-p4426 [3.592699763s] May 19 11:55:34.526: INFO: Created: latency-svc-lxhck May 19 11:55:34.550: INFO: Got endpoints: latency-svc-lxhck [3.492267595s] May 19 11:55:35.059: INFO: Created: latency-svc-597bv May 19 11:55:35.071: INFO: Got 
endpoints: latency-svc-597bv [3.943222229s] May 19 11:55:35.120: INFO: Created: latency-svc-9xfn5 May 19 11:55:35.132: INFO: Got endpoints: latency-svc-9xfn5 [4.003964024s] May 19 11:55:35.169: INFO: Created: latency-svc-nsd4j May 19 11:55:35.266: INFO: Got endpoints: latency-svc-nsd4j [3.706832329s] May 19 11:55:35.294: INFO: Created: latency-svc-66w2h May 19 11:55:35.330: INFO: Got endpoints: latency-svc-66w2h [3.753124032s] May 19 11:55:35.439: INFO: Created: latency-svc-qz2nt May 19 11:55:35.442: INFO: Got endpoints: latency-svc-qz2nt [3.296984951s] May 19 11:55:35.498: INFO: Created: latency-svc-twg7f May 19 11:55:35.523: INFO: Got endpoints: latency-svc-twg7f [2.700534551s] May 19 11:55:35.589: INFO: Created: latency-svc-frhlv May 19 11:55:35.592: INFO: Got endpoints: latency-svc-frhlv [2.189132326s] May 19 11:55:35.674: INFO: Created: latency-svc-qxxsw May 19 11:55:35.762: INFO: Got endpoints: latency-svc-qxxsw [2.288007056s] May 19 11:55:35.834: INFO: Created: latency-svc-fhkqq May 19 11:55:35.850: INFO: Got endpoints: latency-svc-fhkqq [2.344447093s] May 19 11:55:35.900: INFO: Created: latency-svc-9hp9z May 19 11:55:35.904: INFO: Got endpoints: latency-svc-9hp9z [2.343558943s] May 19 11:55:35.938: INFO: Created: latency-svc-2txgm May 19 11:55:35.959: INFO: Got endpoints: latency-svc-2txgm [2.324780134s] May 19 11:55:35.992: INFO: Created: latency-svc-7f9g8 May 19 11:55:36.092: INFO: Got endpoints: latency-svc-7f9g8 [2.363149931s] May 19 11:55:36.100: INFO: Created: latency-svc-78v7z May 19 11:55:36.151: INFO: Got endpoints: latency-svc-78v7z [2.261875576s] May 19 11:55:36.688: INFO: Created: latency-svc-fzn6c May 19 11:55:36.878: INFO: Got endpoints: latency-svc-fzn6c [2.393397668s] May 19 11:55:36.908: INFO: Created: latency-svc-p774s May 19 11:55:36.942: INFO: Got endpoints: latency-svc-p774s [2.391786954s] May 19 11:55:37.169: INFO: Created: latency-svc-sn5hm May 19 11:55:37.221: INFO: Got endpoints: latency-svc-sn5hm [2.150113238s] May 19 11:55:37.368: 
INFO: Created: latency-svc-xh5ct May 19 11:55:37.400: INFO: Got endpoints: latency-svc-xh5ct [2.268274279s] May 19 11:55:37.462: INFO: Created: latency-svc-w4mqc May 19 11:55:37.528: INFO: Got endpoints: latency-svc-w4mqc [2.262826743s] May 19 11:55:37.592: INFO: Created: latency-svc-j6fsb May 19 11:55:37.608: INFO: Got endpoints: latency-svc-j6fsb [2.277583804s] May 19 11:55:37.624: INFO: Created: latency-svc-dgmv8 May 19 11:55:37.689: INFO: Got endpoints: latency-svc-dgmv8 [2.2467148s] May 19 11:55:37.757: INFO: Created: latency-svc-8tpc6 May 19 11:55:37.770: INFO: Got endpoints: latency-svc-8tpc6 [2.246869923s] May 19 11:55:37.859: INFO: Created: latency-svc-chgsb May 19 11:55:37.911: INFO: Created: latency-svc-gf582 May 19 11:55:37.911: INFO: Got endpoints: latency-svc-chgsb [2.318843581s] May 19 11:55:37.946: INFO: Got endpoints: latency-svc-gf582 [2.183906519s] May 19 11:55:38.010: INFO: Created: latency-svc-sgmgq May 19 11:55:38.023: INFO: Got endpoints: latency-svc-sgmgq [2.172922665s] May 19 11:55:38.057: INFO: Created: latency-svc-8668q May 19 11:55:38.066: INFO: Got endpoints: latency-svc-8668q [2.162567412s] May 19 11:55:38.086: INFO: Created: latency-svc-6kx4x May 19 11:55:38.206: INFO: Got endpoints: latency-svc-6kx4x [2.246024125s] May 19 11:55:38.207: INFO: Created: latency-svc-qpsmp May 19 11:55:38.217: INFO: Got endpoints: latency-svc-qpsmp [2.125281283s] May 19 11:55:38.259: INFO: Created: latency-svc-f7dts May 19 11:55:38.270: INFO: Got endpoints: latency-svc-f7dts [2.118809976s] May 19 11:55:38.295: INFO: Created: latency-svc-r5pb8 May 19 11:55:38.362: INFO: Got endpoints: latency-svc-r5pb8 [1.484274843s] May 19 11:55:38.387: INFO: Created: latency-svc-96sz7 May 19 11:55:38.397: INFO: Got endpoints: latency-svc-96sz7 [1.454586182s] May 19 11:55:38.419: INFO: Created: latency-svc-kcvmb May 19 11:55:38.458: INFO: Got endpoints: latency-svc-kcvmb [1.2369689s] May 19 11:55:38.559: INFO: Created: latency-svc-f7dwz May 19 11:55:38.589: INFO: Got 
endpoints: latency-svc-f7dwz [1.189299923s] May 19 11:55:38.609: INFO: Created: latency-svc-xmx8v May 19 11:55:38.634: INFO: Got endpoints: latency-svc-xmx8v [1.105024912s] May 19 11:55:38.721: INFO: Created: latency-svc-d6k45 May 19 11:55:38.728: INFO: Got endpoints: latency-svc-d6k45 [1.119462065s] May 19 11:55:38.759: INFO: Created: latency-svc-dtf7z May 19 11:55:38.776: INFO: Got endpoints: latency-svc-dtf7z [1.087115861s] May 19 11:55:38.795: INFO: Created: latency-svc-4ztml May 19 11:55:38.907: INFO: Got endpoints: latency-svc-4ztml [1.136472293s] May 19 11:55:38.943: INFO: Created: latency-svc-kjvmm May 19 11:55:38.956: INFO: Got endpoints: latency-svc-kjvmm [1.045528172s] May 19 11:55:39.017: INFO: Created: latency-svc-9h8w4 May 19 11:55:39.047: INFO: Got endpoints: latency-svc-9h8w4 [1.100502737s] May 19 11:55:39.087: INFO: Created: latency-svc-74bnp May 19 11:55:39.101: INFO: Got endpoints: latency-svc-74bnp [1.077912s] May 19 11:55:39.188: INFO: Created: latency-svc-tmvth May 19 11:55:39.191: INFO: Got endpoints: latency-svc-tmvth [1.124424209s] May 19 11:55:39.251: INFO: Created: latency-svc-vnsks May 19 11:55:39.270: INFO: Got endpoints: latency-svc-vnsks [1.064476294s] May 19 11:55:39.327: INFO: Created: latency-svc-lxfzk May 19 11:55:39.348: INFO: Got endpoints: latency-svc-lxfzk [1.130652641s] May 19 11:55:39.387: INFO: Created: latency-svc-g9jwq May 19 11:55:39.408: INFO: Got endpoints: latency-svc-g9jwq [1.137956954s] May 19 11:55:39.481: INFO: Created: latency-svc-4vcxq May 19 11:55:39.513: INFO: Got endpoints: latency-svc-4vcxq [1.151116878s] May 19 11:55:39.571: INFO: Created: latency-svc-n2cbq May 19 11:55:39.648: INFO: Got endpoints: latency-svc-n2cbq [1.251321403s] May 19 11:55:39.652: INFO: Created: latency-svc-79bjc May 19 11:55:39.667: INFO: Got endpoints: latency-svc-79bjc [1.208941652s] May 19 11:55:39.711: INFO: Created: latency-svc-zs6ff May 19 11:55:39.813: INFO: Got endpoints: latency-svc-zs6ff [1.223181421s] May 19 11:55:39.875: 
INFO: Created: latency-svc-hv556 May 19 11:55:39.942: INFO: Got endpoints: latency-svc-hv556 [1.308415735s] May 19 11:55:39.988: INFO: Created: latency-svc-zldmw May 19 11:55:40.015: INFO: Got endpoints: latency-svc-zldmw [1.287857544s] May 19 11:55:40.104: INFO: Created: latency-svc-vn6wr May 19 11:55:40.106: INFO: Got endpoints: latency-svc-vn6wr [1.330103146s] May 19 11:55:40.146: INFO: Created: latency-svc-q5krf May 19 11:55:40.160: INFO: Got endpoints: latency-svc-q5krf [1.252902503s] May 19 11:55:40.180: INFO: Created: latency-svc-rq64f May 19 11:55:40.190: INFO: Got endpoints: latency-svc-rq64f [1.233248071s] May 19 11:55:40.287: INFO: Created: latency-svc-4q4t5 May 19 11:55:40.517: INFO: Got endpoints: latency-svc-4q4t5 [1.470318627s] May 19 11:55:40.551: INFO: Created: latency-svc-44z5z May 19 11:55:40.574: INFO: Got endpoints: latency-svc-44z5z [1.472866934s] May 19 11:55:40.667: INFO: Created: latency-svc-2qcc2 May 19 11:55:40.670: INFO: Got endpoints: latency-svc-2qcc2 [1.478864222s] May 19 11:55:40.750: INFO: Created: latency-svc-r8vkb May 19 11:55:40.810: INFO: Got endpoints: latency-svc-r8vkb [1.540323764s] May 19 11:55:40.821: INFO: Created: latency-svc-dfg7s May 19 11:55:40.833: INFO: Got endpoints: latency-svc-dfg7s [1.485607441s] May 19 11:55:40.851: INFO: Created: latency-svc-xlzvh May 19 11:55:40.863: INFO: Got endpoints: latency-svc-xlzvh [1.454674472s] May 19 11:55:41.110: INFO: Created: latency-svc-44f5v May 19 11:55:41.165: INFO: Created: latency-svc-n6w79 May 19 11:55:41.182: INFO: Got endpoints: latency-svc-n6w79 [1.533686s] May 19 11:55:41.184: INFO: Got endpoints: latency-svc-44f5v [1.67036349s] May 19 11:55:41.202: INFO: Created: latency-svc-5bfvw May 19 11:55:41.205: INFO: Got endpoints: latency-svc-5bfvw [1.537792743s] May 19 11:55:41.273: INFO: Created: latency-svc-jfwfr May 19 11:55:41.301: INFO: Got endpoints: latency-svc-jfwfr [1.48817963s] May 19 11:55:41.320: INFO: Created: latency-svc-kdqxq May 19 11:55:41.332: INFO: Got 
endpoints: latency-svc-kdqxq [1.390140596s] May 19 11:55:41.352: INFO: Created: latency-svc-vt76d May 19 11:55:41.410: INFO: Got endpoints: latency-svc-vt76d [1.394407392s] May 19 11:55:41.428: INFO: Created: latency-svc-lx9fw May 19 11:55:41.452: INFO: Got endpoints: latency-svc-lx9fw [1.345300358s] May 19 11:55:41.478: INFO: Created: latency-svc-j72gj May 19 11:55:41.489: INFO: Got endpoints: latency-svc-j72gj [1.329497727s] May 19 11:55:41.579: INFO: Created: latency-svc-5wwq6 May 19 11:55:41.579: INFO: Got endpoints: latency-svc-5wwq6 [1.389624459s] May 19 11:55:41.632: INFO: Created: latency-svc-hntmq May 19 11:55:41.646: INFO: Got endpoints: latency-svc-hntmq [1.12816913s] May 19 11:55:41.662: INFO: Created: latency-svc-ckjjk May 19 11:55:41.676: INFO: Got endpoints: latency-svc-ckjjk [1.101537144s] May 19 11:55:41.739: INFO: Created: latency-svc-qf9js May 19 11:55:41.748: INFO: Got endpoints: latency-svc-qf9js [1.078183216s] May 19 11:55:41.766: INFO: Created: latency-svc-74k5r May 19 11:55:41.778: INFO: Got endpoints: latency-svc-74k5r [967.852032ms] May 19 11:55:41.812: INFO: Created: latency-svc-4j85k May 19 11:55:41.827: INFO: Got endpoints: latency-svc-4j85k [993.273742ms] May 19 11:55:41.900: INFO: Created: latency-svc-nm6w2 May 19 11:55:41.903: INFO: Got endpoints: latency-svc-nm6w2 [1.039931601s] May 19 11:55:41.957: INFO: Created: latency-svc-zbwth May 19 11:55:41.971: INFO: Got endpoints: latency-svc-zbwth [789.244511ms] May 19 11:55:42.005: INFO: Created: latency-svc-ntlhb May 19 11:55:42.071: INFO: Got endpoints: latency-svc-ntlhb [887.079993ms] May 19 11:55:42.072: INFO: Created: latency-svc-g2c9c May 19 11:55:42.094: INFO: Got endpoints: latency-svc-g2c9c [888.81431ms] May 19 11:55:42.131: INFO: Created: latency-svc-9xhck May 19 11:55:42.146: INFO: Got endpoints: latency-svc-9xhck [844.915466ms] May 19 11:55:42.200: INFO: Created: latency-svc-vz2hv May 19 11:55:42.212: INFO: Got endpoints: latency-svc-vz2hv [879.954671ms] May 19 11:55:42.232: 
INFO: Created: latency-svc-nch7k May 19 11:55:42.250: INFO: Got endpoints: latency-svc-nch7k [839.810082ms] May 19 11:55:42.276: INFO: Created: latency-svc-zqjtk May 19 11:55:42.291: INFO: Got endpoints: latency-svc-zqjtk [839.023018ms] May 19 11:55:42.342: INFO: Created: latency-svc-wk62b May 19 11:55:42.363: INFO: Got endpoints: latency-svc-wk62b [874.081171ms] May 19 11:55:42.382: INFO: Created: latency-svc-4h6nx May 19 11:55:42.406: INFO: Got endpoints: latency-svc-4h6nx [826.077113ms] May 19 11:55:42.432: INFO: Created: latency-svc-m46zf May 19 11:55:42.469: INFO: Got endpoints: latency-svc-m46zf [823.28857ms] May 19 11:55:42.479: INFO: Created: latency-svc-8tfcx May 19 11:55:42.490: INFO: Got endpoints: latency-svc-8tfcx [814.710162ms] May 19 11:55:42.510: INFO: Created: latency-svc-xgv9r May 19 11:55:42.532: INFO: Got endpoints: latency-svc-xgv9r [784.108396ms] May 19 11:55:42.568: INFO: Created: latency-svc-88sjx May 19 11:55:42.606: INFO: Got endpoints: latency-svc-88sjx [827.990734ms] May 19 11:55:42.624: INFO: Created: latency-svc-6vcgr May 19 11:55:42.641: INFO: Got endpoints: latency-svc-6vcgr [814.495403ms] May 19 11:55:42.666: INFO: Created: latency-svc-642jm May 19 11:55:42.696: INFO: Got endpoints: latency-svc-642jm [793.003992ms] May 19 11:55:42.756: INFO: Created: latency-svc-8cpqg May 19 11:55:42.759: INFO: Got endpoints: latency-svc-8cpqg [787.851802ms] May 19 11:55:42.790: INFO: Created: latency-svc-jt597 May 19 11:55:42.810: INFO: Got endpoints: latency-svc-jt597 [739.357343ms] May 19 11:55:42.847: INFO: Created: latency-svc-xr4wn May 19 11:55:42.924: INFO: Got endpoints: latency-svc-xr4wn [830.202229ms] May 19 11:55:42.940: INFO: Created: latency-svc-5vx2l May 19 11:55:42.954: INFO: Got endpoints: latency-svc-5vx2l [808.250988ms] May 19 11:55:42.984: INFO: Created: latency-svc-mtcfj May 19 11:55:42.996: INFO: Got endpoints: latency-svc-mtcfj [783.980106ms] May 19 11:55:43.020: INFO: Created: latency-svc-cvcdx May 19 11:55:43.092: INFO: Got 
endpoints: latency-svc-cvcdx [842.100819ms] May 19 11:55:43.120: INFO: Created: latency-svc-xhphm May 19 11:55:43.130: INFO: Got endpoints: latency-svc-xhphm [839.310519ms] May 19 11:55:43.170: INFO: Created: latency-svc-n2xcm May 19 11:55:43.185: INFO: Got endpoints: latency-svc-n2xcm [821.767391ms] May 19 11:55:43.241: INFO: Created: latency-svc-cdmz7 May 19 11:55:43.250: INFO: Got endpoints: latency-svc-cdmz7 [844.732446ms] May 19 11:55:43.276: INFO: Created: latency-svc-cdh5k May 19 11:55:43.286: INFO: Got endpoints: latency-svc-cdh5k [817.505446ms] May 19 11:55:43.318: INFO: Created: latency-svc-2dl6q May 19 11:55:43.329: INFO: Got endpoints: latency-svc-2dl6q [838.53642ms] May 19 11:55:43.329: INFO: Latencies: [314.531376ms 453.173495ms 739.357343ms 783.980106ms 784.108396ms 787.851802ms 789.244511ms 793.003992ms 808.250988ms 810.202616ms 814.495403ms 814.710162ms 817.505446ms 821.767391ms 823.28857ms 826.077113ms 827.990734ms 830.202229ms 838.53642ms 839.023018ms 839.310519ms 839.810082ms 842.100819ms 844.732446ms 844.915466ms 874.081171ms 879.954671ms 887.079993ms 888.81431ms 967.852032ms 993.273742ms 1.039931601s 1.045528172s 1.051486805s 1.064476294s 1.077912s 1.078183216s 1.087115861s 1.100502737s 1.101537144s 1.105024912s 1.117844636s 1.119462065s 1.124424209s 1.12816913s 1.130652641s 1.136472293s 1.137956954s 1.151116878s 1.189299923s 1.208941652s 1.223181421s 1.233248071s 1.2369689s 1.251321403s 1.252902503s 1.287857544s 1.308415735s 1.329497727s 1.330103146s 1.345300358s 1.389624459s 1.390140596s 1.394407392s 1.454586182s 1.454674472s 1.470318627s 1.472866934s 1.478864222s 1.484274843s 1.485607441s 1.48817963s 1.533686s 1.537792743s 1.540323764s 1.67036349s 1.784181684s 2.118809976s 2.125281283s 2.150113238s 2.162567412s 2.172922665s 2.183906519s 2.189132326s 2.246024125s 2.2467148s 2.246869923s 2.261875576s 2.262826743s 2.268274279s 2.277583804s 2.288007056s 2.318843581s 2.324780134s 2.343558943s 2.344447093s 2.363149931s 2.391786954s 2.393397668s 
2.492744434s 2.520503904s 2.579249688s 2.639642577s 2.700534551s 3.186507553s 3.241603065s 3.296984951s 3.492267595s 3.543116134s 3.592699763s 3.642824267s 3.683431651s 3.706832329s 3.741271988s 3.753124032s 3.754983062s 3.828004129s 3.843663274s 3.943222229s 4.003964024s 4.110406322s 4.184969903s 4.300468716s 4.333896403s 4.476227701s 4.492621252s 4.56347622s 4.597359047s 4.631494684s 4.635128234s 4.637036979s 4.643196975s 4.684716082s 4.712549506s 4.715172294s 4.739813035s 4.815861141s 4.816555627s 4.967117632s 5.026947171s 5.032351311s 5.098030134s 5.117158934s 5.165611495s 5.167642642s 5.20340466s 5.208264186s 5.226408073s 5.250984417s 5.333645609s 5.342196931s 5.34444112s 5.371181605s 5.391277571s 5.404876089s 5.462559998s 5.486915231s 5.490655557s 5.560534217s 5.580573015s 5.602714623s 5.631486196s 5.634160755s 5.638674718s 5.681394243s 5.714649126s 5.733261566s 5.773425264s 5.82124639s 5.875354432s 5.909011429s 5.911492397s 5.929368452s 5.952540384s 5.956498088s 5.963328734s 6.006740392s 6.014864843s 6.015721407s 6.038975172s 6.0450267s 6.083220669s 6.090459763s 6.161115269s 6.221190026s 6.223755338s 6.257731437s 6.259763897s 6.317647038s 6.344021164s 6.368731975s 6.481174842s 6.584373732s 6.588786063s 6.632847768s 6.809000976s 6.82253152s 6.829139937s 6.931707867s 7.122835427s] May 19 11:55:43.329: INFO: 50 %ile: 2.520503904s May 19 11:55:43.329: INFO: 90 %ile: 6.0450267s May 19 11:55:43.329: INFO: 99 %ile: 6.931707867s May 19 11:55:43.329: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:55:43.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-9jcqg" for this suite. 
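(Editor's note: the 50/90/99 %ile figures above are read from the sorted 200-sample latency list. As an illustration only, here is a hedged sketch of a nearest-rank percentile over a sorted sample list; the exact indexing the e2e framework uses may differ slightly.)

```python
import math

def nearest_rank_percentile(sorted_samples, p):
    # Nearest-rank method: take the ceil(p/100 * n)-th smallest sample.
    n = len(sorted_samples)
    idx = max(0, math.ceil(p / 100 * n) - 1)
    return sorted_samples[idx]

# Hypothetical latencies in seconds, shaped like the log's sample list.
latencies_s = sorted([0.31, 0.45, 0.74, 2.52, 2.58, 6.05, 6.93, 7.12])
p50 = nearest_rank_percentile(latencies_s, 50)
```

With 200 samples, the 50th percentile is simply the 100th smallest value, which is why the reported p50 falls in the middle of the printed slice.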
May 19 11:56:09.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:56:09.427: INFO: namespace: e2e-tests-svc-latency-9jcqg, resource: bindings, ignored listing per whitelist
May 19 11:56:09.465: INFO: namespace e2e-tests-svc-latency-9jcqg deletion completed in 26.085696789s
• [SLOW TEST:75.412 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:56:09.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-bab328fa-99c7-11ea-abcb-0242ac110018
STEP: Creating a pod to test consume configMaps
May 19 11:56:10.835: INFO: Waiting up to 5m0s for pod "pod-configmaps-bb2d7bdc-99c7-11ea-abcb-0242ac110018" in namespace "e2e-tests-configmap-bdnjj" to be "success or failure"
May 19 11:56:10.839: INFO: Pod "pod-configmaps-bb2d7bdc-99c7-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.8919ms
May 19 11:56:12.890: INFO: Pod "pod-configmaps-bb2d7bdc-99c7-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054476095s
May 19 11:56:14.893: INFO: Pod "pod-configmaps-bb2d7bdc-99c7-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057642221s
May 19 11:56:16.897: INFO: Pod "pod-configmaps-bb2d7bdc-99c7-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.061172584s
STEP: Saw pod success
May 19 11:56:16.897: INFO: Pod "pod-configmaps-bb2d7bdc-99c7-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 11:56:16.899: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-bb2d7bdc-99c7-11ea-abcb-0242ac110018 container configmap-volume-test:
STEP: delete the pod
May 19 11:56:16.936: INFO: Waiting for pod pod-configmaps-bb2d7bdc-99c7-11ea-abcb-0242ac110018 to disappear
May 19 11:56:16.945: INFO: Pod pod-configmaps-bb2d7bdc-99c7-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:56:16.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-bdnjj" for this suite.
May 19 11:56:22.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:56:22.977: INFO: namespace: e2e-tests-configmap-bdnjj, resource: bindings, ignored listing per whitelist
May 19 11:56:23.036: INFO: namespace e2e-tests-configmap-bdnjj deletion completed in 6.087648555s
• [SLOW TEST:13.571 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:56:23.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-c2a459b4-99c7-11ea-abcb-0242ac110018
STEP: Creating a pod to test consume secrets
May 19 11:56:23.151: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c2a63ab6-99c7-11ea-abcb-0242ac110018" in namespace "e2e-tests-projected-jrvwh" to be "success or failure"
May 19 11:56:23.155: INFO: Pod "pod-projected-secrets-c2a63ab6-99c7-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.733503ms
May 19 11:56:25.158: INFO: Pod "pod-projected-secrets-c2a63ab6-99c7-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00728892s
May 19 11:56:27.163: INFO: Pod "pod-projected-secrets-c2a63ab6-99c7-11ea-abcb-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.011888285s
May 19 11:56:29.168: INFO: Pod "pod-projected-secrets-c2a63ab6-99c7-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016579649s
STEP: Saw pod success
May 19 11:56:29.168: INFO: Pod "pod-projected-secrets-c2a63ab6-99c7-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 11:56:29.171: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-c2a63ab6-99c7-11ea-abcb-0242ac110018 container projected-secret-volume-test:
STEP: delete the pod
May 19 11:56:29.215: INFO: Waiting for pod pod-projected-secrets-c2a63ab6-99c7-11ea-abcb-0242ac110018 to disappear
May 19 11:56:29.240: INFO: Pod pod-projected-secrets-c2a63ab6-99c7-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:56:29.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jrvwh" for this suite.
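(Editor's note: the repeated `Waiting up to 5m0s for pod ... to be "success or failure"` / `Phase="Pending" ... Elapsed: ...` lines above come from a poll-until-terminal loop. A minimal sketch of such a loop, with a hypothetical `get_phase` callback rather than the real client-go calls:)

```python
import time

def wait_for_pod_terminal(get_phase, timeout=300.0, interval=2.0,
                          now=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it reports 'Succeeded' or 'Failed'
    (the log's "success or failure" condition) or the timeout expires."""
    start = now()
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        if now() - start >= timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(interval)

# Simulated phase sequence mirroring the Pending -> Running -> Succeeded lines.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
result = wait_for_pod_terminal(lambda: next(phases), sleep=lambda s: None)
```

Injecting `now` and `sleep` keeps the loop testable without real delays; the e2e framework's own wait helpers follow the same poll-with-deadline shape.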
May 19 11:56:35.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:56:35.323: INFO: namespace: e2e-tests-projected-jrvwh, resource: bindings, ignored listing per whitelist
May 19 11:56:35.340: INFO: namespace e2e-tests-projected-jrvwh deletion completed in 6.096498495s
• [SLOW TEST:12.304 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:56:35.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-4xtcf
May 19 11:56:39.482: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-4xtcf
STEP: checking the pod's current state and verifying that restartCount is present
May 19 11:56:39.484: INFO: Initial restart count of pod liveness-http is 0
May 19 11:56:57.522: INFO: Restart count of pod e2e-tests-container-probe-4xtcf/liveness-http is now 1 (18.038074289s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 11:56:57.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-4xtcf" for this suite.
May 19 11:57:03.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 11:57:03.673: INFO: namespace: e2e-tests-container-probe-4xtcf, resource: bindings, ignored listing per whitelist
May 19 11:57:03.690: INFO: namespace e2e-tests-container-probe-4xtcf deletion completed in 6.138401659s
• [SLOW TEST:28.350 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 11:57:03.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap
with name cm-test-opt-del-dadfb242-99c7-11ea-abcb-0242ac110018 STEP: Creating configMap with name cm-test-opt-upd-dadfb2af-99c7-11ea-abcb-0242ac110018 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-dadfb242-99c7-11ea-abcb-0242ac110018 STEP: Updating configmap cm-test-opt-upd-dadfb2af-99c7-11ea-abcb-0242ac110018 STEP: Creating configMap with name cm-test-opt-create-dadfb2dd-99c7-11ea-abcb-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:58:21.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-w8982" for this suite. May 19 11:58:45.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:58:45.224: INFO: namespace: e2e-tests-configmap-w8982, resource: bindings, ignored listing per whitelist May 19 11:58:45.247: INFO: namespace e2e-tests-configmap-w8982 deletion completed in 24.092707312s • [SLOW TEST:101.557 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:58:45.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting 
for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 19 11:58:49.884: INFO: Successfully updated pod "annotationupdate1768c658-99c8-11ea-abcb-0242ac110018" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:58:53.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-99zvm" for this suite. May 19 11:59:15.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:59:15.990: INFO: namespace: e2e-tests-downward-api-99zvm, resource: bindings, ignored listing per whitelist May 19 11:59:15.994: INFO: namespace e2e-tests-downward-api-99zvm deletion completed in 22.078642414s • [SLOW TEST:30.746 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 
19 11:59:15.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 19 11:59:16.112: INFO: Waiting up to 5m0s for pod "downwardapi-volume-29bdb1e6-99c8-11ea-abcb-0242ac110018" in namespace "e2e-tests-projected-nl652" to be "success or failure" May 19 11:59:16.140: INFO: Pod "downwardapi-volume-29bdb1e6-99c8-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 28.263641ms May 19 11:59:18.474: INFO: Pod "downwardapi-volume-29bdb1e6-99c8-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.361891661s May 19 11:59:20.478: INFO: Pod "downwardapi-volume-29bdb1e6-99c8-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.365424651s May 19 11:59:22.482: INFO: Pod "downwardapi-volume-29bdb1e6-99c8-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.370143188s STEP: Saw pod success May 19 11:59:22.482: INFO: Pod "downwardapi-volume-29bdb1e6-99c8-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 11:59:22.485: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-29bdb1e6-99c8-11ea-abcb-0242ac110018 container client-container: STEP: delete the pod May 19 11:59:22.512: INFO: Waiting for pod downwardapi-volume-29bdb1e6-99c8-11ea-abcb-0242ac110018 to disappear May 19 11:59:22.517: INFO: Pod downwardapi-volume-29bdb1e6-99c8-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 11:59:22.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-nl652" for this suite. May 19 11:59:28.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 11:59:28.748: INFO: namespace: e2e-tests-projected-nl652, resource: bindings, ignored listing per whitelist May 19 11:59:28.919: INFO: namespace e2e-tests-projected-nl652 deletion completed in 6.398554708s • [SLOW TEST:12.925 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 11:59:28.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0519 12:00:10.112513 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 19 12:00:10.112: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] 
[sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:00:10.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-tqqnk" for this suite. May 19 12:00:18.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:00:18.169: INFO: namespace: e2e-tests-gc-tqqnk, resource: bindings, ignored listing per whitelist May 19 12:00:18.187: INFO: namespace e2e-tests-gc-tqqnk deletion completed in 8.070849339s • [SLOW TEST:49.267 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:00:18.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-4eedd188-99c8-11ea-abcb-0242ac110018 STEP: Creating a pod to test consume secrets May 19 12:00:18.511: INFO: Waiting up to 5m0s for pod 
"pod-projected-secrets-4eef0701-99c8-11ea-abcb-0242ac110018" in namespace "e2e-tests-projected-cbqt8" to be "success or failure" May 19 12:00:18.545: INFO: Pod "pod-projected-secrets-4eef0701-99c8-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 33.359658ms May 19 12:00:20.575: INFO: Pod "pod-projected-secrets-4eef0701-99c8-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063512205s May 19 12:00:22.581: INFO: Pod "pod-projected-secrets-4eef0701-99c8-11ea-abcb-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.069067421s May 19 12:00:24.611: INFO: Pod "pod-projected-secrets-4eef0701-99c8-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.099339817s STEP: Saw pod success May 19 12:00:24.611: INFO: Pod "pod-projected-secrets-4eef0701-99c8-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 12:00:24.614: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-4eef0701-99c8-11ea-abcb-0242ac110018 container projected-secret-volume-test: STEP: delete the pod May 19 12:00:24.640: INFO: Waiting for pod pod-projected-secrets-4eef0701-99c8-11ea-abcb-0242ac110018 to disappear May 19 12:00:24.663: INFO: Pod pod-projected-secrets-4eef0701-99c8-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:00:24.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-cbqt8" for this suite. 
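(Editor's note: the garbage-collector test a few entries back deleted an RC with orphan delete options and then verified for 30 seconds that its pods survived. A toy model of orphan vs. cascading deletion over owner references; the names and dict representation are hypothetical, not the real GC implementation:)

```python
def delete_with_policy(objects, owner, policy="Orphan"):
    """objects: dict of object name -> set of owner names.
    Returns the surviving objects after deleting `owner`.
    'Orphan' strips the owner reference but keeps dependents;
    'Background'/'Foreground' cascade-delete dependents as well."""
    survivors = {}
    for name, owners in objects.items():
        if name == owner:
            continue  # the owner itself is deleted under every policy
        if owner in owners and policy != "Orphan":
            continue  # cascading delete removes dependents too
        survivors[name] = owners - {owner}  # orphaned dependents are kept
    return survivors

# An RC owning two pods, like the rc/pods in the GC test above.
cluster = {"rc": set(), "pod-a": {"rc"}, "pod-b": {"rc"}}
orphaned = delete_with_policy(cluster, "rc", policy="Orphan")
```

Under the orphan policy both pods survive with their owner reference removed, which is exactly the condition the test waits 30 seconds to confirm.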
May 19 12:00:30.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:00:30.807: INFO: namespace: e2e-tests-projected-cbqt8, resource: bindings, ignored listing per whitelist
May 19 12:00:30.820: INFO: namespace e2e-tests-projected-cbqt8 deletion completed in 6.153270873s
• [SLOW TEST:12.633 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] Services should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:00:30.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 12:00:30.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-5jhzg" for this suite.
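(Editor's note: the liveness-probe test earlier in this run saw restartCount go from 0 to 1 roughly 18s after the pod started failing its /healthz probe. A toy model of the consecutive-failure threshold; illustrative only — real kubelet behavior also involves periodSeconds, timeouts, and initial delays:)

```python
def attempts_until_restart(probe_results, failure_threshold=3):
    """Return the 1-based probe attempt at which `failure_threshold`
    consecutive failures (False) would trigger a restart, or None."""
    consecutive = 0
    for attempt, ok in enumerate(probe_results, start=1):
        consecutive = 0 if ok else consecutive + 1  # a success resets the count
        if consecutive >= failure_threshold:
            return attempt
    return None

# Healthy at first, then /healthz starts failing on every probe.
restart_at = attempts_until_restart([True, True, False, False, False])
```

With the default threshold of 3 and a probe period of a few seconds, several consecutive failures are needed before a restart, which is consistent with the ~18s gap seen in the log.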
May 19 12:00:37.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:00:37.032: INFO: namespace: e2e-tests-services-5jhzg, resource: bindings, ignored listing per whitelist May 19 12:00:37.085: INFO: namespace e2e-tests-services-5jhzg deletion completed in 6.094379891s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.265 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:00:37.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 19 12:00:37.161: INFO: PodSpec: initContainers in spec.initContainers May 19 12:01:32.096: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, 
ObjectMeta:v1.ObjectMeta{Name:"pod-init-5a0e3d2f-99c8-11ea-abcb-0242ac110018", GenerateName:"", Namespace:"e2e-tests-init-container-lmf4n", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-lmf4n/pods/pod-init-5a0e3d2f-99c8-11ea-abcb-0242ac110018", UID:"5a0fff10-99c8-11ea-99e8-0242ac110002", ResourceVersion:"11400109", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725486437, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"161311272"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-q5ttd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002902380), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), 
ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-q5ttd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-q5ttd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", 
Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-q5ttd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0024c8ac8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00286a2a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", 
Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0024c8c50)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0024c8c70)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0024c8c78), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0024c8c7c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725486437, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725486437, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725486437, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725486437, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.241", StartTime:(*v1.Time)(0xc0029000e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002847a40)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002847ab0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://99ccc58548d438ffe27064a4d7ab7d978d0b7b96c10f7dbd64dac0467ea20cd3"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002900140), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002900100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:01:32.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-lmf4n" for this suite. 
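For readability, the v1.Pod struct dumped above corresponds to roughly the following manifest (a sketch reconstructed from the fields in the log, not the test's actual source; defaults and generated metadata are omitted):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-5a0e3d2f-99c8-11ea-abcb-0242ac110018
  namespace: e2e-tests-init-container-lmf4n
  labels:
    name: foo
    time: "161311272"
spec:
  restartPolicy: Always            # RestartAlways: the failed init container is retried forever
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]        # always exits non-zero, so init2 and run1 never start
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:                     # equal requests/limits, hence QOSClass "Guaranteed" in the dump
      limits:
        cpu: 100m
        memory: "52428800"
      requests:
        cpu: 100m
        memory: "52428800"
```

This matches the status in the dump: init1 shows RestartCount:3 with a Terminated state, while init2 and run1 remain Waiting with empty ContainerIDs.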
May 19 12:01:54.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:01:54.242: INFO: namespace: e2e-tests-init-container-lmf4n, resource: bindings, ignored listing per whitelist
May 19 12:01:54.248: INFO: namespace e2e-tests-init-container-lmf4n deletion completed in 22.083935896s
• [SLOW TEST:77.163 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:01:54.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-884564d4-99c8-11ea-abcb-0242ac110018
STEP: Creating a pod to test consume configMaps
May 19 12:01:54.848: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8849b1aa-99c8-11ea-abcb-0242ac110018" in namespace "e2e-tests-projected-rbv29" to be "success or failure"
May 19 12:01:55.049: INFO: Pod "pod-projected-configmaps-8849b1aa-99c8-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 201.415705ms
May 19 12:01:57.053: INFO: Pod "pod-projected-configmaps-8849b1aa-99c8-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205528387s
May 19 12:01:59.056: INFO: Pod "pod-projected-configmaps-8849b1aa-99c8-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.20887189s
May 19 12:02:01.097: INFO: Pod "pod-projected-configmaps-8849b1aa-99c8-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.249734384s
STEP: Saw pod success
May 19 12:02:01.097: INFO: Pod "pod-projected-configmaps-8849b1aa-99c8-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 12:02:01.099: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-8849b1aa-99c8-11ea-abcb-0242ac110018 container projected-configmap-volume-test:
STEP: delete the pod
May 19 12:02:01.159: INFO: Waiting for pod pod-projected-configmaps-8849b1aa-99c8-11ea-abcb-0242ac110018 to disappear
May 19 12:02:01.367: INFO: Pod pod-projected-configmaps-8849b1aa-99c8-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 12:02:01.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rbv29" for this suite.
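The "multiple volumes" pattern this test exercises is the same ConfigMap projected into two volumes of one pod. A minimal sketch (volume names, mount paths, and the command are assumptions for illustration; only the ConfigMap and container names come from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/projected-configmap-volume/*"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
    - name: projected-configmap-volume-2
      mountPath: /etc/projected-configmap-volume-2
  volumes:                                  # the same ConfigMap, projected twice
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-884564d4-99c8-11ea-abcb-0242ac110018
  - name: projected-configmap-volume-2
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-884564d4-99c8-11ea-abcb-0242ac110018
```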
May 19 12:02:07.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:02:07.432: INFO: namespace: e2e-tests-projected-rbv29, resource: bindings, ignored listing per whitelist
May 19 12:02:07.469: INFO: namespace e2e-tests-projected-rbv29 deletion completed in 6.098301964s
• [SLOW TEST:13.221 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:02:07.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
May 19 12:02:07.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-5b5ch'
May 19 12:02:11.272: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 19 12:02:11.272: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
May 19 12:02:13.298: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-jdvh8]
May 19 12:02:13.298: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-jdvh8" in namespace "e2e-tests-kubectl-5b5ch" to be "running and ready"
May 19 12:02:13.301: INFO: Pod "e2e-test-nginx-rc-jdvh8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.126335ms
May 19 12:02:15.304: INFO: Pod "e2e-test-nginx-rc-jdvh8": Phase="Running", Reason="", readiness=true. Elapsed: 2.006294094s
May 19 12:02:15.304: INFO: Pod "e2e-test-nginx-rc-jdvh8" satisfied condition "running and ready"
May 19 12:02:15.304: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-jdvh8]
May 19 12:02:15.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-5b5ch'
May 19 12:02:15.420: INFO: stderr: ""
May 19 12:02:15.420: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
May 19 12:02:15.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-5b5ch'
May 19 12:02:15.544: INFO: stderr: ""
May 19 12:02:15.544: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 12:02:15.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5b5ch" for this suite.
May 19 12:02:39.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:02:39.585: INFO: namespace: e2e-tests-kubectl-5b5ch, resource: bindings, ignored listing per whitelist
May 19 12:02:39.650: INFO: namespace e2e-tests-kubectl-5b5ch deletion completed in 24.101589946s
• [SLOW TEST:32.180 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:02:39.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-a33374d5-99c8-11ea-abcb-0242ac110018
STEP: Creating a pod to test consume secrets
May 19 12:02:39.895: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a334cb4b-99c8-11ea-abcb-0242ac110018" in namespace "e2e-tests-projected-pl9qs" to be "success or failure"
May 19 12:02:39.899: INFO: Pod "pod-projected-secrets-a334cb4b-99c8-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.783838ms
May 19 12:02:42.015: INFO: Pod "pod-projected-secrets-a334cb4b-99c8-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119781897s
May 19 12:02:44.018: INFO: Pod "pod-projected-secrets-a334cb4b-99c8-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.123449279s
STEP: Saw pod success
May 19 12:02:44.018: INFO: Pod "pod-projected-secrets-a334cb4b-99c8-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 12:02:44.021: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-a334cb4b-99c8-11ea-abcb-0242ac110018 container projected-secret-volume-test:
STEP: delete the pod
May 19 12:02:44.119: INFO: Waiting for pod pod-projected-secrets-a334cb4b-99c8-11ea-abcb-0242ac110018 to disappear
May 19 12:02:44.126: INFO: Pod pod-projected-secrets-a334cb4b-99c8-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 12:02:44.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pl9qs" for this suite.
May 19 12:02:50.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:02:50.178: INFO: namespace: e2e-tests-projected-pl9qs, resource: bindings, ignored listing per whitelist
May 19 12:02:50.226: INFO: namespace e2e-tests-projected-pl9qs deletion completed in 6.096593617s
• [SLOW TEST:10.576 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:02:50.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
May 19 12:02:50.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zggb4'
May 19 12:02:50.574: INFO: stderr: ""
May 19 12:02:50.574: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
May 19 12:02:51.579: INFO: Selector matched 1 pods for map[app:redis]
May 19 12:02:51.579: INFO: Found 0 / 1
May 19 12:02:52.578: INFO: Selector matched 1 pods for map[app:redis]
May 19 12:02:52.578: INFO: Found 0 / 1
May 19 12:02:53.577: INFO: Selector matched 1 pods for map[app:redis]
May 19 12:02:53.577: INFO: Found 0 / 1
May 19 12:02:54.580: INFO: Selector matched 1 pods for map[app:redis]
May 19 12:02:54.580: INFO: Found 1 / 1
May 19 12:02:54.580: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
May 19 12:02:54.613: INFO: Selector matched 1 pods for map[app:redis]
May 19 12:02:54.613: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings
May 19 12:02:54.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-4ctzt redis-master --namespace=e2e-tests-kubectl-zggb4'
May 19 12:02:54.731: INFO: stderr: ""
May 19 12:02:54.731: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_.
''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 19 May 12:02:53.530 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 19 May 12:02:53.530 # Server started, Redis version 3.2.12\n1:M 19 May 12:02:53.530 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 19 May 12:02:53.530 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
May 19 12:02:54.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-4ctzt redis-master --namespace=e2e-tests-kubectl-zggb4 --tail=1'
May 19 12:02:54.847: INFO: stderr: ""
May 19 12:02:54.848: INFO: stdout: "1:M 19 May 12:02:53.530 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
May 19 12:02:54.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-4ctzt redis-master --namespace=e2e-tests-kubectl-zggb4 --limit-bytes=1'
May 19 12:02:54.949: INFO: stderr: ""
May 19 12:02:54.949: INFO: stdout: " "
STEP: exposing timestamps
May 19 12:02:54.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-4ctzt redis-master --namespace=e2e-tests-kubectl-zggb4 --tail=1 --timestamps'
May 19 12:02:55.051: INFO: stderr: ""
May 19 12:02:55.051: INFO: stdout: "2020-05-19T12:02:53.530918677Z 1:M 19 May 12:02:53.530 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
May 19 12:02:57.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-4ctzt redis-master --namespace=e2e-tests-kubectl-zggb4 --since=1s'
May 19 12:02:57.666: INFO: stderr: ""
May 19 12:02:57.666: INFO: stdout: ""
May 19 12:02:57.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-4ctzt redis-master --namespace=e2e-tests-kubectl-zggb4 --since=24h'
May 19 12:02:57.773: INFO: stderr: ""
May 19 12:02:57.773: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```.
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 19 May 12:02:53.530 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 19 May 12:02:53.530 # Server started, Redis version 3.2.12\n1:M 19 May 12:02:53.530 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 19 May 12:02:53.530 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
May 19 12:02:57.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-zggb4'
May 19 12:02:57.869: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n"
May 19 12:02:57.869: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
May 19 12:02:57.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-zggb4'
May 19 12:02:57.968: INFO: stderr: "No resources found.\n"
May 19 12:02:57.968: INFO: stdout: ""
May 19 12:02:57.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-zggb4 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 19 12:02:58.082: INFO: stderr: ""
May 19 12:02:58.082: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 12:02:58.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zggb4" for this suite.
May 19 12:03:18.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:03:18.150: INFO: namespace: e2e-tests-kubectl-zggb4, resource: bindings, ignored listing per whitelist
May 19 12:03:18.286: INFO: namespace e2e-tests-kubectl-zggb4 deletion completed in 20.200901702s
• [SLOW TEST:28.060 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:03:18.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
May 19 12:03:18.444: INFO: Waiting up to 5m0s for pod "downward-api-ba2eba9b-99c8-11ea-abcb-0242ac110018" in namespace "e2e-tests-downward-api-fpt8m" to be "success or failure"
May 19 12:03:18.459: INFO: Pod "downward-api-ba2eba9b-99c8-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.523383ms
May 19 12:03:20.602: INFO: Pod "downward-api-ba2eba9b-99c8-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157938128s
May 19 12:03:22.632: INFO: Pod "downward-api-ba2eba9b-99c8-11ea-abcb-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.18744275s
May 19 12:03:24.636: INFO: Pod "downward-api-ba2eba9b-99c8-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.191716304s
STEP: Saw pod success
May 19 12:03:24.636: INFO: Pod "downward-api-ba2eba9b-99c8-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 12:03:24.639: INFO: Trying to get logs from node hunter-worker2 pod downward-api-ba2eba9b-99c8-11ea-abcb-0242ac110018 container dapi-container:
STEP: delete the pod
May 19 12:03:24.902: INFO: Waiting for pod downward-api-ba2eba9b-99c8-11ea-abcb-0242ac110018 to disappear
May 19 12:03:25.057: INFO: Pod downward-api-ba2eba9b-99c8-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 12:03:25.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fpt8m" for this suite.
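The downward-API wiring this test exercises is the standard `fieldRef` pattern: the pod's own UID is injected into a container environment variable. A minimal sketch (the container name `dapi-container` comes from the log; the pod name, image, command, and env var name are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "env"]
    env:
    - name: POD_UID            # populated from pod metadata via the downward API
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
```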
May 19 12:03:31.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:03:31.358: INFO: namespace: e2e-tests-downward-api-fpt8m, resource: bindings, ignored listing per whitelist
May 19 12:03:31.385: INFO: namespace e2e-tests-downward-api-fpt8m deletion completed in 6.323637888s
• [SLOW TEST:13.098 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:03:31.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
May 19 12:03:38.965: INFO: 9 pods remaining
May 19 12:03:38.965: INFO: 0 pods has nil DeletionTimestamp
May 19 12:03:38.965: INFO:
May 19 12:03:40.165: INFO: 0 pods remaining
May 19 12:03:40.165: INFO: 0 pods has nil DeletionTimestamp
May 19 12:03:40.166: INFO:
May 19 12:03:40.790: INFO: 0 pods remaining
May 19 12:03:40.790: INFO: 0 pods has nil DeletionTimestamp
May 19 12:03:40.790: INFO:
STEP: Gathering metrics
W0519 12:03:41.689088 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 19 12:03:41.689: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 12:03:41.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-jvgjg" for this suite.
May 19 12:03:47.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:03:48.029: INFO: namespace: e2e-tests-gc-jvgjg, resource: bindings, ignored listing per whitelist
May 19 12:03:48.037: INFO: namespace e2e-tests-gc-jvgjg deletion completed in 6.344758833s

• [SLOW TEST:16.652 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Pods
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:03:48.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 19 12:03:48.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 12:03:52.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-vf5c5" for this suite.
May 19 12:04:32.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:04:32.509: INFO: namespace: e2e-tests-pods-vf5c5, resource: bindings, ignored listing per whitelist
May 19 12:04:32.525: INFO: namespace e2e-tests-pods-vf5c5 deletion completed in 40.166807427s

• [SLOW TEST:44.488 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:04:32.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-qh5k
STEP: Creating a pod to test atomic-volume-subpath
May 19 12:04:32.714: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-qh5k" in namespace "e2e-tests-subpath-fg6nh" to be "success or failure"
May 19 12:04:32.718: INFO: Pod "pod-subpath-test-projected-qh5k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326783ms
May 19 12:04:35.003: INFO: Pod "pod-subpath-test-projected-qh5k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289690034s
May 19 12:04:37.043: INFO: Pod "pod-subpath-test-projected-qh5k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329010852s
May 19 12:04:39.076: INFO: Pod "pod-subpath-test-projected-qh5k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.362266542s
May 19 12:04:41.079: INFO: Pod "pod-subpath-test-projected-qh5k": Phase="Running", Reason="", readiness=false. Elapsed: 8.365642348s
May 19 12:04:43.084: INFO: Pod "pod-subpath-test-projected-qh5k": Phase="Running", Reason="", readiness=false. Elapsed: 10.370234603s
May 19 12:04:45.088: INFO: Pod "pod-subpath-test-projected-qh5k": Phase="Running", Reason="", readiness=false. Elapsed: 12.374090409s
May 19 12:04:47.092: INFO: Pod "pod-subpath-test-projected-qh5k": Phase="Running", Reason="", readiness=false. Elapsed: 14.378212105s
May 19 12:04:49.096: INFO: Pod "pod-subpath-test-projected-qh5k": Phase="Running", Reason="", readiness=false. Elapsed: 16.38265068s
May 19 12:04:51.101: INFO: Pod "pod-subpath-test-projected-qh5k": Phase="Running", Reason="", readiness=false. Elapsed: 18.387013078s
May 19 12:04:53.105: INFO: Pod "pod-subpath-test-projected-qh5k": Phase="Running", Reason="", readiness=false. Elapsed: 20.391891266s
May 19 12:04:55.117: INFO: Pod "pod-subpath-test-projected-qh5k": Phase="Running", Reason="", readiness=false. Elapsed: 22.403206122s
May 19 12:04:57.121: INFO: Pod "pod-subpath-test-projected-qh5k": Phase="Running", Reason="", readiness=false. Elapsed: 24.407375013s
May 19 12:04:59.126: INFO: Pod "pod-subpath-test-projected-qh5k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.412851301s
STEP: Saw pod success
May 19 12:04:59.127: INFO: Pod "pod-subpath-test-projected-qh5k" satisfied condition "success or failure"
May 19 12:04:59.130: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-projected-qh5k container test-container-subpath-projected-qh5k:
STEP: delete the pod
May 19 12:04:59.347: INFO: Waiting for pod pod-subpath-test-projected-qh5k to disappear
May 19 12:04:59.425: INFO: Pod pod-subpath-test-projected-qh5k no longer exists
STEP: Deleting pod pod-subpath-test-projected-qh5k
May 19 12:04:59.425: INFO: Deleting pod "pod-subpath-test-projected-qh5k" in namespace "e2e-tests-subpath-fg6nh"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 12:04:59.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-fg6nh" for this suite.
May 19 12:05:05.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:05:05.514: INFO: namespace: e2e-tests-subpath-fg6nh, resource: bindings, ignored listing per whitelist
May 19 12:05:05.542: INFO: namespace e2e-tests-subpath-fg6nh deletion completed in 6.111703654s

• [SLOW TEST:33.017 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Pods
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:05:05.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 19 12:05:09.788: INFO: Waiting up to 5m0s for pod "client-envvars-fc8d1438-99c8-11ea-abcb-0242ac110018" in namespace "e2e-tests-pods-c5fqn" to be "success or failure"
May 19 12:05:09.835: INFO: Pod "client-envvars-fc8d1438-99c8-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 46.202209ms
May 19 12:05:11.838: INFO: Pod "client-envvars-fc8d1438-99c8-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049838504s
May 19 12:05:13.842: INFO: Pod "client-envvars-fc8d1438-99c8-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053660595s
May 19 12:05:15.846: INFO: Pod "client-envvars-fc8d1438-99c8-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.057201774s
STEP: Saw pod success
May 19 12:05:15.846: INFO: Pod "client-envvars-fc8d1438-99c8-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 12:05:15.848: INFO: Trying to get logs from node hunter-worker pod client-envvars-fc8d1438-99c8-11ea-abcb-0242ac110018 container env3cont:
STEP: delete the pod
May 19 12:05:15.886: INFO: Waiting for pod client-envvars-fc8d1438-99c8-11ea-abcb-0242ac110018 to disappear
May 19 12:05:15.903: INFO: Pod client-envvars-fc8d1438-99c8-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 12:05:15.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-c5fqn" for this suite.
May 19 12:05:55.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:05:56.207: INFO: namespace: e2e-tests-pods-c5fqn, resource: bindings, ignored listing per whitelist
May 19 12:05:56.231: INFO: namespace e2e-tests-pods-c5fqn deletion completed in 40.324643227s

• [SLOW TEST:50.689 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:05:56.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-186bc960-99c9-11ea-abcb-0242ac110018
STEP: Creating a pod to test consume secrets
May 19 12:05:56.570: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-186d2ef7-99c9-11ea-abcb-0242ac110018" in namespace "e2e-tests-projected-wr9hn" to be "success or failure"
May 19 12:05:56.580: INFO: Pod "pod-projected-secrets-186d2ef7-99c9-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 9.203844ms
May 19 12:05:58.584: INFO: Pod "pod-projected-secrets-186d2ef7-99c9-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013185203s
May 19 12:06:00.634: INFO: Pod "pod-projected-secrets-186d2ef7-99c9-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063696382s
May 19 12:06:02.637: INFO: Pod "pod-projected-secrets-186d2ef7-99c9-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.066934872s
STEP: Saw pod success
May 19 12:06:02.637: INFO: Pod "pod-projected-secrets-186d2ef7-99c9-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 12:06:02.640: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-186d2ef7-99c9-11ea-abcb-0242ac110018 container secret-volume-test:
STEP: delete the pod
May 19 12:06:02.674: INFO: Waiting for pod pod-projected-secrets-186d2ef7-99c9-11ea-abcb-0242ac110018 to disappear
May 19 12:06:02.766: INFO: Pod pod-projected-secrets-186d2ef7-99c9-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 12:06:02.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wr9hn" for this suite.
May 19 12:06:08.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:06:08.847: INFO: namespace: e2e-tests-projected-wr9hn, resource: bindings, ignored listing per whitelist
May 19 12:06:08.866: INFO: namespace e2e-tests-projected-wr9hn deletion completed in 6.095887188s

• [SLOW TEST:12.635 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:06:08.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 19 12:06:08.952: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1fcee7f4-99c9-11ea-abcb-0242ac110018" in namespace "e2e-tests-downward-api-4v2gl" to be "success or failure"
May 19 12:06:08.963: INFO: Pod "downwardapi-volume-1fcee7f4-99c9-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.579224ms
May 19 12:06:10.967: INFO: Pod "downwardapi-volume-1fcee7f4-99c9-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014202431s
May 19 12:06:12.970: INFO: Pod "downwardapi-volume-1fcee7f4-99c9-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01783838s
May 19 12:06:14.974: INFO: Pod "downwardapi-volume-1fcee7f4-99c9-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.0219227s
STEP: Saw pod success
May 19 12:06:14.974: INFO: Pod "downwardapi-volume-1fcee7f4-99c9-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 12:06:14.977: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-1fcee7f4-99c9-11ea-abcb-0242ac110018 container client-container:
STEP: delete the pod
May 19 12:06:15.019: INFO: Waiting for pod downwardapi-volume-1fcee7f4-99c9-11ea-abcb-0242ac110018 to disappear
May 19 12:06:15.038: INFO: Pod downwardapi-volume-1fcee7f4-99c9-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 12:06:15.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4v2gl" for this suite.
May 19 12:06:21.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:06:21.083: INFO: namespace: e2e-tests-downward-api-4v2gl, resource: bindings, ignored listing per whitelist
May 19 12:06:21.123: INFO: namespace e2e-tests-downward-api-4v2gl deletion completed in 6.080480773s

• [SLOW TEST:12.256 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:06:21.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 19 12:06:21.220: INFO: Waiting up to 5m0s for pod "downwardapi-volume-271f94b1-99c9-11ea-abcb-0242ac110018" in namespace "e2e-tests-downward-api-zc8ct" to be "success or failure"
May 19 12:06:21.243: INFO: Pod "downwardapi-volume-271f94b1-99c9-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 22.228654ms
May 19 12:06:23.247: INFO: Pod "downwardapi-volume-271f94b1-99c9-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026798456s
May 19 12:06:25.251: INFO: Pod "downwardapi-volume-271f94b1-99c9-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031100769s
STEP: Saw pod success
May 19 12:06:25.251: INFO: Pod "downwardapi-volume-271f94b1-99c9-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 12:06:25.254: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-271f94b1-99c9-11ea-abcb-0242ac110018 container client-container:
STEP: delete the pod
May 19 12:06:25.336: INFO: Waiting for pod downwardapi-volume-271f94b1-99c9-11ea-abcb-0242ac110018 to disappear
May 19 12:06:25.365: INFO: Pod downwardapi-volume-271f94b1-99c9-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 12:06:25.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-zc8ct" for this suite.
May 19 12:06:31.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:06:31.590: INFO: namespace: e2e-tests-downward-api-zc8ct, resource: bindings, ignored listing per whitelist
May 19 12:06:31.603: INFO: namespace e2e-tests-downward-api-zc8ct deletion completed in 6.233725838s

• [SLOW TEST:10.480 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:06:31.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-qlp6p
May 19 12:06:35.722: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-qlp6p
STEP: checking the pod's current state and verifying that restartCount is present
May 19 12:06:35.723: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 12:10:37.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-qlp6p" for this suite.
May 19 12:10:43.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:10:43.640: INFO: namespace: e2e-tests-container-probe-qlp6p, resource: bindings, ignored listing per whitelist
May 19 12:10:43.687: INFO: namespace e2e-tests-container-probe-qlp6p deletion completed in 6.071594886s

• [SLOW TEST:252.084 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:10:43.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-c39e052e-99c9-11ea-abcb-0242ac110018
STEP: Creating a pod to test consume configMaps
May 19 12:10:43.782: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c39f88a9-99c9-11ea-abcb-0242ac110018" in namespace "e2e-tests-projected-mwfhf" to be "success or failure"
May 19 12:10:43.801: INFO: Pod "pod-projected-configmaps-c39f88a9-99c9-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 19.058372ms
May 19 12:10:45.985: INFO: Pod "pod-projected-configmaps-c39f88a9-99c9-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20251549s
May 19 12:10:47.989: INFO: Pod "pod-projected-configmaps-c39f88a9-99c9-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.206114187s
STEP: Saw pod success
May 19 12:10:47.989: INFO: Pod "pod-projected-configmaps-c39f88a9-99c9-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 12:10:47.992: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-c39f88a9-99c9-11ea-abcb-0242ac110018 container projected-configmap-volume-test:
STEP: delete the pod
May 19 12:10:48.036: INFO: Waiting for pod pod-projected-configmaps-c39f88a9-99c9-11ea-abcb-0242ac110018 to disappear
May 19 12:10:48.063: INFO: Pod pod-projected-configmaps-c39f88a9-99c9-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 12:10:48.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mwfhf" for this suite.
May 19 12:10:54.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:10:54.127: INFO: namespace: e2e-tests-projected-mwfhf, resource: bindings, ignored listing per whitelist
May 19 12:10:54.161: INFO: namespace e2e-tests-projected-mwfhf deletion completed in 6.093830376s

• [SLOW TEST:10.474 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:10:54.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
May 19 12:11:00.292: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-c9dcb14a-99c9-11ea-abcb-0242ac110018", GenerateName:"", Namespace:"e2e-tests-pods-bltpg",
SelfLink:"/api/v1/namespaces/e2e-tests-pods-bltpg/pods/pod-submit-remove-c9dcb14a-99c9-11ea-abcb-0242ac110018", UID:"c9df3e7c-99c9-11ea-99e8-0242ac110002", ResourceVersion:"11401726", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725487054, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"238148853"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-tfc6x", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0028a4600), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), 
Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tfc6x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00287e8d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002756120), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00287e920)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00287e940)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00287e948), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), 
EnableServiceLinks:(*bool)(0xc00287e94c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725487054, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725487058, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725487058, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725487054, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"10.244.2.12", StartTime:(*v1.Time)(0xc0028ea4a0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0028ea4c0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", 
ContainerID:"containerd://b832aa93b51fe6a1a51b52356f0ab5417beec3a370c0575bab5ad5a5b73374d9"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 12:11:11.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-bltpg" for this suite.
May 19 12:11:17.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:11:17.854: INFO: namespace: e2e-tests-pods-bltpg, resource: bindings, ignored listing per whitelist
May 19 12:11:17.860: INFO: namespace e2e-tests-pods-bltpg deletion completed in 6.103580562s
• [SLOW TEST:23.699 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job
should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:11:17.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin May 19 12:11:17.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-lv2r9 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 19 12:11:21.680: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0519 12:11:21.611148 2902 log.go:172] (0xc00059a0b0) (0xc000918140) Create stream\nI0519 12:11:21.611224 2902 log.go:172] (0xc00059a0b0) (0xc000918140) Stream added, broadcasting: 1\nI0519 12:11:21.613722 2902 log.go:172] (0xc00059a0b0) Reply frame received for 1\nI0519 12:11:21.613791 2902 log.go:172] (0xc00059a0b0) (0xc0009181e0) Create stream\nI0519 12:11:21.613805 2902 log.go:172] (0xc00059a0b0) (0xc0009181e0) Stream added, broadcasting: 3\nI0519 12:11:21.614676 2902 log.go:172] (0xc00059a0b0) Reply frame received for 3\nI0519 12:11:21.614737 2902 log.go:172] (0xc00059a0b0) (0xc00089c5a0) Create stream\nI0519 12:11:21.614748 2902 log.go:172] (0xc00059a0b0) (0xc00089c5a0) Stream added, broadcasting: 5\nI0519 12:11:21.615535 2902 log.go:172] (0xc00059a0b0) Reply frame received for 5\nI0519 12:11:21.615573 2902 log.go:172] (0xc00059a0b0) (0xc00021b4a0) Create stream\nI0519 12:11:21.615583 2902 log.go:172] (0xc00059a0b0) (0xc00021b4a0) Stream added, broadcasting: 7\nI0519 12:11:21.616385 2902 log.go:172] (0xc00059a0b0) Reply frame received for 7\nI0519 12:11:21.616588 2902 log.go:172] (0xc0009181e0) (3) Writing data frame\nI0519 12:11:21.616729 2902 log.go:172] (0xc0009181e0) (3) Writing data frame\nI0519 12:11:21.617657 2902 log.go:172] (0xc00059a0b0) Data 
frame received for 5\nI0519 12:11:21.617677 2902 log.go:172] (0xc00089c5a0) (5) Data frame handling\nI0519 12:11:21.617693 2902 log.go:172] (0xc00089c5a0) (5) Data frame sent\nI0519 12:11:21.618396 2902 log.go:172] (0xc00059a0b0) Data frame received for 5\nI0519 12:11:21.618419 2902 log.go:172] (0xc00089c5a0) (5) Data frame handling\nI0519 12:11:21.618436 2902 log.go:172] (0xc00089c5a0) (5) Data frame sent\nI0519 12:11:21.655066 2902 log.go:172] (0xc00059a0b0) Data frame received for 5\nI0519 12:11:21.655104 2902 log.go:172] (0xc00059a0b0) Data frame received for 7\nI0519 12:11:21.655146 2902 log.go:172] (0xc00021b4a0) (7) Data frame handling\nI0519 12:11:21.655185 2902 log.go:172] (0xc00089c5a0) (5) Data frame handling\nI0519 12:11:21.655546 2902 log.go:172] (0xc00059a0b0) Data frame received for 1\nI0519 12:11:21.655578 2902 log.go:172] (0xc000918140) (1) Data frame handling\nI0519 12:11:21.655624 2902 log.go:172] (0xc000918140) (1) Data frame sent\nI0519 12:11:21.655656 2902 log.go:172] (0xc00059a0b0) (0xc000918140) Stream removed, broadcasting: 1\nI0519 12:11:21.655702 2902 log.go:172] (0xc00059a0b0) (0xc0009181e0) Stream removed, broadcasting: 3\nI0519 12:11:21.655798 2902 log.go:172] (0xc00059a0b0) (0xc000918140) Stream removed, broadcasting: 1\nI0519 12:11:21.655848 2902 log.go:172] (0xc00059a0b0) (0xc0009181e0) Stream removed, broadcasting: 3\nI0519 12:11:21.655871 2902 log.go:172] (0xc00059a0b0) (0xc00089c5a0) Stream removed, broadcasting: 5\nI0519 12:11:21.655890 2902 log.go:172] (0xc00059a0b0) (0xc00021b4a0) Stream removed, broadcasting: 7\nI0519 12:11:21.656051 2902 log.go:172] (0xc00059a0b0) Go away received\n" May 19 12:11:21.680: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:11:23.687: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lv2r9" for this suite.
May 19 12:11:33.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:11:33.764: INFO: namespace: e2e-tests-kubectl-lv2r9, resource: bindings, ignored listing per whitelist
May 19 12:11:33.782: INFO: namespace e2e-tests-kubectl-lv2r9 deletion completed in 10.090086189s
• [SLOW TEST:15.922 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run --rm job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:11:33.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
May 19 12:11:33.947: INFO: Waiting up to 5m0s for pod "downward-api-e1854ea6-99c9-11ea-abcb-0242ac110018" in namespace "e2e-tests-downward-api-pr9cl" to be "success or failure"
May 19
12:11:33.950: INFO: Pod "downward-api-e1854ea6-99c9-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.177112ms May 19 12:11:35.954: INFO: Pod "downward-api-e1854ea6-99c9-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007140652s May 19 12:11:37.974: INFO: Pod "downward-api-e1854ea6-99c9-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027673595s STEP: Saw pod success May 19 12:11:37.975: INFO: Pod "downward-api-e1854ea6-99c9-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 12:11:37.978: INFO: Trying to get logs from node hunter-worker2 pod downward-api-e1854ea6-99c9-11ea-abcb-0242ac110018 container dapi-container: STEP: delete the pod May 19 12:11:38.001: INFO: Waiting for pod downward-api-e1854ea6-99c9-11ea-abcb-0242ac110018 to disappear May 19 12:11:38.005: INFO: Pod downward-api-e1854ea6-99c9-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:11:38.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-pr9cl" for this suite. 
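The Pending → Pending → Succeeded sequence above is the framework's generic "success or failure" wait: poll the pod's phase until it reaches `Succeeded` or `Failed`, or a 5m timeout expires. A minimal sketch of that polling pattern, where `get_phase` is a hypothetical stand-in for fetching the pod from the API server (not the framework's real Go signature):

```python
import time

def wait_for_pod_success(get_phase, timeout=300.0, interval=2.0, sleep=time.sleep):
    """Poll get_phase() until the pod is Succeeded (return) or it
    Fails / times out (raise). get_phase stands in for a GET of the
    pod from the API server."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase == "Succeeded":
            return phase
        if phase == "Failed":
            raise RuntimeError("pod failed instead of succeeding")
        sleep(interval)  # roughly the ~2s gaps between the INFO lines above
    raise TimeoutError("pod did not reach Succeeded in time")

# Simulate the Pending -> Pending -> Succeeded sequence from the log.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_success(lambda: next(phases), sleep=lambda _: None)
```

The injected `sleep` makes the loop testable without real delays; the production loop simply sleeps between polls.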
May 19 12:11:44.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:11:44.062: INFO: namespace: e2e-tests-downward-api-pr9cl, resource: bindings, ignored listing per whitelist
May 19 12:11:44.097: INFO: namespace e2e-tests-downward-api-pr9cl deletion completed in 6.089134103s
• [SLOW TEST:10.315 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:11:44.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 19 12:11:44.169: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 12:11:45.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-mwtkg" for this
suite.
May 19 12:11:51.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:11:51.279: INFO: namespace: e2e-tests-custom-resource-definition-mwtkg, resource: bindings, ignored listing per whitelist
May 19 12:11:51.324: INFO: namespace e2e-tests-custom-resource-definition-mwtkg deletion completed in 6.092793377s
• [SLOW TEST:7.227 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
Simple CustomResourceDefinition
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:11:51.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-ebf57ed0-99c9-11ea-abcb-0242ac110018
STEP: Creating a pod to test consume secrets
May 19 12:11:51.455: INFO: Waiting up to 5m0s for pod "pod-secrets-ebf63354-99c9-11ea-abcb-0242ac110018" in
namespace "e2e-tests-secrets-8qwk5" to be "success or failure" May 19 12:11:51.470: INFO: Pod "pod-secrets-ebf63354-99c9-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 15.35081ms May 19 12:11:53.474: INFO: Pod "pod-secrets-ebf63354-99c9-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019101719s May 19 12:11:55.478: INFO: Pod "pod-secrets-ebf63354-99c9-11ea-abcb-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.023001615s May 19 12:11:57.482: INFO: Pod "pod-secrets-ebf63354-99c9-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026702694s STEP: Saw pod success May 19 12:11:57.482: INFO: Pod "pod-secrets-ebf63354-99c9-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 12:11:57.484: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-ebf63354-99c9-11ea-abcb-0242ac110018 container secret-volume-test: STEP: delete the pod May 19 12:11:57.503: INFO: Waiting for pod pod-secrets-ebf63354-99c9-11ea-abcb-0242ac110018 to disappear May 19 12:11:57.507: INFO: Pod pod-secrets-ebf63354-99c9-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:11:57.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-8qwk5" for this suite. 
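The `defaultMode` this test sets on the secret volume is carried in the pod manifest as a plain integer, and JSON has no octal literals, so an octal file mode such as 0644 must be written in decimal (420). A small conversion helper illustrating the arithmetic:

```python
def default_mode_for_manifest(octal_str):
    """Convert an octal file mode like '644' into the decimal integer
    that a JSON manifest's defaultMode field expects (0644 -> 420)."""
    return int(octal_str, 8)

mode_0644 = default_mode_for_manifest("644")  # -> 420
mode_0400 = default_mode_for_manifest("400")  # -> 256
```

(YAML manifests do accept `0644`-style octal, but the decimal form is the one that survives a round-trip through the JSON API.)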
May 19 12:12:03.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:12:03.606: INFO: namespace: e2e-tests-secrets-8qwk5, resource: bindings, ignored listing per whitelist
May 19 12:12:03.618: INFO: namespace e2e-tests-secrets-8qwk5 deletion completed in 6.107636168s
• [SLOW TEST:12.294 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:12:03.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
May 19 12:12:08.297: INFO: Successfully updated pod "labelsupdatef343a6da-99c9-11ea-abcb-0242ac110018"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 12:12:12.325: INFO: Waiting up to 3m0s for all (but 0) nodes to
be ready
STEP: Destroying namespace "e2e-tests-downward-api-49qrn" for this suite.
May 19 12:12:34.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:12:34.389: INFO: namespace: e2e-tests-downward-api-49qrn, resource: bindings, ignored listing per whitelist
May 19 12:12:34.447: INFO: namespace e2e-tests-downward-api-49qrn deletion completed in 22.118373356s
• [SLOW TEST:30.829 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:12:34.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-05a295d3-99ca-11ea-abcb-0242ac110018
STEP: Creating a pod to test consume configMaps
May 19 12:12:34.565: INFO: Waiting up to 5m0s for pod "pod-configmaps-05a4ac40-99ca-11ea-abcb-0242ac110018" in namespace "e2e-tests-configmap-wcb5r" to be "success or failure"
May 19 12:12:34.587: INFO: Pod "pod-configmaps-05a4ac40-99ca-11ea-abcb-0242ac110018":
Phase="Pending", Reason="", readiness=false. Elapsed: 22.038158ms May 19 12:12:36.730: INFO: Pod "pod-configmaps-05a4ac40-99ca-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165390057s May 19 12:12:38.733: INFO: Pod "pod-configmaps-05a4ac40-99ca-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.16821254s STEP: Saw pod success May 19 12:12:38.733: INFO: Pod "pod-configmaps-05a4ac40-99ca-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 12:12:38.735: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-05a4ac40-99ca-11ea-abcb-0242ac110018 container configmap-volume-test: STEP: delete the pod May 19 12:12:38.842: INFO: Waiting for pod pod-configmaps-05a4ac40-99ca-11ea-abcb-0242ac110018 to disappear May 19 12:12:38.850: INFO: Pod pod-configmaps-05a4ac40-99ca-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:12:38.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-wcb5r" for this suite. 
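Consuming a ConfigMap "with mappings" means the volume's `items` list selects specific data keys and assigns each its own file path, instead of projecting every key under its own name. A rough Python model of that projection (the kubelet does the real work; the key and path names below are hypothetical):

```python
def project_configmap(data, items):
    """Model a ConfigMap volume's 'items' list: only the listed keys are
    projected, each written to the file named by its 'path' field."""
    files = {}
    for item in items:
        key = item["key"]
        if key not in data:
            raise KeyError(f"configmap has no key {key!r}")
        files[item["path"]] = data[key]
    return files

data = {"data-1": "value-1", "data-2": "value-2"}
# Map only data-1, and rename it on disk; data-2 is not projected at all.
files = project_configmap(data, [{"key": "data-1", "path": "mapped/data-1"}])
```

The test's container then reads the mapped file and the framework compares the logged contents against the ConfigMap value.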
May 19 12:12:44.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:12:44.907: INFO: namespace: e2e-tests-configmap-wcb5r, resource: bindings, ignored listing per whitelist
May 19 12:12:44.928: INFO: namespace e2e-tests-configmap-wcb5r deletion completed in 6.075529422s
• [SLOW TEST:10.481 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:12:44.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 19 12:12:45.032: INFO: Creating ReplicaSet my-hostname-basic-0be6a44c-99ca-11ea-abcb-0242ac110018
May 19 12:12:45.048: INFO: Pod name my-hostname-basic-0be6a44c-99ca-11ea-abcb-0242ac110018: Found 0 pods out of 1
May 19 12:12:50.068: INFO: Pod name my-hostname-basic-0be6a44c-99ca-11ea-abcb-0242ac110018: Found 1 pods out of 1
May 19 12:12:50.068: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-0be6a44c-99ca-11ea-abcb-0242ac110018" is running
May 19 12:12:50.070: INFO: Pod
"my-hostname-basic-0be6a44c-99ca-11ea-abcb-0242ac110018-w8ckc" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 12:12:45 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 12:12:48 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 12:12:48 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 12:12:45 +0000 UTC Reason: Message:}]) May 19 12:12:50.070: INFO: Trying to dial the pod May 19 12:12:55.081: INFO: Controller my-hostname-basic-0be6a44c-99ca-11ea-abcb-0242ac110018: Got expected result from replica 1 [my-hostname-basic-0be6a44c-99ca-11ea-abcb-0242ac110018-w8ckc]: "my-hostname-basic-0be6a44c-99ca-11ea-abcb-0242ac110018-w8ckc", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:12:55.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-2ng7l" for this suite. 
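The "Got expected result from replica 1 ... 1 of 1 required successes" line reflects the test's core check: each replica runs a serve-hostname-style container that answers a dial with its own pod name, and the test counts a replica as a success only when the response matches. A sketch of that comparison, with hypothetical pod names and pre-collected responses standing in for the real HTTP dials:

```python
def count_replica_successes(responses, expected_pod_names):
    """Count replicas whose dialed response equals their own pod name,
    as the e2e check above does. 'responses' maps pod name -> the body
    returned when that pod was dialed."""
    return sum(1 for name in expected_pod_names if responses.get(name) == name)

pods = ["my-hostname-basic-w8ckc"]  # hypothetical pod name
successes = count_replica_successes(
    {"my-hostname-basic-w8ckc": "my-hostname-basic-w8ckc"}, pods
)
```

The test passes once `successes` equals the ReplicaSet's replica count (1 here).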
May 19 12:13:01.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:13:01.169: INFO: namespace: e2e-tests-replicaset-2ng7l, resource: bindings, ignored listing per whitelist
May 19 12:13:01.180: INFO: namespace e2e-tests-replicaset-2ng7l deletion completed in 6.096015182s
• [SLOW TEST:16.252 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:13:01.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-1598f2c4-99ca-11ea-abcb-0242ac110018
STEP: Creating configMap with name cm-test-opt-upd-1598f310-99ca-11ea-abcb-0242ac110018
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-1598f2c4-99ca-11ea-abcb-0242ac110018
STEP: Updating configmap cm-test-opt-upd-1598f310-99ca-11ea-abcb-0242ac110018
STEP: Creating configMap with name cm-test-opt-create-1598f333-99ca-11ea-abcb-0242ac110018
STEP: waiting to observe update in volume
[AfterEach]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 12:13:09.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-smrp2" for this suite.
May 19 12:13:31.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:13:31.476: INFO: namespace: e2e-tests-projected-smrp2, resource: bindings, ignored listing per whitelist
May 19 12:13:31.509: INFO: namespace e2e-tests-projected-smrp2 deletion completed in 22.080631162s
• [SLOW TEST:30.329 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:13:31.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-27ae6929-99ca-11ea-abcb-0242ac110018
STEP: Creating a pod to test consume secrets
May 19 12:13:31.699: INFO:
Waiting up to 5m0s for pod "pod-projected-secrets-27af9599-99ca-11ea-abcb-0242ac110018" in namespace "e2e-tests-projected-7dkcg" to be "success or failure" May 19 12:13:31.702: INFO: Pod "pod-projected-secrets-27af9599-99ca-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.932807ms May 19 12:13:33.708: INFO: Pod "pod-projected-secrets-27af9599-99ca-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009260991s May 19 12:13:35.712: INFO: Pod "pod-projected-secrets-27af9599-99ca-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013878654s STEP: Saw pod success May 19 12:13:35.713: INFO: Pod "pod-projected-secrets-27af9599-99ca-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 12:13:35.715: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-27af9599-99ca-11ea-abcb-0242ac110018 container projected-secret-volume-test: STEP: delete the pod May 19 12:13:35.748: INFO: Waiting for pod pod-projected-secrets-27af9599-99ca-11ea-abcb-0242ac110018 to disappear May 19 12:13:35.772: INFO: Pod pod-projected-secrets-27af9599-99ca-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:13:35.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-7dkcg" for this suite. 
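The secret being projected here carries its values base64-encoded under the manifest's `data` field; the kubelet decodes them when it writes the volume files that the test container then reads back. A small round-trip sketch of that encoding (the `data-1`/`value-1` names are illustrative):

```python
import base64

def encode_secret_data(plain):
    """Encode plaintext values the way a Secret manifest's 'data'
    field carries them: base64 strings keyed by name."""
    return {k: base64.b64encode(v.encode()).decode() for k, v in plain.items()}

def decode_secret_data(data):
    """Reverse the encoding, as the kubelet does when materialising
    the secret volume's files."""
    return {k: base64.b64decode(v).decode() for k, v in data.items()}

encoded = encode_secret_data({"data-1": "value-1"})
roundtrip = decode_secret_data(encoded)
```

(`stringData` exists as a write-only convenience field that accepts plaintext, but what the API stores and returns is the base64 `data` form.)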
May 19 12:13:41.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:13:41.816: INFO: namespace: e2e-tests-projected-7dkcg, resource: bindings, ignored listing per whitelist
May 19 12:13:41.877: INFO: namespace e2e-tests-projected-7dkcg deletion completed in 6.10116766s
• [SLOW TEST:10.367 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:13:41.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
May 19 12:13:41.986: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-jvt6p,SelfLink:/api/v1/namespaces/e2e-tests-watch-jvt6p/configmaps/e2e-watch-test-configmap-a,UID:2dd8c0d3-99ca-11ea-99e8-0242ac110002,ResourceVersion:11402314,Generation:0,CreationTimestamp:2020-05-19 12:13:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 19 12:13:41.986: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-jvt6p,SelfLink:/api/v1/namespaces/e2e-tests-watch-jvt6p/configmaps/e2e-watch-test-configmap-a,UID:2dd8c0d3-99ca-11ea-99e8-0242ac110002,ResourceVersion:11402314,Generation:0,CreationTimestamp:2020-05-19 12:13:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 19 12:13:51.994: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-jvt6p,SelfLink:/api/v1/namespaces/e2e-tests-watch-jvt6p/configmaps/e2e-watch-test-configmap-a,UID:2dd8c0d3-99ca-11ea-99e8-0242ac110002,ResourceVersion:11402334,Generation:0,CreationTimestamp:2020-05-19 12:13:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 19 12:13:51.994: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-jvt6p,SelfLink:/api/v1/namespaces/e2e-tests-watch-jvt6p/configmaps/e2e-watch-test-configmap-a,UID:2dd8c0d3-99ca-11ea-99e8-0242ac110002,ResourceVersion:11402334,Generation:0,CreationTimestamp:2020-05-19 12:13:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 19 12:14:02.002: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-jvt6p,SelfLink:/api/v1/namespaces/e2e-tests-watch-jvt6p/configmaps/e2e-watch-test-configmap-a,UID:2dd8c0d3-99ca-11ea-99e8-0242ac110002,ResourceVersion:11402354,Generation:0,CreationTimestamp:2020-05-19 12:13:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 19 12:14:02.002: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-jvt6p,SelfLink:/api/v1/namespaces/e2e-tests-watch-jvt6p/configmaps/e2e-watch-test-configmap-a,UID:2dd8c0d3-99ca-11ea-99e8-0242ac110002,ResourceVersion:11402354,Generation:0,CreationTimestamp:2020-05-19 12:13:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 19 12:14:12.008: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-jvt6p,SelfLink:/api/v1/namespaces/e2e-tests-watch-jvt6p/configmaps/e2e-watch-test-configmap-a,UID:2dd8c0d3-99ca-11ea-99e8-0242ac110002,ResourceVersion:11402374,Generation:0,CreationTimestamp:2020-05-19 12:13:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 19 12:14:12.008: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-jvt6p,SelfLink:/api/v1/namespaces/e2e-tests-watch-jvt6p/configmaps/e2e-watch-test-configmap-a,UID:2dd8c0d3-99ca-11ea-99e8-0242ac110002,ResourceVersion:11402374,Generation:0,CreationTimestamp:2020-05-19 12:13:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 19 12:14:22.015: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-jvt6p,SelfLink:/api/v1/namespaces/e2e-tests-watch-jvt6p/configmaps/e2e-watch-test-configmap-b,UID:45b43ac6-99ca-11ea-99e8-0242ac110002,ResourceVersion:11402394,Generation:0,CreationTimestamp:2020-05-19 12:14:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 19 12:14:22.015: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-jvt6p,SelfLink:/api/v1/namespaces/e2e-tests-watch-jvt6p/configmaps/e2e-watch-test-configmap-b,UID:45b43ac6-99ca-11ea-99e8-0242ac110002,ResourceVersion:11402394,Generation:0,CreationTimestamp:2020-05-19 12:14:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 19 12:14:32.021: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-jvt6p,SelfLink:/api/v1/namespaces/e2e-tests-watch-jvt6p/configmaps/e2e-watch-test-configmap-b,UID:45b43ac6-99ca-11ea-99e8-0242ac110002,ResourceVersion:11402414,Generation:0,CreationTimestamp:2020-05-19 12:14:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 19 12:14:32.021: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-jvt6p,SelfLink:/api/v1/namespaces/e2e-tests-watch-jvt6p/configmaps/e2e-watch-test-configmap-b,UID:45b43ac6-99ca-11ea-99e8-0242ac110002,ResourceVersion:11402414,Generation:0,CreationTimestamp:2020-05-19 12:14:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:14:42.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-jvt6p" for this suite. 
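The three watchers in the test above differ only in their label selectors (label A, label B, and A-or-B). That selection logic can be sketched without a cluster using plain dicts; the label key and values mirror the ones visible in the log, while the helper names are my own:

```python
# Cluster-free sketch of the label-selector logic behind the three watchers.
def matches(labels, selector):
    """Equality-based selector: every key=value pair must be present."""
    return all(labels.get(k) == v for k, v in selector.items())

def matches_in(labels, key, values):
    """Set-based selector, e.g. `key in (v1, v2)` - used by the A-or-B watch."""
    return labels.get(key) in values

watch_a = {"watch-this-configmap": "multiple-watchers-A"}
watch_b = {"watch-this-configmap": "multiple-watchers-B"}

configmap_a = {"watch-this-configmap": "multiple-watchers-A"}

# Watcher A observes configmap A; watcher B does not.
seen_by = [name for name, sel in [("A", watch_a), ("B", watch_b)]
           if matches(configmap_a, sel)]
```

The A-or-B watcher sees events for both configmaps because the set-based selector admits either label value.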
May 19 12:14:48.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:14:48.070: INFO: namespace: e2e-tests-watch-jvt6p, resource: bindings, ignored listing per whitelist May 19 12:14:48.114: INFO: namespace e2e-tests-watch-jvt6p deletion completed in 6.087451263s • [SLOW TEST:66.237 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:14:48.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 19 12:14:48.208: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5550ca30-99ca-11ea-abcb-0242ac110018" in namespace "e2e-tests-projected-qxwtk" to be "success or failure" May 19 12:14:48.221: INFO: Pod "downwardapi-volume-5550ca30-99ca-11ea-abcb-0242ac110018": Phase="Pending", 
Reason="", readiness=false. Elapsed: 13.170124ms May 19 12:14:50.271: INFO: Pod "downwardapi-volume-5550ca30-99ca-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062473451s May 19 12:14:52.391: INFO: Pod "downwardapi-volume-5550ca30-99ca-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.182941308s STEP: Saw pod success May 19 12:14:52.391: INFO: Pod "downwardapi-volume-5550ca30-99ca-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 12:14:52.395: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-5550ca30-99ca-11ea-abcb-0242ac110018 container client-container: STEP: delete the pod May 19 12:14:52.421: INFO: Waiting for pod downwardapi-volume-5550ca30-99ca-11ea-abcb-0242ac110018 to disappear May 19 12:14:52.423: INFO: Pod downwardapi-volume-5550ca30-99ca-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:14:52.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qxwtk" for this suite. 
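The downward API volume in the test above exposes the container's cpu limit through a resourceFieldRef. To my understanding, Kubernetes renders the quantity rounded up to the nearest whole multiple of the requested divisor; a hedged sketch of that conversion (function name and sample values are illustrative):

```python
import math

# Hedged sketch of how a downward API resourceFieldRef renders a cpu limit:
# the limit is divided by the divisor and rounded up to an integer.
def cpu_limit_as_string(limit_millicores, divisor_millicores):
    """Render a cpu limit the way the downward API volume file does."""
    return str(math.ceil(limit_millicores / divisor_millicores))

# A 1250m limit with the default divisor of one core is reported as "2";
# with a "1m" divisor the same limit is reported as "1250".
```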
May 19 12:14:58.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:14:58.488: INFO: namespace: e2e-tests-projected-qxwtk, resource: bindings, ignored listing per whitelist May 19 12:14:58.531: INFO: namespace e2e-tests-projected-qxwtk deletion completed in 6.105176605s • [SLOW TEST:10.417 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:14:58.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 19 12:14:58.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine 
--namespace=e2e-tests-kubectl-rvhg5' May 19 12:15:01.238: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 19 12:15:01.238: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 May 19 12:15:01.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-rvhg5' May 19 12:15:01.433: INFO: stderr: "" May 19 12:15:01.433: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:15:01.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-rvhg5" for this suite. 
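The deprecation warning in the stderr above points away from `kubectl run --generator=job/v1` toward `kubectl create` for jobs. The object that deprecated generator produced is, roughly, a batch/v1 Job whose pod template uses restartPolicy OnFailure. A sketch of that shape, with field values mirroring the command line in the log:

```python
# Rough shape of the Job created by the deprecated
# `kubectl run --generator=job/v1 --restart=OnFailure` invocation above.
# This is an illustrative reconstruction, not output captured from the run.
def nginx_job(name="e2e-test-nginx-job",
              image="docker.io/library/nginx:1.14-alpine"):
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": name},
        "spec": {
            "template": {
                "spec": {
                    "restartPolicy": "OnFailure",  # from --restart=OnFailure
                    "containers": [{"name": name, "image": image}],
                },
            },
        },
    }
```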
May 19 12:15:23.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:15:23.525: INFO: namespace: e2e-tests-kubectl-rvhg5, resource: bindings, ignored listing per whitelist May 19 12:15:23.526: INFO: namespace e2e-tests-kubectl-rvhg5 deletion completed in 22.088279503s • [SLOW TEST:24.994 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:15:23.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-6a7323de-99ca-11ea-abcb-0242ac110018 STEP: Creating a pod to test consume configMaps May 19 12:15:23.707: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6a7581a6-99ca-11ea-abcb-0242ac110018" in namespace "e2e-tests-projected-ks6c2" to be "success or 
failure" May 19 12:15:23.722: INFO: Pod "pod-projected-configmaps-6a7581a6-99ca-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.675669ms May 19 12:15:25.726: INFO: Pod "pod-projected-configmaps-6a7581a6-99ca-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01877642s May 19 12:15:27.730: INFO: Pod "pod-projected-configmaps-6a7581a6-99ca-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022869027s STEP: Saw pod success May 19 12:15:27.730: INFO: Pod "pod-projected-configmaps-6a7581a6-99ca-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 12:15:27.732: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-6a7581a6-99ca-11ea-abcb-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 19 12:15:27.817: INFO: Waiting for pod pod-projected-configmaps-6a7581a6-99ca-11ea-abcb-0242ac110018 to disappear May 19 12:15:27.831: INFO: Pod pod-projected-configmaps-6a7581a6-99ca-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:15:27.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-ks6c2" for this suite. 
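The "mappings as non-root" case above combines two things: a projected configMap source that remaps a key to a custom path, and a pod-level non-root securityContext. A minimal sketch of that pod spec; the uid, image, and key/path names are assumptions for illustration:

```python
# Sketch of the "mappings as non-root" pod: a projected configMap source
# remaps a key to a path, and the pod runs with a non-root uid.
# uid, image, and key/path names are illustrative, not from the log.
def nonroot_configmap_pod(configmap_name):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-projected-configmaps-example"},
        "spec": {
            "securityContext": {"runAsUser": 1000},  # non-root
            "restartPolicy": "Never",
            "containers": [{
                "name": "projected-configmap-volume-test",
                "image": "docker.io/library/busybox:1.29",
                "command": ["cat",
                            "/etc/projected-configmap-volume/path/to/data-2"],
                "volumeMounts": [{
                    "name": "cm-volume",
                    "mountPath": "/etc/projected-configmap-volume",
                }],
            }],
            "volumes": [{
                "name": "cm-volume",
                "projected": {"sources": [{
                    "configMap": {
                        "name": configmap_name,
                        # the "mapping": key data-2 appears at a custom path
                        "items": [{"key": "data-2", "path": "path/to/data-2"}],
                    },
                }]},
            }],
        },
    }
```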
May 19 12:15:33.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:15:33.909: INFO: namespace: e2e-tests-projected-ks6c2, resource: bindings, ignored listing per whitelist May 19 12:15:33.913: INFO: namespace e2e-tests-projected-ks6c2 deletion completed in 6.079052131s • [SLOW TEST:10.387 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:15:33.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted May 19 12:15:45.465: INFO: 5 pods remaining May 19 12:15:45.465: INFO: 5 pods has nil DeletionTimestamp May 19 12:15:45.465: INFO: STEP: Gathering metrics W0519 
12:15:50.251994 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 19 12:15:50.252: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:15:50.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-v4vw9" for this suite. 
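The garbage-collector property checked above is that a dependent is only collected once it has no remaining valid owners, so the pods that were given both RCs as owners survive the deletion of `simpletest-rc-to-be-deleted`. A cluster-free toy model of that rule (the survivor-selection helper is my own, not kube-controller-manager code):

```python
# Toy model of the GC rule the test verifies: dependents that still
# reference a live owner are kept when one of their owners is deleted.
dependents = {
    "pod-1": {"simpletest-rc-to-be-deleted"},
    "pod-2": {"simpletest-rc-to-be-deleted", "simpletest-rc-to-stay"},
}
live_owners = {"simpletest-rc-to-be-deleted", "simpletest-rc-to-stay"}

def after_deleting(owner, dependents, live_owners):
    """Return (sorted) dependents that survive deleting one owner."""
    remaining = live_owners - {owner}
    return sorted(name for name, refs in dependents.items()
                  if refs & remaining)  # keep if any live owner remains

survivors = after_deleting("simpletest-rc-to-be-deleted",
                           dependents, live_owners)
# pod-2 keeps its second owner and is not collected; pod-1 is.
```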
May 19 12:16:00.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:16:00.320: INFO: namespace: e2e-tests-gc-v4vw9, resource: bindings, ignored listing per whitelist May 19 12:16:00.401: INFO: namespace e2e-tests-gc-v4vw9 deletion completed in 10.146708865s • [SLOW TEST:26.488 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:16:00.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-5dcf STEP: Creating a pod to test atomic-volume-subpath May 19 12:16:00.673: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-5dcf" in namespace "e2e-tests-subpath-92zgv" to be "success or failure" May 19 12:16:00.676: INFO: Pod "pod-subpath-test-secret-5dcf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.64958ms May 19 12:16:02.962: INFO: Pod "pod-subpath-test-secret-5dcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289298598s May 19 12:16:04.965: INFO: Pod "pod-subpath-test-secret-5dcf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292667809s May 19 12:16:06.969: INFO: Pod "pod-subpath-test-secret-5dcf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.296664058s May 19 12:16:08.974: INFO: Pod "pod-subpath-test-secret-5dcf": Phase="Running", Reason="", readiness=false. Elapsed: 8.301046889s May 19 12:16:10.978: INFO: Pod "pod-subpath-test-secret-5dcf": Phase="Running", Reason="", readiness=false. Elapsed: 10.305847567s May 19 12:16:12.982: INFO: Pod "pod-subpath-test-secret-5dcf": Phase="Running", Reason="", readiness=false. Elapsed: 12.309666322s May 19 12:16:14.986: INFO: Pod "pod-subpath-test-secret-5dcf": Phase="Running", Reason="", readiness=false. Elapsed: 14.313508139s May 19 12:16:16.991: INFO: Pod "pod-subpath-test-secret-5dcf": Phase="Running", Reason="", readiness=false. Elapsed: 16.318210949s May 19 12:16:18.994: INFO: Pod "pod-subpath-test-secret-5dcf": Phase="Running", Reason="", readiness=false. Elapsed: 18.321730979s May 19 12:16:20.999: INFO: Pod "pod-subpath-test-secret-5dcf": Phase="Running", Reason="", readiness=false. Elapsed: 20.326202307s May 19 12:16:23.004: INFO: Pod "pod-subpath-test-secret-5dcf": Phase="Running", Reason="", readiness=false. Elapsed: 22.33120722s May 19 12:16:25.007: INFO: Pod "pod-subpath-test-secret-5dcf": Phase="Running", Reason="", readiness=false. Elapsed: 24.334246702s May 19 12:16:27.011: INFO: Pod "pod-subpath-test-secret-5dcf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.33832508s STEP: Saw pod success May 19 12:16:27.011: INFO: Pod "pod-subpath-test-secret-5dcf" satisfied condition "success or failure" May 19 12:16:27.014: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-secret-5dcf container test-container-subpath-secret-5dcf: STEP: delete the pod May 19 12:16:27.187: INFO: Waiting for pod pod-subpath-test-secret-5dcf to disappear May 19 12:16:27.298: INFO: Pod pod-subpath-test-secret-5dcf no longer exists STEP: Deleting pod pod-subpath-test-secret-5dcf May 19 12:16:27.298: INFO: Deleting pod "pod-subpath-test-secret-5dcf" in namespace "e2e-tests-subpath-92zgv" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:16:27.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-92zgv" for this suite. May 19 12:16:35.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:16:35.390: INFO: namespace: e2e-tests-subpath-92zgv, resource: bindings, ignored listing per whitelist May 19 12:16:35.413: INFO: namespace e2e-tests-subpath-92zgv deletion completed in 8.090262866s • [SLOW TEST:35.011 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:16:35.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args May 19 12:16:35.810: INFO: Waiting up to 5m0s for pod "var-expansion-9573aad8-99ca-11ea-abcb-0242ac110018" in namespace "e2e-tests-var-expansion-nrr4h" to be "success or failure" May 19 12:16:35.883: INFO: Pod "var-expansion-9573aad8-99ca-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 73.049057ms May 19 12:16:38.051: INFO: Pod "var-expansion-9573aad8-99ca-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.240552443s May 19 12:16:40.200: INFO: Pod "var-expansion-9573aad8-99ca-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.38953354s STEP: Saw pod success May 19 12:16:40.200: INFO: Pod "var-expansion-9573aad8-99ca-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 12:16:40.210: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-9573aad8-99ca-11ea-abcb-0242ac110018 container dapi-container: STEP: delete the pod May 19 12:16:40.739: INFO: Waiting for pod var-expansion-9573aad8-99ca-11ea-abcb-0242ac110018 to disappear May 19 12:16:40.821: INFO: Pod var-expansion-9573aad8-99ca-11ea-abcb-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:16:40.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-nrr4h" for this suite. May 19 12:16:47.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:16:47.115: INFO: namespace: e2e-tests-var-expansion-nrr4h, resource: bindings, ignored listing per whitelist May 19 12:16:47.157: INFO: namespace e2e-tests-var-expansion-nrr4h deletion completed in 6.332325811s • [SLOW TEST:11.745 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:16:47.158: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions May 19 12:16:47.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 19 12:16:47.610: INFO: stderr: "" May 19 12:16:47.610: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:16:47.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-stg2j" for this suite. 
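The api-versions check above amounts to splitting kubectl's stdout into group/version strings and asserting that the core `v1` group is among them. A sketch of that parse, using an abridged copy of the stdout captured in the log:

```python
# Parse `kubectl api-versions` output and look for the core "v1" group.
# The sample stdout below is abridged from the log above.
stdout = (
    "admissionregistration.k8s.io/v1beta1\n"
    "apps/v1\n"
    "batch/v1\n"
    "networking.k8s.io/v1\n"
    "v1\n"
)

def available_versions(stdout):
    """Split kubectl api-versions output into a list of group/version strings."""
    return [line for line in stdout.splitlines() if line]

versions = available_versions(stdout)
```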
May 19 12:16:53.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:16:53.714: INFO: namespace: e2e-tests-kubectl-stg2j, resource: bindings, ignored listing per whitelist May 19 12:16:53.734: INFO: namespace e2e-tests-kubectl-stg2j deletion completed in 6.090647623s • [SLOW TEST:6.577 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:16:53.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 19 12:16:54.045: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"a0431278-99ca-11ea-99e8-0242ac110002", Controller:(*bool)(0xc000e1f13a), BlockOwnerDeletion:(*bool)(0xc000e1f13b)}} May 19 12:16:54.064: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"a04032fa-99ca-11ea-99e8-0242ac110002", 
Controller:(*bool)(0xc00203f752), BlockOwnerDeletion:(*bool)(0xc00203f753)}} May 19 12:16:54.083: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"a040c8ec-99ca-11ea-99e8-0242ac110002", Controller:(*bool)(0xc000b66212), BlockOwnerDeletion:(*bool)(0xc000b66213)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:16:59.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-76dr8" for this suite. May 19 12:17:05.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:17:05.181: INFO: namespace: e2e-tests-gc-76dr8, resource: bindings, ignored listing per whitelist May 19 12:17:05.222: INFO: namespace e2e-tests-gc-76dr8 deletion completed in 6.098121897s • [SLOW TEST:11.487 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:17:05.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 19 12:17:05.511: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a71c61c2-99ca-11ea-abcb-0242ac110018" in namespace "e2e-tests-downward-api-8q6tk" to be "success or failure" May 19 12:17:05.535: INFO: Pod "downwardapi-volume-a71c61c2-99ca-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 23.659538ms May 19 12:17:07.656: INFO: Pod "downwardapi-volume-a71c61c2-99ca-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145170187s May 19 12:17:09.660: INFO: Pod "downwardapi-volume-a71c61c2-99ca-11ea-abcb-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.149463593s May 19 12:17:11.664: INFO: Pod "downwardapi-volume-a71c61c2-99ca-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.153433969s STEP: Saw pod success May 19 12:17:11.664: INFO: Pod "downwardapi-volume-a71c61c2-99ca-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 12:17:11.667: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-a71c61c2-99ca-11ea-abcb-0242ac110018 container client-container: STEP: delete the pod May 19 12:17:11.758: INFO: Waiting for pod downwardapi-volume-a71c61c2-99ca-11ea-abcb-0242ac110018 to disappear May 19 12:17:11.774: INFO: Pod downwardapi-volume-a71c61c2-99ca-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:17:11.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-8q6tk" for this suite. 
May 19 12:17:17.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:17:17.931: INFO: namespace: e2e-tests-downward-api-8q6tk, resource: bindings, ignored listing per whitelist May 19 12:17:17.935: INFO: namespace e2e-tests-downward-api-8q6tk deletion completed in 6.12372227s • [SLOW TEST:12.713 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:17:17.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults May 19 12:17:18.090: INFO: Waiting up to 5m0s for pod "client-containers-aea57dad-99ca-11ea-abcb-0242ac110018" in namespace "e2e-tests-containers-mmjjw" to be "success or failure" May 19 12:17:18.096: INFO: Pod "client-containers-aea57dad-99ca-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.69823ms May 19 12:17:20.442: INFO: Pod "client-containers-aea57dad-99ca-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.351547135s May 19 12:17:22.446: INFO: Pod "client-containers-aea57dad-99ca-11ea-abcb-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.355771836s May 19 12:17:24.450: INFO: Pod "client-containers-aea57dad-99ca-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.360288709s STEP: Saw pod success May 19 12:17:24.450: INFO: Pod "client-containers-aea57dad-99ca-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 12:17:24.453: INFO: Trying to get logs from node hunter-worker2 pod client-containers-aea57dad-99ca-11ea-abcb-0242ac110018 container test-container: STEP: delete the pod May 19 12:17:24.479: INFO: Waiting for pod client-containers-aea57dad-99ca-11ea-abcb-0242ac110018 to disappear May 19 12:17:24.485: INFO: Pod client-containers-aea57dad-99ca-11ea-abcb-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:17:24.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-mmjjw" for this suite. 
May 19 12:17:30.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:17:30.595: INFO: namespace: e2e-tests-containers-mmjjw, resource: bindings, ignored listing per whitelist May 19 12:17:30.619: INFO: namespace e2e-tests-containers-mmjjw deletion completed in 6.130316551s • [SLOW TEST:12.684 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:17:30.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-b6461dcd-99ca-11ea-abcb-0242ac110018 STEP: Creating a pod to test consume secrets May 19 12:17:30.958: INFO: Waiting up to 5m0s for pod "pod-secrets-b646aa71-99ca-11ea-abcb-0242ac110018" in namespace "e2e-tests-secrets-4wbgm" to be "success or failure" May 19 12:17:30.977: INFO: Pod "pod-secrets-b646aa71-99ca-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.627241ms May 19 12:17:33.040: INFO: Pod "pod-secrets-b646aa71-99ca-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081298438s May 19 12:17:35.043: INFO: Pod "pod-secrets-b646aa71-99ca-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.084895481s STEP: Saw pod success May 19 12:17:35.043: INFO: Pod "pod-secrets-b646aa71-99ca-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 12:17:35.046: INFO: Trying to get logs from node hunter-worker pod pod-secrets-b646aa71-99ca-11ea-abcb-0242ac110018 container secret-volume-test: STEP: delete the pod May 19 12:17:35.105: INFO: Waiting for pod pod-secrets-b646aa71-99ca-11ea-abcb-0242ac110018 to disappear May 19 12:17:35.207: INFO: Pod pod-secrets-b646aa71-99ca-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:17:35.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-4wbgm" for this suite. 
May 19 12:17:41.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:17:41.254: INFO: namespace: e2e-tests-secrets-4wbgm, resource: bindings, ignored listing per whitelist May 19 12:17:41.300: INFO: namespace e2e-tests-secrets-4wbgm deletion completed in 6.087312127s • [SLOW TEST:10.681 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:17:41.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 19 12:17:41.468: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-hq49t,SelfLink:/api/v1/namespaces/e2e-tests-watch-hq49t/configmaps/e2e-watch-test-resource-version,UID:bc8b15fd-99ca-11ea-99e8-0242ac110002,ResourceVersion:11403246,Generation:0,CreationTimestamp:2020-05-19 12:17:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 19 12:17:41.468: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-hq49t,SelfLink:/api/v1/namespaces/e2e-tests-watch-hq49t/configmaps/e2e-watch-test-resource-version,UID:bc8b15fd-99ca-11ea-99e8-0242ac110002,ResourceVersion:11403247,Generation:0,CreationTimestamp:2020-05-19 12:17:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:17:41.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-hq49t" for this suite. 
May 19 12:17:47.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:17:47.525: INFO: namespace: e2e-tests-watch-hq49t, resource: bindings, ignored listing per whitelist May 19 12:17:47.548: INFO: namespace e2e-tests-watch-hq49t deletion completed in 6.076534849s • [SLOW TEST:6.248 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:17:47.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-v5hbz in namespace e2e-tests-proxy-wdhdf I0519 12:17:47.690495 6 runners.go:184] Created replication controller with name: proxy-service-v5hbz, namespace: e2e-tests-proxy-wdhdf, replica count: 1 I0519 12:17:48.740868 6 runners.go:184] proxy-service-v5hbz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 12:17:49.741055 6 runners.go:184] proxy-service-v5hbz Pods: 1 out of 1 created, 0 running, 1 pending, 0 
waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 12:17:50.741423 6 runners.go:184] proxy-service-v5hbz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 12:17:51.741623 6 runners.go:184] proxy-service-v5hbz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0519 12:17:52.741821 6 runners.go:184] proxy-service-v5hbz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0519 12:17:53.741994 6 runners.go:184] proxy-service-v5hbz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0519 12:17:54.742187 6 runners.go:184] proxy-service-v5hbz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0519 12:17:55.742345 6 runners.go:184] proxy-service-v5hbz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0519 12:17:56.742485 6 runners.go:184] proxy-service-v5hbz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0519 12:17:57.742671 6 runners.go:184] proxy-service-v5hbz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0519 12:17:58.742865 6 runners.go:184] proxy-service-v5hbz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0519 12:17:59.743071 6 runners.go:184] proxy-service-v5hbz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0519 12:18:00.743249 6 runners.go:184] proxy-service-v5hbz Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 
unknown, 0 runningButNotReady May 19 12:18:00.746: INFO: setup took 13.089683681s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 19 12:18:00.751: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-wdhdf/pods/proxy-service-v5hbz-cgprs:160/proxy/: foo (200; 4.826637ms) May 19 12:18:00.751: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-wdhdf/pods/http:proxy-service-v5hbz-cgprs:160/proxy/: foo (200; 4.9504ms) May 19 12:18:00.751: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-wdhdf/pods/http:proxy-service-v5hbz-cgprs:1080/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 19 12:18:18.423: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 19 12:18:18.427: INFO: Pod pod-with-poststart-exec-hook still exists May 19 12:18:20.427: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 19 12:18:20.430: INFO: Pod pod-with-poststart-exec-hook still exists May 19 12:18:22.427: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 19 12:18:22.430: INFO: Pod pod-with-poststart-exec-hook still exists May 19 12:18:24.427: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 19 12:18:24.496: INFO: Pod pod-with-poststart-exec-hook still exists May 19 12:18:26.427: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 19 12:18:26.508: INFO: Pod 
pod-with-poststart-exec-hook still exists May 19 12:18:28.427: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 19 12:18:28.431: INFO: Pod pod-with-poststart-exec-hook still exists May 19 12:18:30.427: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 19 12:18:30.431: INFO: Pod pod-with-poststart-exec-hook still exists May 19 12:18:32.427: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 19 12:18:32.432: INFO: Pod pod-with-poststart-exec-hook still exists May 19 12:18:34.427: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 19 12:18:34.430: INFO: Pod pod-with-poststart-exec-hook still exists May 19 12:18:36.427: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 19 12:18:36.431: INFO: Pod pod-with-poststart-exec-hook still exists May 19 12:18:38.427: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 19 12:18:38.431: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:18:38.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-ls5xz" for this suite. 
May 19 12:19:00.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:19:00.557: INFO: namespace: e2e-tests-container-lifecycle-hook-ls5xz, resource: bindings, ignored listing per whitelist May 19 12:19:00.591: INFO: namespace e2e-tests-container-lifecycle-hook-ls5xz deletion completed in 22.155934456s • [SLOW TEST:50.292 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:19:00.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-ec18597e-99ca-11ea-abcb-0242ac110018 STEP: Creating a pod to test consume configMaps May 19 12:19:01.397: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ec1f3be4-99ca-11ea-abcb-0242ac110018" in namespace "e2e-tests-projected-k86pk" to be "success or failure" May 19 
12:19:01.411: INFO: Pod "pod-projected-configmaps-ec1f3be4-99ca-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 13.879253ms May 19 12:19:04.256: INFO: Pod "pod-projected-configmaps-ec1f3be4-99ca-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.859572404s May 19 12:19:06.260: INFO: Pod "pod-projected-configmaps-ec1f3be4-99ca-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.863000163s May 19 12:19:08.264: INFO: Pod "pod-projected-configmaps-ec1f3be4-99ca-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.867005634s May 19 12:19:10.280: INFO: Pod "pod-projected-configmaps-ec1f3be4-99ca-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.882844154s May 19 12:19:12.284: INFO: Pod "pod-projected-configmaps-ec1f3be4-99ca-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.886710048s STEP: Saw pod success May 19 12:19:12.284: INFO: Pod "pod-projected-configmaps-ec1f3be4-99ca-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 12:19:12.287: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-ec1f3be4-99ca-11ea-abcb-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 19 12:19:13.787: INFO: Waiting for pod pod-projected-configmaps-ec1f3be4-99ca-11ea-abcb-0242ac110018 to disappear May 19 12:19:14.119: INFO: Pod pod-projected-configmaps-ec1f3be4-99ca-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:19:14.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-k86pk" for this suite. 
May 19 12:19:24.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:19:25.523: INFO: namespace: e2e-tests-projected-k86pk, resource: bindings, ignored listing per whitelist May 19 12:19:25.570: INFO: namespace e2e-tests-projected-k86pk deletion completed in 11.446211464s • [SLOW TEST:24.978 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:19:25.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 19 12:19:26.066: INFO: Waiting up to 5m0s for pod "downwardapi-volume-faeea6ca-99ca-11ea-abcb-0242ac110018" in namespace "e2e-tests-projected-m86rv" to be "success or failure" May 19 12:19:26.311: INFO: Pod "downwardapi-volume-faeea6ca-99ca-11ea-abcb-0242ac110018": 
Phase="Pending", Reason="", readiness=false. Elapsed: 244.750681ms May 19 12:19:28.622: INFO: Pod "downwardapi-volume-faeea6ca-99ca-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.555268075s May 19 12:19:30.626: INFO: Pod "downwardapi-volume-faeea6ca-99ca-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.559943372s May 19 12:19:32.629: INFO: Pod "downwardapi-volume-faeea6ca-99ca-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.562658906s May 19 12:19:34.658: INFO: Pod "downwardapi-volume-faeea6ca-99ca-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.591585923s STEP: Saw pod success May 19 12:19:34.658: INFO: Pod "downwardapi-volume-faeea6ca-99ca-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 12:19:34.661: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-faeea6ca-99ca-11ea-abcb-0242ac110018 container client-container: STEP: delete the pod May 19 12:19:35.137: INFO: Waiting for pod downwardapi-volume-faeea6ca-99ca-11ea-abcb-0242ac110018 to disappear May 19 12:19:35.484: INFO: Pod downwardapi-volume-faeea6ca-99ca-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:19:35.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-m86rv" for this suite. 
May 19 12:19:43.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:19:43.798: INFO: namespace: e2e-tests-projected-m86rv, resource: bindings, ignored listing per whitelist May 19 12:19:43.821: INFO: namespace e2e-tests-projected-m86rv deletion completed in 8.333611745s • [SLOW TEST:18.251 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:19:43.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition May 19 12:19:44.570: INFO: Waiting up to 5m0s for pod "var-expansion-05ede060-99cb-11ea-abcb-0242ac110018" in namespace "e2e-tests-var-expansion-p7kgp" to be "success or failure" May 19 12:19:44.599: INFO: Pod "var-expansion-05ede060-99cb-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 28.374469ms May 19 12:19:46.778: INFO: Pod "var-expansion-05ede060-99cb-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20787921s May 19 12:19:48.995: INFO: Pod "var-expansion-05ede060-99cb-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.424358986s May 19 12:19:51.437: INFO: Pod "var-expansion-05ede060-99cb-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.866372849s May 19 12:19:53.587: INFO: Pod "var-expansion-05ede060-99cb-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 9.017211899s May 19 12:19:55.706: INFO: Pod "var-expansion-05ede060-99cb-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.135895273s STEP: Saw pod success May 19 12:19:55.706: INFO: Pod "var-expansion-05ede060-99cb-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 12:19:55.708: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-05ede060-99cb-11ea-abcb-0242ac110018 container dapi-container: STEP: delete the pod May 19 12:19:56.269: INFO: Waiting for pod var-expansion-05ede060-99cb-11ea-abcb-0242ac110018 to disappear May 19 12:19:56.323: INFO: Pod var-expansion-05ede060-99cb-11ea-abcb-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:19:56.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-p7kgp" for this suite. 
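The var-expansion test that just passed verifies that `$(VAR)` references in a container's env values are composed from previously defined variables. Kubernetes implements this in a small forked expansion package; the function below is a simplified, hypothetical reimplementation of the observable rules ($(VAR) substitutes a known variable, `$$` escapes a literal `$`, and unknown references pass through unchanged):

```python
def expand_env(value, env):
    """Expand $(VAR) references in `value` against the mapping `env`,
    approximating Kubernetes env-var composition semantics."""
    out = []
    i = 0
    while i < len(value):
        if value.startswith("$$", i):
            out.append("$")          # "$$" escapes to a literal "$"
            i += 2
        elif value.startswith("$(", i):
            end = value.find(")", i)
            if end == -1:            # unterminated reference: copy verbatim
                out.append(value[i:])
                break
            name = value[i + 2:end]
            if name in env:
                out.append(env[name])      # known variable: substitute
            else:
                out.append(value[i:end + 1])  # unknown: leave as-is
            i = end + 1
        else:
            out.append(value[i])
            i += 1
    return "".join(out)
```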
May 19 12:20:08.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:20:08.950: INFO: namespace: e2e-tests-var-expansion-p7kgp, resource: bindings, ignored listing per whitelist May 19 12:20:09.212: INFO: namespace e2e-tests-var-expansion-p7kgp deletion completed in 12.714557931s • [SLOW TEST:25.391 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:20:09.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 19 12:20:10.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine 
--labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-86jzp' May 19 12:20:10.924: INFO: stderr: "" May 19 12:20:10.924: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created May 19 12:20:20.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-86jzp -o json' May 19 12:20:21.066: INFO: stderr: "" May 19 12:20:21.066: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-19T12:20:10Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-86jzp\",\n \"resourceVersion\": \"11403715\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-86jzp/pods/e2e-test-nginx-pod\",\n \"uid\": \"15a9d46d-99cb-11ea-99e8-0242ac110002\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-mwjw8\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": 
\"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-mwjw8\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-mwjw8\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-19T12:20:11Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-19T12:20:18Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-19T12:20:18Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-19T12:20:10Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://7a5c2eaa82292127ce68f1baa6b9e044f6be1ecc6484df07f0665aab07cbaa9b\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-19T12:20:17Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.4\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.32\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-19T12:20:11Z\"\n }\n}\n" STEP: replace the image in the pod May 19 12:20:21.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-86jzp' May 19 12:20:21.733: INFO: stderr: "" May 19 12:20:21.733: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 May 19 12:20:21.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-86jzp' May 19 12:20:32.826: INFO: stderr: "" May 19 12:20:32.826: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:20:32.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-86jzp" for this suite. May 19 12:20:38.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:20:38.892: INFO: namespace: e2e-tests-kubectl-86jzp, resource: bindings, ignored listing per whitelist May 19 12:20:38.966: INFO: namespace e2e-tests-kubectl-86jzp deletion completed in 6.137526494s • [SLOW TEST:29.754 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:20:38.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: 
Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-jfdtt May 19 12:20:43.128: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-jfdtt STEP: checking the pod's current state and verifying that restartCount is present May 19 12:20:43.131: INFO: Initial restart count of pod liveness-http is 0 May 19 12:21:05.223: INFO: Restart count of pod e2e-tests-container-probe-jfdtt/liveness-http is now 1 (22.091327804s elapsed) May 19 12:21:25.426: INFO: Restart count of pod e2e-tests-container-probe-jfdtt/liveness-http is now 2 (42.295117826s elapsed) May 19 12:21:45.603: INFO: Restart count of pod e2e-tests-container-probe-jfdtt/liveness-http is now 3 (1m2.471931705s elapsed) May 19 12:22:05.642: INFO: Restart count of pod e2e-tests-container-probe-jfdtt/liveness-http is now 4 (1m22.510570645s elapsed) May 19 12:23:11.873: INFO: Restart count of pod e2e-tests-container-probe-jfdtt/liveness-http is now 5 (2m28.742153898s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:23:11.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-jfdtt" for this suite. 
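The probe test above asserts that the restartCount it observes (0, 1, 2, 3, 4, 5 here) only ever grows. A small sketch of that invariant check, assuming the counts have already been collected from the pod's container status:

```python
def assert_monotonic_restarts(counts):
    """Verify that observed restartCount values never decrease,
    as the 'monotonically increasing restart count' test requires."""
    for prev, cur in zip(counts, counts[1:]):
        if cur < prev:
            raise AssertionError(f"restart count decreased: {prev} -> {cur}")
    return True
```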
May 19 12:23:17.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:23:18.003: INFO: namespace: e2e-tests-container-probe-jfdtt, resource: bindings, ignored listing per whitelist May 19 12:23:18.040: INFO: namespace e2e-tests-container-probe-jfdtt deletion completed in 6.104636774s • [SLOW TEST:159.073 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:23:18.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:23:18.203: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-tmcz6" for this suite. May 19 12:23:24.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:23:24.262: INFO: namespace: e2e-tests-kubelet-test-tmcz6, resource: bindings, ignored listing per whitelist May 19 12:23:24.277: INFO: namespace e2e-tests-kubelet-test-tmcz6 deletion completed in 6.068302844s • [SLOW TEST:6.237 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:23:24.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 19 12:23:24.369: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 4.948521ms)
May 19 12:23:24.373: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.751067ms)
May 19 12:23:24.376: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.079699ms)
May 19 12:23:24.379: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.57605ms)
May 19 12:23:24.381: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.44631ms)
May 19 12:23:24.384: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.39858ms)
May 19 12:23:24.386: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.511303ms)
May 19 12:23:24.389: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.144108ms)
May 19 12:23:24.391: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.180559ms)
May 19 12:23:24.393: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.372549ms)
May 19 12:23:24.396: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.248361ms)
May 19 12:23:24.398: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.007935ms)
May 19 12:23:24.399: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 1.791684ms)
May 19 12:23:24.402: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.041448ms)
May 19 12:23:24.404: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.124347ms)
May 19 12:23:24.428: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 24.559364ms)
May 19 12:23:24.432: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.292492ms)
May 19 12:23:24.435: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.247789ms)
May 19 12:23:24.438: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.462287ms)
May 19 12:23:24.441: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.727271ms)
[AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:23:24.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-mhns6" for this suite. May 19 12:23:30.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:23:30.484: INFO: namespace: e2e-tests-proxy-mhns6, resource: bindings, ignored listing per whitelist May 19 12:23:30.520: INFO: namespace e2e-tests-proxy-mhns6 deletion completed in 6.075897437s • [SLOW TEST:6.242 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:23:30.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-kw5qt May 19 12:23:34.605: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-kw5qt STEP: checking the pod's current state and verifying that restartCount is present May 19 12:23:34.608: INFO: Initial restart count of pod liveness-exec is 0 May 19 12:24:28.920: INFO: Restart count of pod e2e-tests-container-probe-kw5qt/liveness-exec is now 1 (54.311649836s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:24:28.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-kw5qt" for this suite. May 19 12:24:35.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:24:35.176: INFO: namespace: e2e-tests-container-probe-kw5qt, resource: bindings, ignored listing per whitelist May 19 12:24:35.191: INFO: namespace e2e-tests-container-probe-kw5qt deletion completed in 6.073327216s • [SLOW TEST:64.671 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client May 19 12:24:35.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 19 12:24:35.310: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-pwknh,SelfLink:/api/v1/namespaces/e2e-tests-watch-pwknh/configmaps/e2e-watch-test-watch-closed,UID:b340ccb3-99cb-11ea-99e8-0242ac110002,ResourceVersion:11404337,Generation:0,CreationTimestamp:2020-05-19 12:24:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 19 12:24:35.310: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-pwknh,SelfLink:/api/v1/namespaces/e2e-tests-watch-pwknh/configmaps/e2e-watch-test-watch-closed,UID:b340ccb3-99cb-11ea-99e8-0242ac110002,ResourceVersion:11404338,Generation:0,CreationTimestamp:2020-05-19 12:24:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap 
a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 19 12:24:35.347: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-pwknh,SelfLink:/api/v1/namespaces/e2e-tests-watch-pwknh/configmaps/e2e-watch-test-watch-closed,UID:b340ccb3-99cb-11ea-99e8-0242ac110002,ResourceVersion:11404339,Generation:0,CreationTimestamp:2020-05-19 12:24:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 19 12:24:35.347: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-pwknh,SelfLink:/api/v1/namespaces/e2e-tests-watch-pwknh/configmaps/e2e-watch-test-watch-closed,UID:b340ccb3-99cb-11ea-99e8-0242ac110002,ResourceVersion:11404340,Generation:0,CreationTimestamp:2020-05-19 12:24:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:24:35.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-pwknh" for this suite. 
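The watch test above closes its first watch after resourceVersion 11404338, opens a new watch from that version, and expects to see only the later MODIFIED (11404339) and DELETED (11404340) events. On a real cluster the apiserver performs this filtering when the client passes `resourceVersion` on the watch request; the sketch below only approximates that server-side behavior over hypothetical event dicts:

```python
def resume_watch(events, last_resource_version):
    """Return only the events newer than the last observed
    resourceVersion, approximating a watch restarted from that point.
    Note: resourceVersions are treated as ordered integers here purely
    for illustration; clients should treat them as opaque."""
    return [
        e for e in events
        if int(e["resourceVersion"]) > int(last_resource_version)
    ]
```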
May 19 12:24:41.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:24:41.417: INFO: namespace: e2e-tests-watch-pwknh, resource: bindings, ignored listing per whitelist May 19 12:24:41.431: INFO: namespace e2e-tests-watch-pwknh deletion completed in 6.080943786s • [SLOW TEST:6.240 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:24:41.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-b6fb685d-99cb-11ea-abcb-0242ac110018 STEP: Creating a pod to test consume configMaps May 19 12:24:41.588: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b6fd8d80-99cb-11ea-abcb-0242ac110018" in namespace "e2e-tests-projected-g2rx4" to be "success or failure" May 19 12:24:41.601: INFO: Pod "pod-projected-configmaps-b6fd8d80-99cb-11ea-abcb-0242ac110018": 
Phase="Pending", Reason="", readiness=false. Elapsed: 13.472371ms May 19 12:24:43.830: INFO: Pod "pod-projected-configmaps-b6fd8d80-99cb-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.242441792s May 19 12:24:45.834: INFO: Pod "pod-projected-configmaps-b6fd8d80-99cb-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.246153979s STEP: Saw pod success May 19 12:24:45.834: INFO: Pod "pod-projected-configmaps-b6fd8d80-99cb-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 12:24:45.836: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-b6fd8d80-99cb-11ea-abcb-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 19 12:24:45.878: INFO: Waiting for pod pod-projected-configmaps-b6fd8d80-99cb-11ea-abcb-0242ac110018 to disappear May 19 12:24:45.880: INFO: Pod pod-projected-configmaps-b6fd8d80-99cb-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:24:45.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-g2rx4" for this suite. 
May 19 12:24:51.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:24:51.961: INFO: namespace: e2e-tests-projected-g2rx4, resource: bindings, ignored listing per whitelist May 19 12:24:51.967: INFO: namespace e2e-tests-projected-g2rx4 deletion completed in 6.083834007s • [SLOW TEST:10.536 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:24:51.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-bd4be00f-99cb-11ea-abcb-0242ac110018 STEP: Creating a pod to test consume configMaps May 19 12:24:52.183: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bd4dee85-99cb-11ea-abcb-0242ac110018" in namespace "e2e-tests-projected-wdq7s" to be "success or failure" May 19 12:24:52.237: INFO: Pod "pod-projected-configmaps-bd4dee85-99cb-11ea-abcb-0242ac110018": 
Phase="Pending", Reason="", readiness=false. Elapsed: 54.595817ms May 19 12:24:54.310: INFO: Pod "pod-projected-configmaps-bd4dee85-99cb-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127121082s May 19 12:24:56.314: INFO: Pod "pod-projected-configmaps-bd4dee85-99cb-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.131471373s STEP: Saw pod success May 19 12:24:56.314: INFO: Pod "pod-projected-configmaps-bd4dee85-99cb-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 12:24:56.318: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-bd4dee85-99cb-11ea-abcb-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 19 12:24:56.341: INFO: Waiting for pod pod-projected-configmaps-bd4dee85-99cb-11ea-abcb-0242ac110018 to disappear May 19 12:24:56.356: INFO: Pod pod-projected-configmaps-bd4dee85-99cb-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:24:56.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wdq7s" for this suite. 
May 19 12:25:02.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:25:02.387: INFO: namespace: e2e-tests-projected-wdq7s, resource: bindings, ignored listing per whitelist May 19 12:25:02.435: INFO: namespace e2e-tests-projected-wdq7s deletion completed in 6.076408624s • [SLOW TEST:10.467 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:25:02.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-sldgk May 19 12:25:08.563: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-sldgk STEP: checking the pod's current state and verifying that restartCount is present May 19 12:25:08.564: INFO: Initial 
restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:29:09.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-sldgk" for this suite. May 19 12:29:15.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:29:15.662: INFO: namespace: e2e-tests-container-probe-sldgk, resource: bindings, ignored listing per whitelist May 19 12:29:15.681: INFO: namespace e2e-tests-container-probe-sldgk deletion completed in 6.073284908s • [SLOW TEST:253.245 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:29:15.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or 
deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 19 12:29:15.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-jjjcb' May 19 12:29:18.261: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 19 12:29:18.261: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 May 19 12:29:20.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-jjjcb' May 19 12:29:20.808: INFO: stderr: "" May 19 12:29:20.808: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:29:20.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-jjjcb" for this suite. 
May 19 12:29:43.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:29:43.163: INFO: namespace: e2e-tests-kubectl-jjjcb, resource: bindings, ignored listing per whitelist May 19 12:29:43.212: INFO: namespace e2e-tests-kubectl-jjjcb deletion completed in 22.23316115s • [SLOW TEST:27.531 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:29:43.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 19 12:29:43.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' May 19 12:29:43.396: INFO: stderr: "" May 19 12:29:43.397: INFO: stdout: "Client Version: version.Info{Major:\"1\", 
Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" May 19 12:29:43.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-m4684' May 19 12:29:43.671: INFO: stderr: "" May 19 12:29:43.671: INFO: stdout: "replicationcontroller/redis-master created\n" May 19 12:29:43.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-m4684' May 19 12:29:44.012: INFO: stderr: "" May 19 12:29:44.012: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. May 19 12:29:45.014: INFO: Selector matched 1 pods for map[app:redis] May 19 12:29:45.015: INFO: Found 0 / 1 May 19 12:29:46.064: INFO: Selector matched 1 pods for map[app:redis] May 19 12:29:46.064: INFO: Found 0 / 1 May 19 12:29:47.016: INFO: Selector matched 1 pods for map[app:redis] May 19 12:29:47.016: INFO: Found 0 / 1 May 19 12:29:48.016: INFO: Selector matched 1 pods for map[app:redis] May 19 12:29:48.016: INFO: Found 1 / 1 May 19 12:29:48.016: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 19 12:29:48.019: INFO: Selector matched 1 pods for map[app:redis] May 19 12:29:48.019: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 19 12:29:48.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-kjmrn --namespace=e2e-tests-kubectl-m4684' May 19 12:29:48.125: INFO: stderr: "" May 19 12:29:48.125: INFO: stdout: "Name: redis-master-kjmrn\nNamespace: e2e-tests-kubectl-m4684\nPriority: 0\nPriorityClassName: \nNode: hunter-worker/172.17.0.3\nStart Time: Tue, 19 May 2020 12:29:43 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.20\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://ae4caf0502a0789ded890d664028da10cacc0d8e7ce5c2418fafd1a15eef4bb3\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 19 May 2020 12:29:47 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-vvcbg (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-vvcbg:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-vvcbg\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5s default-scheduler Successfully assigned e2e-tests-kubectl-m4684/redis-master-kjmrn to hunter-worker\n Normal Pulled 3s kubelet, hunter-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, hunter-worker Created container\n Normal Started 1s kubelet, hunter-worker Started container\n" May 19 12:29:48.126: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-m4684' May 19 12:29:48.267: INFO: stderr: "" May 19 12:29:48.267: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-m4684\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: redis-master-kjmrn\n" May 19 12:29:48.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-m4684' May 19 12:29:48.368: INFO: stderr: "" May 19 12:29:48.369: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-m4684\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.111.11.127\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.20:6379\nSession Affinity: None\nEvents: \n" May 19 12:29:48.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' May 19 12:29:48.506: INFO: stderr: "" May 19 12:29:48.506: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:22:50 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type 
Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 19 May 2020 12:29:46 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 19 May 2020 12:29:46 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 19 May 2020 12:29:46 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 19 May 2020 12:29:46 +0000 Sun, 15 Mar 2020 18:23:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3c4716968dac483293a23c2100ad64a5\n System UUID: 683417f7-64ca-431d-b8ac-22e73b26255e\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 64d\n kube-system kindnet-l2xm6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 64d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 64d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 64d\n kube-system kube-proxy-mmppc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 64d\n kube-system 
kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 64d\n local-path-storage local-path-provisioner-77cfdd744c-q47vg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 64d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 19 12:29:48.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-m4684' May 19 12:29:48.620: INFO: stderr: "" May 19 12:29:48.620: INFO: stdout: "Name: e2e-tests-kubectl-m4684\nLabels: e2e-framework=kubectl\n e2e-run=0748d997-99be-11ea-abcb-0242ac110018\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:29:48.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-m4684" for this suite. 
May 19 12:30:12.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:30:12.679: INFO: namespace: e2e-tests-kubectl-m4684, resource: bindings, ignored listing per whitelist May 19 12:30:12.744: INFO: namespace e2e-tests-kubectl-m4684 deletion completed in 24.120622477s • [SLOW TEST:29.532 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:30:12.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-7c790cc0-99cc-11ea-abcb-0242ac110018 May 19 12:30:12.950: INFO: Pod name my-hostname-basic-7c790cc0-99cc-11ea-abcb-0242ac110018: Found 0 pods out of 1 May 19 12:30:17.955: INFO: Pod name my-hostname-basic-7c790cc0-99cc-11ea-abcb-0242ac110018: Found 1 pods out of 1 May 19 12:30:17.955: INFO: Ensuring all pods 
for ReplicationController "my-hostname-basic-7c790cc0-99cc-11ea-abcb-0242ac110018" are running May 19 12:30:17.958: INFO: Pod "my-hostname-basic-7c790cc0-99cc-11ea-abcb-0242ac110018-wcq7w" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 12:30:13 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 12:30:16 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 12:30:16 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 12:30:12 +0000 UTC Reason: Message:}]) May 19 12:30:17.958: INFO: Trying to dial the pod May 19 12:30:22.970: INFO: Controller my-hostname-basic-7c790cc0-99cc-11ea-abcb-0242ac110018: Got expected result from replica 1 [my-hostname-basic-7c790cc0-99cc-11ea-abcb-0242ac110018-wcq7w]: "my-hostname-basic-7c790cc0-99cc-11ea-abcb-0242ac110018-wcq7w", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:30:22.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-g4p2g" for this suite. 
May 19 12:30:28.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:30:29.005: INFO: namespace: e2e-tests-replication-controller-g4p2g, resource: bindings, ignored listing per whitelist May 19 12:30:29.070: INFO: namespace e2e-tests-replication-controller-g4p2g deletion completed in 6.096155005s • [SLOW TEST:16.325 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:30:29.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 19 12:30:29.155: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 19 12:30:29.189: INFO: Pod name sample-pod: Found 0 pods out of 1 May 19 12:30:34.194: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 19 12:30:34.194: INFO: Creating deployment "test-rolling-update-deployment" May 19 
12:30:34.199: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 19 12:30:34.276: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 19 12:30:36.570: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 19 12:30:36.573: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725488234, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725488234, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725488234, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725488234, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 12:30:38.577: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725488234, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725488234, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725488234, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725488234, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 12:30:40.578: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 19 12:30:40.587: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-2rttp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2rttp/deployments/test-rolling-update-deployment,UID:892bf335-99cc-11ea-99e8-0242ac110002,ResourceVersion:11405260,Generation:1,CreationTimestamp:2020-05-19 12:30:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] 
map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-19 12:30:34 +0000 UTC 2020-05-19 12:30:34 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-19 12:30:39 +0000 UTC 2020-05-19 12:30:34 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 19 12:30:40.590: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-2rttp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2rttp/replicasets/test-rolling-update-deployment-75db98fb4c,UID:89391eae-99cc-11ea-99e8-0242ac110002,ResourceVersion:11405251,Generation:1,CreationTimestamp:2020-05-19 12:30:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 892bf335-99cc-11ea-99e8-0242ac110002 0xc001b1bbf7 0xc001b1bbf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 19 12:30:40.590: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 19 12:30:40.590: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-2rttp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2rttp/replicasets/test-rolling-update-controller,UID:862b187a-99cc-11ea-99e8-0242ac110002,ResourceVersion:11405259,Generation:2,CreationTimestamp:2020-05-19 12:30:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 892bf335-99cc-11ea-99e8-0242ac110002 0xc001b1bab7 0xc001b1bab8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 19 12:30:40.592: INFO: Pod "test-rolling-update-deployment-75db98fb4c-bnt28" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-bnt28,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-2rttp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2rttp/pods/test-rolling-update-deployment-75db98fb4c-bnt28,UID:8939bf70-99cc-11ea-99e8-0242ac110002,ResourceVersion:11405250,Generation:0,CreationTimestamp:2020-05-19 12:30:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 89391eae-99cc-11ea-99e8-0242ac110002 0xc000803b27 0xc000803b28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-txqmm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-txqmm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-txqmm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ae6040} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ae6060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 12:30:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 12:30:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 12:30:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 12:30:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.37,StartTime:2020-05-19 12:30:34 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-19 12:30:38 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://be2ef3ae78c82d1e0f12e251a97257e708b9df25c082b7c0dbe358838af87773}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:30:40.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-2rttp" 
for this suite. May 19 12:30:46.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:30:46.695: INFO: namespace: e2e-tests-deployment-2rttp, resource: bindings, ignored listing per whitelist May 19 12:30:46.725: INFO: namespace e2e-tests-deployment-2rttp deletion completed in 6.129516889s • [SLOW TEST:17.655 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:30:46.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 19 12:30:53.893: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:30:54.923: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-gvbf5" for this suite. May 19 12:31:16.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:31:16.986: INFO: namespace: e2e-tests-replicaset-gvbf5, resource: bindings, ignored listing per whitelist May 19 12:31:17.007: INFO: namespace e2e-tests-replicaset-gvbf5 deletion completed in 22.080535151s • [SLOW TEST:30.282 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:31:17.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-a2beee50-99cc-11ea-abcb-0242ac110018 STEP: Creating a pod to test consume configMaps May 19 12:31:17.126: INFO: Waiting up to 5m0s for pod "pod-configmaps-a2c0688d-99cc-11ea-abcb-0242ac110018" in namespace "e2e-tests-configmap-tjg96" to be "success or failure" May 19 12:31:17.130: INFO: Pod 
"pod-configmaps-a2c0688d-99cc-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.354616ms May 19 12:31:19.166: INFO: Pod "pod-configmaps-a2c0688d-99cc-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039313141s May 19 12:31:21.170: INFO: Pod "pod-configmaps-a2c0688d-99cc-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043645058s STEP: Saw pod success May 19 12:31:21.170: INFO: Pod "pod-configmaps-a2c0688d-99cc-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 12:31:21.174: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-a2c0688d-99cc-11ea-abcb-0242ac110018 container configmap-volume-test: STEP: delete the pod May 19 12:31:21.239: INFO: Waiting for pod pod-configmaps-a2c0688d-99cc-11ea-abcb-0242ac110018 to disappear May 19 12:31:21.251: INFO: Pod pod-configmaps-a2c0688d-99cc-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:31:21.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-tjg96" for this suite. 
May 19 12:31:27.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:31:27.318: INFO: namespace: e2e-tests-configmap-tjg96, resource: bindings, ignored listing per whitelist May 19 12:31:27.363: INFO: namespace e2e-tests-configmap-tjg96 deletion completed in 6.108246472s • [SLOW TEST:10.355 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:31:27.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:31:35.564: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-n9ltb" for this suite. May 19 12:31:41.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:31:41.592: INFO: namespace: e2e-tests-kubelet-test-n9ltb, resource: bindings, ignored listing per whitelist May 19 12:31:41.694: INFO: namespace e2e-tests-kubelet-test-n9ltb deletion completed in 6.12623733s • [SLOW TEST:14.331 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:31:41.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 19 12:31:41.850: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 19 12:31:41.896: INFO: Waiting for terminating namespaces to be deleted... 
May 19 12:31:41.899: INFO: Logging pods the kubelet thinks are on node hunter-worker before test May 19 12:31:41.907: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 19 12:31:41.907: INFO: Container coredns ready: true, restart count 0 May 19 12:31:41.907: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 19 12:31:41.907: INFO: Container kube-proxy ready: true, restart count 0 May 19 12:31:41.907: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 19 12:31:41.907: INFO: Container kindnet-cni ready: true, restart count 0 May 19 12:31:41.907: INFO: Logging pods the kubelet thinks are on node hunter-worker2 before test May 19 12:31:41.912: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 19 12:31:41.912: INFO: Container kindnet-cni ready: true, restart count 0 May 19 12:31:41.912: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 19 12:31:41.912: INFO: Container coredns ready: true, restart count 0 May 19 12:31:41.912: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 19 12:31:41.912: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-b3f1282f-99cc-11ea-abcb-0242ac110018 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-b3f1282f-99cc-11ea-abcb-0242ac110018 off the node hunter-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-b3f1282f-99cc-11ea-abcb-0242ac110018 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:31:50.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-wnzlw" for this suite. May 19 12:32:04.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:32:04.102: INFO: namespace: e2e-tests-sched-pred-wnzlw, resource: bindings, ignored listing per whitelist May 19 12:32:04.129: INFO: namespace e2e-tests-sched-pred-wnzlw deletion completed in 14.089573292s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:22.435 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:32:04.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] 
Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 19 12:32:04.314: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 12:32:04.336: INFO: Number of nodes with available pods: 0 May 19 12:32:04.336: INFO: Node hunter-worker is running more than one daemon pod May 19 12:32:05.347: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 12:32:05.351: INFO: Number of nodes with available pods: 0 May 19 12:32:05.351: INFO: Node hunter-worker is running more than one daemon pod May 19 12:32:06.710: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 12:32:06.933: INFO: Number of nodes with available pods: 0 May 19 12:32:06.933: INFO: Node hunter-worker is running more than one daemon pod May 19 12:32:07.359: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 12:32:07.401: INFO: Number of nodes with available pods: 0 May 19 12:32:07.401: INFO: Node hunter-worker is running more than one daemon pod May 19 12:32:08.341: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 12:32:08.345: INFO: Number of nodes with 
available pods: 0 May 19 12:32:08.345: INFO: Node hunter-worker is running more than one daemon pod May 19 12:32:09.339: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 12:32:09.343: INFO: Number of nodes with available pods: 1 May 19 12:32:09.344: INFO: Node hunter-worker2 is running more than one daemon pod May 19 12:32:10.340: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 12:32:10.344: INFO: Number of nodes with available pods: 2 May 19 12:32:10.344: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 19 12:32:10.370: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 12:32:10.395: INFO: Number of nodes with available pods: 2 May 19 12:32:10.395: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-d4rfj, will wait for the garbage collector to delete the pods May 19 12:32:11.608: INFO: Deleting DaemonSet.extensions daemon-set took: 7.120076ms May 19 12:32:11.808: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.233045ms May 19 12:32:21.821: INFO: Number of nodes with available pods: 0 May 19 12:32:21.821: INFO: Number of running nodes: 0, number of available pods: 0 May 19 12:32:21.827: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-d4rfj/daemonsets","resourceVersion":"11405685"},"items":null} May 19 12:32:21.829: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-d4rfj/pods","resourceVersion":"11405685"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:32:21.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-d4rfj" for this suite. 
May 19 12:32:27.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:32:27.947: INFO: namespace: e2e-tests-daemonsets-d4rfj, resource: bindings, ignored listing per whitelist May 19 12:32:27.955: INFO: namespace e2e-tests-daemonsets-d4rfj deletion completed in 6.113631516s • [SLOW TEST:23.826 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:32:27.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC May 19 12:32:28.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-x95k8' May 19 12:32:28.333: INFO: stderr: "" May 19 12:32:28.333: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
May 19 12:32:29.337: INFO: Selector matched 1 pods for map[app:redis] May 19 12:32:29.337: INFO: Found 0 / 1 May 19 12:32:30.339: INFO: Selector matched 1 pods for map[app:redis] May 19 12:32:30.339: INFO: Found 0 / 1 May 19 12:32:31.339: INFO: Selector matched 1 pods for map[app:redis] May 19 12:32:31.339: INFO: Found 0 / 1 May 19 12:32:32.337: INFO: Selector matched 1 pods for map[app:redis] May 19 12:32:32.338: INFO: Found 0 / 1 May 19 12:32:33.338: INFO: Selector matched 1 pods for map[app:redis] May 19 12:32:33.338: INFO: Found 1 / 1 May 19 12:32:33.338: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 19 12:32:33.341: INFO: Selector matched 1 pods for map[app:redis] May 19 12:32:33.341: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 19 12:32:33.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-7h7qt --namespace=e2e-tests-kubectl-x95k8 -p {"metadata":{"annotations":{"x":"y"}}}' May 19 12:32:33.450: INFO: stderr: "" May 19 12:32:33.450: INFO: stdout: "pod/redis-master-7h7qt patched\n" STEP: checking annotations May 19 12:32:33.455: INFO: Selector matched 1 pods for map[app:redis] May 19 12:32:33.455: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:32:33.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-x95k8" for this suite. 
May 19 12:32:55.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:32:55.503: INFO: namespace: e2e-tests-kubectl-x95k8, resource: bindings, ignored listing per whitelist May 19 12:32:55.543: INFO: namespace e2e-tests-kubectl-x95k8 deletion completed in 22.085491041s • [SLOW TEST:27.587 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:32:55.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 19 12:32:55.644: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dd798a2b-99cc-11ea-abcb-0242ac110018" in namespace "e2e-tests-projected-wr7mp" to be "success or failure" May 19 12:32:55.667: INFO: 
Pod "downwardapi-volume-dd798a2b-99cc-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 22.757553ms May 19 12:32:57.910: INFO: Pod "downwardapi-volume-dd798a2b-99cc-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.265383829s May 19 12:32:59.914: INFO: Pod "downwardapi-volume-dd798a2b-99cc-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.26990494s STEP: Saw pod success May 19 12:32:59.914: INFO: Pod "downwardapi-volume-dd798a2b-99cc-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 12:32:59.917: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-dd798a2b-99cc-11ea-abcb-0242ac110018 container client-container: STEP: delete the pod May 19 12:33:00.265: INFO: Waiting for pod downwardapi-volume-dd798a2b-99cc-11ea-abcb-0242ac110018 to disappear May 19 12:33:00.302: INFO: Pod downwardapi-volume-dd798a2b-99cc-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:33:00.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wr7mp" for this suite. 
May 19 12:33:06.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:33:06.441: INFO: namespace: e2e-tests-projected-wr7mp, resource: bindings, ignored listing per whitelist May 19 12:33:06.454: INFO: namespace e2e-tests-projected-wr7mp deletion completed in 6.134036064s • [SLOW TEST:10.911 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 19 12:33:06.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium May 19 12:33:06.732: INFO: Waiting up to 5m0s for pod "pod-e4162672-99cc-11ea-abcb-0242ac110018" in namespace "e2e-tests-emptydir-d4zf8" to be "success or failure" May 19 12:33:06.761: INFO: Pod "pod-e4162672-99cc-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 29.412904ms May 19 12:33:08.783: INFO: Pod "pod-e4162672-99cc-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.051421847s May 19 12:33:10.820: INFO: Pod "pod-e4162672-99cc-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087774566s May 19 12:33:12.843: INFO: Pod "pod-e4162672-99cc-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.111339133s STEP: Saw pod success May 19 12:33:12.843: INFO: Pod "pod-e4162672-99cc-11ea-abcb-0242ac110018" satisfied condition "success or failure" May 19 12:33:12.846: INFO: Trying to get logs from node hunter-worker2 pod pod-e4162672-99cc-11ea-abcb-0242ac110018 container test-container: STEP: delete the pod May 19 12:33:13.431: INFO: Waiting for pod pod-e4162672-99cc-11ea-abcb-0242ac110018 to disappear May 19 12:33:13.485: INFO: Pod pod-e4162672-99cc-11ea-abcb-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 19 12:33:13.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-d4zf8" for this suite. 
May 19 12:33:19.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:33:19.566: INFO: namespace: e2e-tests-emptydir-d4zf8, resource: bindings, ignored listing per whitelist
May 19 12:33:19.569: INFO: namespace e2e-tests-emptydir-d4zf8 deletion completed in 6.081256764s
• [SLOW TEST:13.115 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
  should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:33:19.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
May 19 12:33:19.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-r9rp6'
May 19 12:33:19.999: INFO: stderr: ""
May 19 12:33:19.999: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 19 12:33:19.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-r9rp6'
May 19 12:33:20.168: INFO: stderr: ""
May 19 12:33:20.168: INFO: stdout: "update-demo-nautilus-58kfv update-demo-nautilus-rrn5h "
May 19 12:33:20.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-58kfv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r9rp6'
May 19 12:33:20.280: INFO: stderr: ""
May 19 12:33:20.280: INFO: stdout: ""
May 19 12:33:20.280: INFO: update-demo-nautilus-58kfv is created but not running
May 19 12:33:25.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-r9rp6'
May 19 12:33:25.391: INFO: stderr: ""
May 19 12:33:25.391: INFO: stdout: "update-demo-nautilus-58kfv update-demo-nautilus-rrn5h "
May 19 12:33:25.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-58kfv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r9rp6'
May 19 12:33:25.488: INFO: stderr: ""
May 19 12:33:25.488: INFO: stdout: "true"
May 19 12:33:25.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-58kfv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r9rp6'
May 19 12:33:25.593: INFO: stderr: ""
May 19 12:33:25.593: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 19 12:33:25.593: INFO: validating pod update-demo-nautilus-58kfv
May 19 12:33:25.597: INFO: got data: {
  "image": "nautilus.jpg"
}
May 19 12:33:25.597: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 19 12:33:25.597: INFO: update-demo-nautilus-58kfv is verified up and running
May 19 12:33:25.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rrn5h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r9rp6'
May 19 12:33:25.720: INFO: stderr: ""
May 19 12:33:25.720: INFO: stdout: "true"
May 19 12:33:25.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rrn5h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r9rp6'
May 19 12:33:25.815: INFO: stderr: ""
May 19 12:33:25.815: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 19 12:33:25.815: INFO: validating pod update-demo-nautilus-rrn5h
May 19 12:33:25.819: INFO: got data: {
  "image": "nautilus.jpg"
}
May 19 12:33:25.819: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 19 12:33:25.819: INFO: update-demo-nautilus-rrn5h is verified up and running
STEP: rolling-update to new replication controller
May 19 12:33:25.821: INFO: scanned /root for discovery docs:
May 19 12:33:25.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-r9rp6'
May 19 12:33:51.410: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
May 19 12:33:51.410: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 19 12:33:51.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-r9rp6'
May 19 12:33:51.537: INFO: stderr: ""
May 19 12:33:51.537: INFO: stdout: "update-demo-kitten-4wcqx update-demo-kitten-sxhtz update-demo-nautilus-rrn5h "
STEP: Replicas for name=update-demo: expected=2 actual=3
May 19 12:33:56.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-r9rp6'
May 19 12:33:56.658: INFO: stderr: ""
May 19 12:33:56.658: INFO: stdout: "update-demo-kitten-4wcqx update-demo-kitten-sxhtz "
May 19 12:33:56.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4wcqx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r9rp6'
May 19 12:33:56.763: INFO: stderr: ""
May 19 12:33:56.763: INFO: stdout: "true"
May 19 12:33:56.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4wcqx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r9rp6'
May 19 12:33:56.857: INFO: stderr: ""
May 19 12:33:56.857: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
May 19 12:33:56.857: INFO: validating pod update-demo-kitten-4wcqx
May 19 12:33:56.864: INFO: got data: {
  "image": "kitten.jpg"
}
May 19 12:33:56.864: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
May 19 12:33:56.864: INFO: update-demo-kitten-4wcqx is verified up and running
May 19 12:33:56.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-sxhtz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r9rp6'
May 19 12:33:56.983: INFO: stderr: ""
May 19 12:33:56.983: INFO: stdout: "true"
May 19 12:33:56.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-sxhtz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r9rp6'
May 19 12:33:57.084: INFO: stderr: ""
May 19 12:33:57.084: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
May 19 12:33:57.084: INFO: validating pod update-demo-kitten-sxhtz
May 19 12:33:57.092: INFO: got data: {
  "image": "kitten.jpg"
}
May 19 12:33:57.092: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
May 19 12:33:57.092: INFO: update-demo-kitten-sxhtz is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 12:33:57.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-r9rp6" for this suite.
May 19 12:34:19.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:34:19.135: INFO: namespace: e2e-tests-kubectl-r9rp6, resource: bindings, ignored listing per whitelist
May 19 12:34:19.186: INFO: namespace e2e-tests-kubectl-r9rp6 deletion completed in 22.090850603s
• [SLOW TEST:59.617 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-node] Downward API
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:34:19.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
May 19 12:34:19.316: INFO: Waiting up to 5m0s for pod "downward-api-0f563697-99cd-11ea-abcb-0242ac110018" in namespace "e2e-tests-downward-api-lkbvm" to be "success or failure"
May 19 12:34:19.328: INFO: Pod "downward-api-0f563697-99cd-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.14749ms
May 19 12:34:21.540: INFO: Pod "downward-api-0f563697-99cd-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223949743s
May 19 12:34:23.544: INFO: Pod "downward-api-0f563697-99cd-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.228101634s
STEP: Saw pod success
May 19 12:34:23.544: INFO: Pod "downward-api-0f563697-99cd-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 12:34:23.547: INFO: Trying to get logs from node hunter-worker2 pod downward-api-0f563697-99cd-11ea-abcb-0242ac110018 container dapi-container:
STEP: delete the pod
May 19 12:34:23.648: INFO: Waiting for pod downward-api-0f563697-99cd-11ea-abcb-0242ac110018 to disappear
May 19 12:34:23.657: INFO: Pod downward-api-0f563697-99cd-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 12:34:23.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-lkbvm" for this suite.
May 19 12:34:29.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:34:29.744: INFO: namespace: e2e-tests-downward-api-lkbvm, resource: bindings, ignored listing per whitelist
May 19 12:34:29.754: INFO: namespace e2e-tests-downward-api-lkbvm deletion completed in 6.094454284s
• [SLOW TEST:10.568 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:34:29.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0519 12:34:39.874105       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 19 12:34:39.874: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 12:34:39.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-zvlxh" for this suite.
May 19 12:34:45.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:34:45.928: INFO: namespace: e2e-tests-gc-zvlxh, resource: bindings, ignored listing per whitelist
May 19 12:34:45.972: INFO: namespace e2e-tests-gc-zvlxh deletion completed in 6.094168239s
• [SLOW TEST:16.217 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:34:45.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 19 12:34:46.082: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1f4c60b9-99cd-11ea-abcb-0242ac110018" in namespace "e2e-tests-downward-api-nkh7x" to be "success or failure"
May 19 12:34:46.086: INFO: Pod "downwardapi-volume-1f4c60b9-99cd-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.673104ms
May 19 12:34:48.089: INFO: Pod "downwardapi-volume-1f4c60b9-99cd-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007100121s
May 19 12:34:50.094: INFO: Pod "downwardapi-volume-1f4c60b9-99cd-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011443354s
STEP: Saw pod success
May 19 12:34:50.094: INFO: Pod "downwardapi-volume-1f4c60b9-99cd-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 12:34:50.096: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-1f4c60b9-99cd-11ea-abcb-0242ac110018 container client-container:
STEP: delete the pod
May 19 12:34:50.127: INFO: Waiting for pod downwardapi-volume-1f4c60b9-99cd-11ea-abcb-0242ac110018 to disappear
May 19 12:34:50.336: INFO: Pod downwardapi-volume-1f4c60b9-99cd-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 12:34:50.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-nkh7x" for this suite.
May 19 12:34:56.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:34:56.425: INFO: namespace: e2e-tests-downward-api-nkh7x, resource: bindings, ignored listing per whitelist
May 19 12:34:56.490: INFO: namespace e2e-tests-downward-api-nkh7x deletion completed in 6.149793212s
• [SLOW TEST:10.518 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:34:56.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
May 19 12:35:01.186: INFO: Successfully updated pod "annotationupdate259976f8-99cd-11ea-abcb-0242ac110018"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 12:35:05.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vq2ls" for this suite.
May 19 12:35:27.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:35:27.267: INFO: namespace: e2e-tests-projected-vq2ls, resource: bindings, ignored listing per whitelist
May 19 12:35:27.328: INFO: namespace e2e-tests-projected-vq2ls deletion completed in 22.107202123s
• [SLOW TEST:30.837 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:35:27.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-37f47cd2-99cd-11ea-abcb-0242ac110018
STEP: Creating a pod to test consume secrets
May 19 12:35:27.452: INFO: Waiting up to 5m0s for pod "pod-secrets-37f5f883-99cd-11ea-abcb-0242ac110018" in namespace "e2e-tests-secrets-8pstl" to be "success or failure"
May 19 12:35:27.455: INFO: Pod "pod-secrets-37f5f883-99cd-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.774965ms
May 19 12:35:29.459: INFO: Pod "pod-secrets-37f5f883-99cd-11ea-abcb-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007187155s
May 19 12:35:31.463: INFO: Pod "pod-secrets-37f5f883-99cd-11ea-abcb-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011366194s
STEP: Saw pod success
May 19 12:35:31.463: INFO: Pod "pod-secrets-37f5f883-99cd-11ea-abcb-0242ac110018" satisfied condition "success or failure"
May 19 12:35:31.466: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-37f5f883-99cd-11ea-abcb-0242ac110018 container secret-volume-test:
STEP: delete the pod
May 19 12:35:31.487: INFO: Waiting for pod pod-secrets-37f5f883-99cd-11ea-abcb-0242ac110018 to disappear
May 19 12:35:31.510: INFO: Pod pod-secrets-37f5f883-99cd-11ea-abcb-0242ac110018 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 12:35:31.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-8pstl" for this suite.
May 19 12:35:37.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:35:37.552: INFO: namespace: e2e-tests-secrets-8pstl, resource: bindings, ignored listing per whitelist
May 19 12:35:37.598: INFO: namespace e2e-tests-secrets-8pstl deletion completed in 6.083472736s
• [SLOW TEST:10.270 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label
  should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:35:37.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
May 19 12:35:37.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-vhtvb'
May 19 12:35:38.171: INFO: stderr: ""
May 19 12:35:38.171: INFO: stdout: "pod/pause created\n"
May 19 12:35:38.171: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
May 19 12:35:38.171: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-vhtvb" to be "running and ready"
May 19 12:35:38.189: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 17.945984ms
May 19 12:35:40.301: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130120368s
May 19 12:35:42.306: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.13465032s
May 19 12:35:42.306: INFO: Pod "pause" satisfied condition "running and ready"
May 19 12:35:42.306: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
May 19 12:35:42.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-vhtvb'
May 19 12:35:42.421: INFO: stderr: ""
May 19 12:35:42.421: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
May 19 12:35:42.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-vhtvb'
May 19 12:35:42.523: INFO: stderr: ""
May 19 12:35:42.523: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n"
STEP: removing the label testing-label of a pod
May 19 12:35:42.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-vhtvb'
May 19 12:35:42.633: INFO: stderr: ""
May 19 12:35:42.633: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
May 19 12:35:42.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-vhtvb'
May 19 12:35:42.732: INFO: stderr: ""
May 19 12:35:42.732: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
May 19 12:35:42.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-vhtvb'
May 19 12:35:42.851: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 19 12:35:42.852: INFO: stdout: "pod \"pause\" force deleted\n"
May 19 12:35:42.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-vhtvb'
May 19 12:35:42.948: INFO: stderr: "No resources found.\n"
May 19 12:35:42.948: INFO: stdout: ""
May 19 12:35:42.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-vhtvb -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 19 12:35:43.047: INFO: stderr: ""
May 19 12:35:43.047: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 12:35:43.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vhtvb" for this suite.
May 19 12:35:49.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:35:49.208: INFO: namespace: e2e-tests-kubectl-vhtvb, resource: bindings, ignored listing per whitelist
May 19 12:35:49.235: INFO: namespace e2e-tests-kubectl-vhtvb deletion completed in 6.184151301s
• [SLOW TEST:11.637 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:35:49.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 12:36:21.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-vtw52" for this suite.
May 19 12:36:27.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:36:27.814: INFO: namespace: e2e-tests-container-runtime-vtw52, resource: bindings, ignored listing per whitelist
May 19 12:36:27.832: INFO: namespace e2e-tests-container-runtime-vtw52 deletion completed in 6.085098373s
• [SLOW TEST:38.596 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server
  should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 19 12:36:27.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
May 19 12:36:27.933: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 19 12:36:28.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8pfbq" for this suite.
May 19 12:36:34.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:36:34.051: INFO: namespace: e2e-tests-kubectl-8pfbq, resource: bindings, ignored listing per whitelist
May 19 12:36:34.121: INFO: namespace e2e-tests-kubectl-8pfbq deletion completed in 6.092980291s
• [SLOW TEST:6.290 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0 [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
May 19 12:36:34.122: INFO: Running AfterSuite actions on all nodes
May 19 12:36:34.122: INFO: Running AfterSuite actions on node 1
May 19 12:36:34.122: INFO: Skipping dumping logs from cluster

Ran 200 of 2164 Specs in 6590.067 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS