I0808 10:47:03.461547 6 e2e.go:224] Starting e2e run "7e255c50-d964-11ea-aaa1-0242ac11000c" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1596883622 - Will randomize all specs
Will run 201 of 2164 specs

Aug 8 10:47:03.655: INFO: >>> kubeConfig: /root/.kube/config
Aug 8 10:47:03.661: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 8 10:47:03.680: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 8 10:47:03.802: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 8 10:47:03.802: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 8 10:47:03.802: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 8 10:47:03.812: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 8 10:47:03.812: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 8 10:47:03.812: INFO: e2e test version: v1.13.12
Aug 8 10:47:03.813: INFO: kube-apiserver version: v1.13.12
SSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 8 10:47:03.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
Aug 8 10:47:04.520: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-m2ddd [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Aug 8 10:47:04.596: INFO: Found 0 stateful pods, waiting for 3 Aug 8 10:47:14.680: INFO: Found 2 stateful pods, waiting for 3 Aug 8 10:47:24.601: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 8 10:47:24.601: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 8 10:47:24.601: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Aug 8 10:47:24.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2ddd ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 8 10:47:24.861: INFO: stderr: "I0808 10:47:24.747233 38 log.go:172] (0xc00016a790) (0xc00068f400) Create stream\nI0808 10:47:24.747292 38 log.go:172] (0xc00016a790) (0xc00068f400) Stream added, broadcasting: 1\nI0808 10:47:24.750528 38 log.go:172] (0xc00016a790) Reply frame received for 1\nI0808 10:47:24.750584 38 log.go:172] (0xc00016a790) (0xc00072e000) Create stream\nI0808 10:47:24.750598 38 log.go:172] (0xc00016a790) (0xc00072e000) Stream added, broadcasting: 3\nI0808 10:47:24.751575 38 log.go:172] (0xc00016a790) Reply frame received for 3\nI0808 10:47:24.751617 38 log.go:172] (0xc00016a790) (0xc00068f4a0) Create stream\nI0808 10:47:24.751630 38 log.go:172] (0xc00016a790) (0xc00068f4a0) Stream added, broadcasting: 5\nI0808 10:47:24.752577 38 log.go:172] (0xc00016a790) Reply frame received for 5\nI0808 10:47:24.854413 38 log.go:172] (0xc00016a790) Data frame received for 3\nI0808 10:47:24.854464 38 log.go:172] (0xc00072e000) (3) Data frame handling\nI0808 10:47:24.854524 38 log.go:172] (0xc00072e000) (3) Data frame sent\nI0808 10:47:24.854575 38 log.go:172] (0xc00016a790) Data frame received for 3\nI0808 10:47:24.854689 38 log.go:172] (0xc00072e000) (3) Data frame handling\nI0808 10:47:24.854994 38 log.go:172] (0xc00016a790) Data frame received for 5\nI0808 10:47:24.855009 38 log.go:172] (0xc00068f4a0) (5) Data frame handling\nI0808 10:47:24.856286 38 log.go:172] (0xc00016a790) Data frame received for 1\nI0808 10:47:24.856306 38 log.go:172] (0xc00068f400) (1) Data frame handling\nI0808 10:47:24.856326 38 log.go:172] (0xc00068f400) (1) Data frame sent\nI0808 10:47:24.856339 38 log.go:172] (0xc00016a790) (0xc00068f400) Stream removed, broadcasting: 1\nI0808 10:47:24.856350 38 log.go:172] (0xc00016a790) Go away received\nI0808 10:47:24.856564 38 log.go:172] (0xc00016a790) (0xc00068f400) Stream removed, broadcasting: 1\nI0808 10:47:24.856577 38 log.go:172] (0xc00016a790) (0xc00072e000) Stream removed, broadcasting: 3\nI0808 10:47:24.856582 38 log.go:172] (0xc00016a790) (0xc00068f4a0) Stream removed, broadcasting: 5\n" Aug 8 10:47:24.861: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 8 10:47:24.861: INFO: stdout of mv -v /usr/share/nginx/html/index.html 
/tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Aug 8 10:47:34.894: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Aug 8 10:47:44.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2ddd ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 8 10:47:45.114: INFO: stderr: "I0808 10:47:45.036444 60 log.go:172] (0xc0006a2420) (0xc0006612c0) Create stream\nI0808 10:47:45.036515 60 log.go:172] (0xc0006a2420) (0xc0006612c0) Stream added, broadcasting: 1\nI0808 10:47:45.039843 60 log.go:172] (0xc0006a2420) Reply frame received for 1\nI0808 10:47:45.039882 60 log.go:172] (0xc0006a2420) (0xc000746000) Create stream\nI0808 10:47:45.039892 60 log.go:172] (0xc0006a2420) (0xc000746000) Stream added, broadcasting: 3\nI0808 10:47:45.040893 60 log.go:172] (0xc0006a2420) Reply frame received for 3\nI0808 10:47:45.040952 60 log.go:172] (0xc0006a2420) (0xc0005c0000) Create stream\nI0808 10:47:45.040971 60 log.go:172] (0xc0006a2420) (0xc0005c0000) Stream added, broadcasting: 5\nI0808 10:47:45.041725 60 log.go:172] (0xc0006a2420) Reply frame received for 5\nI0808 10:47:45.108876 60 log.go:172] (0xc0006a2420) Data frame received for 3\nI0808 10:47:45.108913 60 log.go:172] (0xc000746000) (3) Data frame handling\nI0808 10:47:45.108944 60 log.go:172] (0xc000746000) (3) Data frame sent\nI0808 10:47:45.108958 60 log.go:172] (0xc0006a2420) Data frame received for 3\nI0808 10:47:45.108971 60 log.go:172] (0xc000746000) (3) Data frame handling\nI0808 10:47:45.109082 60 log.go:172] (0xc0006a2420) Data frame received for 5\nI0808 10:47:45.109118 60 log.go:172] (0xc0005c0000) (5) Data frame handling\nI0808 10:47:45.110592 60 log.go:172] (0xc0006a2420) Data frame received for 1\nI0808 10:47:45.110611 60 log.go:172] (0xc0006612c0) (1) Data frame handling\nI0808 10:47:45.110618 60 log.go:172] (0xc0006612c0) (1) Data frame sent\nI0808 10:47:45.110626 60 log.go:172] (0xc0006a2420) (0xc0006612c0) Stream removed, broadcasting: 1\nI0808 10:47:45.110667 60 log.go:172] (0xc0006a2420) Go away received\nI0808 10:47:45.110794 60 log.go:172] (0xc0006a2420) (0xc0006612c0) Stream removed, broadcasting: 1\nI0808 10:47:45.110807 60 log.go:172] (0xc0006a2420) (0xc000746000) Stream removed, broadcasting: 3\nI0808 10:47:45.110821 60 log.go:172] (0xc0006a2420) (0xc0005c0000) Stream removed, broadcasting: 5\n" Aug 8 10:47:45.114: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 8 10:47:45.114: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 8 10:48:15.136: INFO: Waiting for StatefulSet e2e-tests-statefulset-m2ddd/ss2 to complete update Aug 8 10:48:15.136: INFO: Waiting for Pod e2e-tests-statefulset-m2ddd/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision Aug 8 10:48:25.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2ddd ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 8 10:48:25.530: INFO: stderr: "I0808 10:48:25.372285 83 log.go:172] (0xc000138630) (0xc000615400) Create stream\nI0808 10:48:25.372334 83 log.go:172] (0xc000138630) (0xc000615400) Stream added, broadcasting: 
1\nI0808 10:48:25.374610 83 log.go:172] (0xc000138630) Reply frame received for 1\nI0808 10:48:25.374660 83 log.go:172] (0xc000138630) (0xc000594000) Create stream\nI0808 10:48:25.374678 83 log.go:172] (0xc000138630) (0xc000594000) Stream added, broadcasting: 3\nI0808 10:48:25.375685 83 log.go:172] (0xc000138630) Reply frame received for 3\nI0808 10:48:25.375733 83 log.go:172] (0xc000138630) (0xc0002e4000) Create stream\nI0808 10:48:25.375753 83 log.go:172] (0xc000138630) (0xc0002e4000) Stream added, broadcasting: 5\nI0808 10:48:25.376693 83 log.go:172] (0xc000138630) Reply frame received for 5\nI0808 10:48:25.520213 83 log.go:172] (0xc000138630) Data frame received for 3\nI0808 10:48:25.520275 83 log.go:172] (0xc000594000) (3) Data frame handling\nI0808 10:48:25.520309 83 log.go:172] (0xc000594000) (3) Data frame sent\nI0808 10:48:25.520327 83 log.go:172] (0xc000138630) Data frame received for 3\nI0808 10:48:25.520344 83 log.go:172] (0xc000594000) (3) Data frame handling\nI0808 10:48:25.520400 83 log.go:172] (0xc000138630) Data frame received for 5\nI0808 10:48:25.520414 83 log.go:172] (0xc0002e4000) (5) Data frame handling\nI0808 10:48:25.523105 83 log.go:172] (0xc000138630) Data frame received for 1\nI0808 10:48:25.523141 83 log.go:172] (0xc000615400) (1) Data frame handling\nI0808 10:48:25.523176 83 log.go:172] (0xc000615400) (1) Data frame sent\nI0808 10:48:25.523210 83 log.go:172] (0xc000138630) (0xc000615400) Stream removed, broadcasting: 1\nI0808 10:48:25.523244 83 log.go:172] (0xc000138630) Go away received\nI0808 10:48:25.523631 83 log.go:172] (0xc000138630) (0xc000615400) Stream removed, broadcasting: 1\nI0808 10:48:25.523666 83 log.go:172] (0xc000138630) (0xc000594000) Stream removed, broadcasting: 3\nI0808 10:48:25.523691 83 log.go:172] (0xc000138630) (0xc0002e4000) Stream removed, broadcasting: 5\n" Aug 8 10:48:25.530: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 8 10:48:25.530: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 8 10:48:35.566: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Aug 8 10:48:45.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2ddd ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 8 10:48:45.817: INFO: stderr: "I0808 10:48:45.713142 106 log.go:172] (0xc000162840) (0xc00059b400) Create stream\nI0808 10:48:45.713241 106 log.go:172] (0xc000162840) (0xc00059b400) Stream added, broadcasting: 1\nI0808 10:48:45.716025 106 log.go:172] (0xc000162840) Reply frame received for 1\nI0808 10:48:45.716074 106 log.go:172] (0xc000162840) (0xc0005ee000) Create stream\nI0808 10:48:45.716089 106 log.go:172] (0xc000162840) (0xc0005ee000) Stream added, broadcasting: 3\nI0808 10:48:45.717306 106 log.go:172] (0xc000162840) Reply frame received for 3\nI0808 10:48:45.717365 106 log.go:172] (0xc000162840) (0xc0007cc000) Create stream\nI0808 10:48:45.717399 106 log.go:172] (0xc000162840) (0xc0007cc000) Stream added, broadcasting: 5\nI0808 10:48:45.718419 106 log.go:172] (0xc000162840) Reply frame received for 5\nI0808 10:48:45.810220 106 log.go:172] (0xc000162840) Data frame received for 3\nI0808 10:48:45.810250 106 log.go:172] (0xc0005ee000) (3) Data frame handling\nI0808 10:48:45.810263 106 log.go:172] (0xc0005ee000) (3) Data frame sent\nI0808 10:48:45.810299 106 log.go:172] (0xc000162840) Data frame received for 3\nI0808 
10:48:45.810320 106 log.go:172] (0xc0005ee000) (3) Data frame handling\nI0808 10:48:45.810373 106 log.go:172] (0xc000162840) Data frame received for 5\nI0808 10:48:45.810412 106 log.go:172] (0xc0007cc000) (5) Data frame handling\nI0808 10:48:45.812160 106 log.go:172] (0xc000162840) Data frame received for 1\nI0808 10:48:45.812210 106 log.go:172] (0xc00059b400) (1) Data frame handling\nI0808 10:48:45.812246 106 log.go:172] (0xc00059b400) (1) Data frame sent\nI0808 10:48:45.812289 106 log.go:172] (0xc000162840) (0xc00059b400) Stream removed, broadcasting: 1\nI0808 10:48:45.812311 106 log.go:172] (0xc000162840) Go away received\nI0808 10:48:45.812602 106 log.go:172] (0xc000162840) (0xc00059b400) Stream removed, broadcasting: 1\nI0808 10:48:45.812648 106 log.go:172] (0xc000162840) (0xc0005ee000) Stream removed, broadcasting: 3\nI0808 10:48:45.812679 106 log.go:172] (0xc000162840) (0xc0007cc000) Stream removed, broadcasting: 5\n" Aug 8 10:48:45.817: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 8 10:48:45.817: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 8 10:48:55.836: INFO: Waiting for StatefulSet e2e-tests-statefulset-m2ddd/ss2 to complete update Aug 8 10:48:55.836: INFO: Waiting for Pod e2e-tests-statefulset-m2ddd/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Aug 8 10:48:55.836: INFO: Waiting for Pod e2e-tests-statefulset-m2ddd/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Aug 8 10:48:55.836: INFO: Waiting for Pod e2e-tests-statefulset-m2ddd/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Aug 8 10:49:05.845: INFO: Waiting for StatefulSet e2e-tests-statefulset-m2ddd/ss2 to complete update Aug 8 10:49:05.845: INFO: Waiting for Pod e2e-tests-statefulset-m2ddd/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Aug 8 10:49:05.845: INFO: Waiting for Pod e2e-tests-statefulset-m2ddd/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Aug 8 10:49:15.851: INFO: Waiting for StatefulSet e2e-tests-statefulset-m2ddd/ss2 to complete update Aug 8 10:49:15.852: INFO: Waiting for Pod e2e-tests-statefulset-m2ddd/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Aug 8 10:49:25.845: INFO: Deleting all statefulset in ns e2e-tests-statefulset-m2ddd Aug 8 10:49:25.847: INFO: Scaling statefulset ss2 to 0 Aug 8 10:49:55.860: INFO: Waiting for statefulset status.replicas updated to 0 Aug 8 10:49:55.863: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 10:49:55.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-m2ddd" for this suite. 
Aug 8 10:50:05.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 10:50:06.034: INFO: namespace: e2e-tests-statefulset-m2ddd, resource: bindings, ignored listing per whitelist Aug 8 10:50:06.052: INFO: namespace e2e-tests-statefulset-m2ddd deletion completed in 10.174946233s • [SLOW TEST:182.238 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 10:50:06.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 8 10:50:06.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Aug 8 10:50:06.360: INFO: stderr: "" Aug 8 10:50:06.360: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-07-10T10:25:27Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:50:51Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 10:50:06.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-hjxlq" for this suite. 
Aug 8 10:50:12.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 10:50:12.418: INFO: namespace: e2e-tests-kubectl-hjxlq, resource: bindings, ignored listing per whitelist Aug 8 10:50:12.456: INFO: namespace e2e-tests-kubectl-hjxlq deletion completed in 6.079956241s • [SLOW TEST:6.404 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 10:50:12.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 8 10:50:12.584: INFO: Waiting up to 5m0s for pod "pod-ef77cc8a-d964-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-emptydir-bckw9" to be "success or failure" Aug 8 10:50:12.599: INFO: Pod "pod-ef77cc8a-d964-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.782677ms Aug 8 10:50:14.602: INFO: Pod "pod-ef77cc8a-d964-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018257965s Aug 8 10:50:16.735: INFO: Pod "pod-ef77cc8a-d964-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.151177125s STEP: Saw pod success Aug 8 10:50:16.735: INFO: Pod "pod-ef77cc8a-d964-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 10:50:16.739: INFO: Trying to get logs from node hunter-worker pod pod-ef77cc8a-d964-11ea-aaa1-0242ac11000c container test-container: STEP: delete the pod Aug 8 10:50:16.779: INFO: Waiting for pod pod-ef77cc8a-d964-11ea-aaa1-0242ac11000c to disappear Aug 8 10:50:16.926: INFO: Pod pod-ef77cc8a-d964-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 10:50:16.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-bckw9" for this suite. 
Aug 8 10:50:22.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 10:50:23.007: INFO: namespace: e2e-tests-emptydir-bckw9, resource: bindings, ignored listing per whitelist Aug 8 10:50:23.028: INFO: namespace e2e-tests-emptydir-bckw9 deletion completed in 6.097900623s • [SLOW TEST:10.571 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 10:50:23.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-f60af5ca-d964-11ea-aaa1-0242ac11000c STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-f60af5ca-d964-11ea-aaa1-0242ac11000c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 10:50:29.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-cqqhg" for this suite. 
Aug 8 10:50:54.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 10:50:54.047: INFO: namespace: e2e-tests-projected-cqqhg, resource: bindings, ignored listing per whitelist Aug 8 10:50:54.183: INFO: namespace e2e-tests-projected-cqqhg deletion completed in 24.267307847s • [SLOW TEST:31.154 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 10:50:54.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 8 10:50:54.371: INFO: Waiting up to 5m0s for pod "downwardapi-volume-085b9042-d965-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-downward-api-ht7dl" to be "success or failure" Aug 8 10:50:54.405: INFO: Pod "downwardapi-volume-085b9042-d965-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 34.503412ms Aug 8 10:50:56.437: INFO: Pod "downwardapi-volume-085b9042-d965-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06617063s Aug 8 10:50:58.508: INFO: Pod "downwardapi-volume-085b9042-d965-11ea-aaa1-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 4.137798889s Aug 8 10:51:00.513: INFO: Pod "downwardapi-volume-085b9042-d965-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.142468264s STEP: Saw pod success Aug 8 10:51:00.513: INFO: Pod "downwardapi-volume-085b9042-d965-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 10:51:00.516: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-085b9042-d965-11ea-aaa1-0242ac11000c container client-container: STEP: delete the pod Aug 8 10:51:00.544: INFO: Waiting for pod downwardapi-volume-085b9042-d965-11ea-aaa1-0242ac11000c to disappear Aug 8 10:51:00.603: INFO: Pod downwardapi-volume-085b9042-d965-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 10:51:00.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-ht7dl" for this suite. 
Aug 8 10:51:06.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 10:51:06.679: INFO: namespace: e2e-tests-downward-api-ht7dl, resource: bindings, ignored listing per whitelist Aug 8 10:51:06.753: INFO: namespace e2e-tests-downward-api-ht7dl deletion completed in 6.145832523s • [SLOW TEST:12.571 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 10:51:06.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode Aug 8 10:51:06.874: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-925px" to be "success or failure" Aug 8 10:51:06.890: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 15.93471ms Aug 8 10:51:08.895: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020239431s Aug 8 10:51:10.939: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064713286s Aug 8 10:51:12.943: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.068545854s STEP: Saw pod success Aug 8 10:51:12.943: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Aug 8 10:51:12.946: INFO: Trying to get logs from node hunter-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Aug 8 10:51:13.266: INFO: Waiting for pod pod-host-path-test to disappear Aug 8 10:51:13.394: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 10:51:13.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-925px" for this suite. 
Aug 8 10:51:21.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 10:51:21.731: INFO: namespace: e2e-tests-hostpath-925px, resource: bindings, ignored listing per whitelist Aug 8 10:51:21.751: INFO: namespace e2e-tests-hostpath-925px deletion completed in 8.350981159s • [SLOW TEST:14.998 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 10:51:21.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-18e3c88d-d965-11ea-aaa1-0242ac11000c STEP: Creating a pod to test consume secrets Aug 8 10:51:22.262: INFO: Waiting up to 5m0s for pod "pod-secrets-18ffeb65-d965-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-secrets-2j6tw" to be "success or failure" Aug 8 10:51:22.357: INFO: Pod "pod-secrets-18ffeb65-d965-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 94.758964ms Aug 8 10:51:24.361: INFO: Pod "pod-secrets-18ffeb65-d965-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098842863s Aug 8 10:51:26.550: INFO: Pod "pod-secrets-18ffeb65-d965-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.288090803s Aug 8 10:51:28.630: INFO: Pod "pod-secrets-18ffeb65-d965-11ea-aaa1-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 6.367941665s Aug 8 10:51:30.635: INFO: Pod "pod-secrets-18ffeb65-d965-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.372296302s STEP: Saw pod success Aug 8 10:51:30.635: INFO: Pod "pod-secrets-18ffeb65-d965-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 10:51:30.637: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-18ffeb65-d965-11ea-aaa1-0242ac11000c container secret-volume-test: STEP: delete the pod Aug 8 10:51:30.773: INFO: Waiting for pod pod-secrets-18ffeb65-d965-11ea-aaa1-0242ac11000c to disappear Aug 8 10:51:30.796: INFO: Pod pod-secrets-18ffeb65-d965-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 10:51:30.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-2j6tw" for this suite. 
Aug 8 10:51:36.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 10:51:37.003: INFO: namespace: e2e-tests-secrets-2j6tw, resource: bindings, ignored listing per whitelist Aug 8 10:51:37.003: INFO: namespace e2e-tests-secrets-2j6tw deletion completed in 6.200312063s • [SLOW TEST:15.252 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 10:51:37.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-hf9fk STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-hf9fk to expose endpoints map[] Aug 8 10:51:37.267: INFO: Get endpoints failed (15.391741ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Aug 8 10:51:38.271: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-hf9fk exposes endpoints map[] (1.018973548s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-hf9fk STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-hf9fk to expose endpoints map[pod1:[80]] Aug 8 10:51:42.384: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-hf9fk exposes endpoints map[pod1:[80]] (4.107743157s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-hf9fk STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-hf9fk to expose endpoints map[pod1:[80] pod2:[80]] Aug 8 10:51:46.542: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-hf9fk exposes endpoints map[pod1:[80] pod2:[80]] (4.155269568s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-hf9fk STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-hf9fk to expose endpoints map[pod2:[80]] Aug 8 10:51:47.605: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-hf9fk exposes endpoints map[pod2:[80]] (1.059790104s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-hf9fk STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-hf9fk to expose endpoints map[] Aug 8 10:51:48.676: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-hf9fk exposes endpoints map[] (1.066965938s elapsed) [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 10:51:48.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-hf9fk" for this suite. Aug 8 10:52:10.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 10:52:10.956: INFO: namespace: e2e-tests-services-hf9fk, resource: bindings, ignored listing per whitelist Aug 8 10:52:11.028: INFO: namespace e2e-tests-services-hf9fk deletion completed in 22.158764217s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:34.025 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 10:52:11.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Aug 8 10:52:11.145: INFO: Waiting up to 5m0s for pod "pod-361ee21f-d965-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-emptydir-6rm8v" to be "success or failure" Aug 8 10:52:11.154: INFO: Pod "pod-361ee21f-d965-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.401713ms Aug 8 10:52:13.365: INFO: Pod "pod-361ee21f-d965-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220311289s Aug 8 10:52:15.369: INFO: Pod "pod-361ee21f-d965-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.224117772s STEP: Saw pod success Aug 8 10:52:15.369: INFO: Pod "pod-361ee21f-d965-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 10:52:15.372: INFO: Trying to get logs from node hunter-worker pod pod-361ee21f-d965-11ea-aaa1-0242ac11000c container test-container: STEP: delete the pod Aug 8 10:52:15.539: INFO: Waiting for pod pod-361ee21f-d965-11ea-aaa1-0242ac11000c to disappear Aug 8 10:52:15.542: INFO: Pod pod-361ee21f-d965-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 10:52:15.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-6rm8v" for this suite. 
Aug 8 10:52:21.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 10:52:21.697: INFO: namespace: e2e-tests-emptydir-6rm8v, resource: bindings, ignored listing per whitelist Aug 8 10:52:21.708: INFO: namespace e2e-tests-emptydir-6rm8v deletion completed in 6.130016547s • [SLOW TEST:10.679 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 10:52:21.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 10:52:25.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-l9sg9" for this suite. 
Aug 8 10:52:32.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 10:52:32.058: INFO: namespace: e2e-tests-emptydir-wrapper-l9sg9, resource: bindings, ignored listing per whitelist Aug 8 10:52:32.093: INFO: namespace e2e-tests-emptydir-wrapper-l9sg9 deletion completed in 6.117271176s • [SLOW TEST:10.385 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 10:52:32.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-42adf89e-d965-11ea-aaa1-0242ac11000c STEP: Creating a pod to test consume secrets Aug 8 10:52:32.223: INFO: Waiting up to 5m0s for pod "pod-secrets-42b0dae6-d965-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-secrets-qtjcb" to be "success or failure" Aug 8 10:52:32.237: INFO: Pod "pod-secrets-42b0dae6-d965-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.375535ms Aug 8 10:52:34.288: INFO: Pod "pod-secrets-42b0dae6-d965-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064380488s Aug 8 10:52:36.292: INFO: Pod "pod-secrets-42b0dae6-d965-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068503756s STEP: Saw pod success Aug 8 10:52:36.292: INFO: Pod "pod-secrets-42b0dae6-d965-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 10:52:36.295: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-42b0dae6-d965-11ea-aaa1-0242ac11000c container secret-volume-test: STEP: delete the pod Aug 8 10:52:36.334: INFO: Waiting for pod pod-secrets-42b0dae6-d965-11ea-aaa1-0242ac11000c to disappear Aug 8 10:52:36.343: INFO: Pod pod-secrets-42b0dae6-d965-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 10:52:36.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-qtjcb" for this suite. 
Aug 8 10:52:42.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 8 10:52:42.631: INFO: namespace: e2e-tests-secrets-qtjcb, resource: bindings, ignored listing per whitelist
Aug 8 10:52:42.633: INFO: namespace e2e-tests-secrets-qtjcb deletion completed in 6.286513711s

• [SLOW TEST:10.540 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 8 10:52:42.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-qsx2g
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-qsx2g
STEP: Deleting pre-stop pod
Aug 8 10:52:55.834: INFO: Saw: {
    "Hostname": "server",
    "Sent": null,
    "Received": {
        "prestop": 1
    },
    "Errors": null,
    "Log": [
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
    ],
    "StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 8 10:52:55.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-qsx2g" for this suite. 
Aug 8 10:53:35.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 10:53:35.893: INFO: namespace: e2e-tests-prestop-qsx2g, resource: bindings, ignored listing per whitelist Aug 8 10:53:35.946: INFO: namespace e2e-tests-prestop-qsx2g deletion completed in 40.09318657s • [SLOW TEST:53.313 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 10:53:35.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 8 10:53:36.083: INFO: Creating ReplicaSet my-hostname-basic-68c4363c-d965-11ea-aaa1-0242ac11000c Aug 8 10:53:36.098: INFO: Pod name my-hostname-basic-68c4363c-d965-11ea-aaa1-0242ac11000c: Found 0 pods out of 1 Aug 8 10:53:41.102: INFO: Pod name my-hostname-basic-68c4363c-d965-11ea-aaa1-0242ac11000c: Found 1 pods out of 1 Aug 8 10:53:41.102: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-68c4363c-d965-11ea-aaa1-0242ac11000c" is running Aug 8 10:53:41.104: INFO: Pod "my-hostname-basic-68c4363c-d965-11ea-aaa1-0242ac11000c-vst8l" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-08 10:53:36 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-08 10:53:39 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-08 10:53:39 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-08 10:53:36 +0000 UTC Reason: Message:}]) Aug 8 10:53:41.104: INFO: Trying to dial the pod Aug 8 10:53:46.272: INFO: Controller my-hostname-basic-68c4363c-d965-11ea-aaa1-0242ac11000c: Got expected result from replica 1 [my-hostname-basic-68c4363c-d965-11ea-aaa1-0242ac11000c-vst8l]: "my-hostname-basic-68c4363c-d965-11ea-aaa1-0242ac11000c-vst8l", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 10:53:46.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-n92vn" for this suite. 
Aug 8 10:53:52.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 10:53:52.648: INFO: namespace: e2e-tests-replicaset-n92vn, resource: bindings, ignored listing per whitelist Aug 8 10:53:52.693: INFO: namespace e2e-tests-replicaset-n92vn deletion completed in 6.418039823s • [SLOW TEST:16.747 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 10:53:52.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-72b815d2-d965-11ea-aaa1-0242ac11000c STEP: Creating a pod to test consume configMaps Aug 8 10:53:52.799: INFO: Waiting up to 5m0s for pod "pod-configmaps-72b9a0ab-d965-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-configmap-gd7l2" to be "success or failure" Aug 8 10:53:52.803: INFO: Pod "pod-configmaps-72b9a0ab-d965-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.215727ms Aug 8 10:53:54.807: INFO: Pod "pod-configmaps-72b9a0ab-d965-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008388797s Aug 8 10:53:56.810: INFO: Pod "pod-configmaps-72b9a0ab-d965-11ea-aaa1-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 4.011474735s Aug 8 10:53:58.814: INFO: Pod "pod-configmaps-72b9a0ab-d965-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015369077s STEP: Saw pod success Aug 8 10:53:58.814: INFO: Pod "pod-configmaps-72b9a0ab-d965-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 10:53:58.817: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-72b9a0ab-d965-11ea-aaa1-0242ac11000c container configmap-volume-test: STEP: delete the pod Aug 8 10:53:58.834: INFO: Waiting for pod pod-configmaps-72b9a0ab-d965-11ea-aaa1-0242ac11000c to disappear Aug 8 10:53:58.839: INFO: Pod pod-configmaps-72b9a0ab-d965-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 10:53:58.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-gd7l2" for this suite. 
Aug 8 10:54:05.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 10:54:05.271: INFO: namespace: e2e-tests-configmap-gd7l2, resource: bindings, ignored listing per whitelist Aug 8 10:54:05.286: INFO: namespace e2e-tests-configmap-gd7l2 deletion completed in 6.444629399s • [SLOW TEST:12.593 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 10:54:05.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Aug 8 10:54:05.461: INFO: Waiting up to 5m0s for pod "pod-7a44fb4d-d965-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-emptydir-2k8sg" to be "success or failure" Aug 8 10:54:05.552: INFO: Pod "pod-7a44fb4d-d965-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 91.665792ms Aug 8 10:54:07.608: INFO: Pod "pod-7a44fb4d-d965-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147241301s Aug 8 10:54:09.612: INFO: Pod "pod-7a44fb4d-d965-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.151048226s STEP: Saw pod success Aug 8 10:54:09.612: INFO: Pod "pod-7a44fb4d-d965-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 10:54:09.614: INFO: Trying to get logs from node hunter-worker2 pod pod-7a44fb4d-d965-11ea-aaa1-0242ac11000c container test-container: STEP: delete the pod Aug 8 10:54:09.692: INFO: Waiting for pod pod-7a44fb4d-d965-11ea-aaa1-0242ac11000c to disappear Aug 8 10:54:09.710: INFO: Pod pod-7a44fb4d-d965-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 10:54:09.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-2k8sg" for this suite. 
Aug 8 10:54:15.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 10:54:15.781: INFO: namespace: e2e-tests-emptydir-2k8sg, resource: bindings, ignored listing per whitelist Aug 8 10:54:15.790: INFO: namespace e2e-tests-emptydir-2k8sg deletion completed in 6.076210133s • [SLOW TEST:10.504 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 10:54:15.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Aug 8 10:54:27.937: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-thqdt PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 8 10:54:27.937: INFO: >>> kubeConfig: /root/.kube/config I0808 10:54:27.963967 6 log.go:172] (0xc000015970) (0xc001dacfa0) Create stream I0808 10:54:27.964005 6 log.go:172] (0xc000015970) (0xc001dacfa0) Stream added, broadcasting: 1 I0808 10:54:27.966304 6 log.go:172] (0xc000015970) Reply frame received for 1 I0808 10:54:27.966352 6 log.go:172] (0xc000015970) (0xc0012e01e0) Create stream I0808 10:54:27.966364 6 log.go:172] (0xc000015970) (0xc0012e01e0) Stream added, broadcasting: 3 I0808 10:54:27.967276 6 log.go:172] (0xc000015970) Reply frame received for 3 I0808 10:54:27.967320 6 log.go:172] (0xc000015970) (0xc001dad040) Create stream I0808 10:54:27.967340 6 log.go:172] (0xc000015970) (0xc001dad040) Stream added, broadcasting: 5 I0808 10:54:27.968281 6 log.go:172] (0xc000015970) Reply frame received for 5 I0808 10:54:28.054246 6 log.go:172] (0xc000015970) Data frame received for 5 I0808 10:54:28.054294 6 log.go:172] (0xc000015970) Data frame received for 3 I0808 10:54:28.054329 6 log.go:172] (0xc0012e01e0) (3) Data frame handling I0808 10:54:28.054339 6 log.go:172] (0xc0012e01e0) (3) Data frame sent I0808 10:54:28.054350 6 log.go:172] (0xc000015970) Data frame received for 3 I0808 10:54:28.054364 6 log.go:172] (0xc0012e01e0) (3) Data frame handling I0808 10:54:28.054394 6 log.go:172] (0xc001dad040) (5) Data frame handling I0808 10:54:28.055504 6 log.go:172] (0xc000015970) Data frame received for 1 I0808 10:54:28.055527 6 log.go:172] (0xc001dacfa0) (1) Data frame handling I0808 10:54:28.055538 6 log.go:172] (0xc001dacfa0) (1) Data frame sent 
I0808 10:54:28.055561 6 log.go:172] (0xc000015970) (0xc001dacfa0) Stream removed, broadcasting: 1 I0808 10:54:28.055585 6 log.go:172] (0xc000015970) Go away received I0808 10:54:28.055756 6 log.go:172] (0xc000015970) (0xc001dacfa0) Stream removed, broadcasting: 1 I0808 10:54:28.055777 6 log.go:172] (0xc000015970) (0xc0012e01e0) Stream removed, broadcasting: 3 I0808 10:54:28.055785 6 log.go:172] (0xc000015970) (0xc001dad040) Stream removed, broadcasting: 5 Aug 8 10:54:28.055: INFO: Exec stderr: "" Aug 8 10:54:28.055: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-thqdt PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 8 10:54:28.055: INFO: >>> kubeConfig: /root/.kube/config I0808 10:54:28.086718 6 log.go:172] (0xc000e3e2c0) (0xc001dad2c0) Create stream I0808 10:54:28.086749 6 log.go:172] (0xc000e3e2c0) (0xc001dad2c0) Stream added, broadcasting: 1 I0808 10:54:28.095830 6 log.go:172] (0xc000e3e2c0) Reply frame received for 1 I0808 10:54:28.095877 6 log.go:172] (0xc000e3e2c0) (0xc0012e0280) Create stream I0808 10:54:28.095895 6 log.go:172] (0xc000e3e2c0) (0xc0012e0280) Stream added, broadcasting: 3 I0808 10:54:28.097460 6 log.go:172] (0xc000e3e2c0) Reply frame received for 3 I0808 10:54:28.097488 6 log.go:172] (0xc000e3e2c0) (0xc0012e0320) Create stream I0808 10:54:28.097496 6 log.go:172] (0xc000e3e2c0) (0xc0012e0320) Stream added, broadcasting: 5 I0808 10:54:28.099293 6 log.go:172] (0xc000e3e2c0) Reply frame received for 5 I0808 10:54:28.157758 6 log.go:172] (0xc000e3e2c0) Data frame received for 3 I0808 10:54:28.157822 6 log.go:172] (0xc0012e0280) (3) Data frame handling I0808 10:54:28.157851 6 log.go:172] (0xc0012e0280) (3) Data frame sent I0808 10:54:28.157869 6 log.go:172] (0xc000e3e2c0) Data frame received for 3 I0808 10:54:28.157884 6 log.go:172] (0xc0012e0280) (3) Data frame handling I0808 10:54:28.157918 6 log.go:172] (0xc000e3e2c0) Data frame received for 5 I0808 10:54:28.157958 6 log.go:172] (0xc0012e0320) (5) Data frame handling I0808 10:54:28.159698 6 log.go:172] (0xc000e3e2c0) Data frame received for 1 I0808 10:54:28.159718 6 log.go:172] (0xc001dad2c0) (1) Data frame handling I0808 10:54:28.159735 6 log.go:172] (0xc001dad2c0) (1) Data frame sent I0808 10:54:28.159755 6 log.go:172] (0xc000e3e2c0) (0xc001dad2c0) Stream removed, broadcasting: 1 I0808 10:54:28.159866 6 log.go:172] (0xc000e3e2c0) Go away received I0808 10:54:28.159912 6 log.go:172] (0xc000e3e2c0) (0xc001dad2c0) Stream removed, broadcasting: 1 I0808 10:54:28.159985 6 log.go:172] (0xc000e3e2c0) (0xc0012e0280) Stream removed, broadcasting: 3 I0808 10:54:28.160001 6 log.go:172] (0xc000e3e2c0) (0xc0012e0320) Stream removed, broadcasting: 5 Aug 8 10:54:28.160: INFO: Exec stderr: "" Aug 8 10:54:28.160: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-thqdt PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 8 10:54:28.160: INFO: >>> kubeConfig: /root/.kube/config I0808 10:54:28.204977 6 log.go:172] (0xc000e3e790) (0xc001dad540) Create stream I0808 10:54:28.205036 6 log.go:172] (0xc000e3e790) (0xc001dad540) Stream added, broadcasting: 1 I0808 10:54:28.207576 6 log.go:172] (0xc000e3e790) Reply frame received for 1 I0808 10:54:28.207598 6 log.go:172] (0xc000e3e790) (0xc001dad5e0) Create stream I0808 10:54:28.207606 6 log.go:172] (0xc000e3e790) (0xc001dad5e0) Stream added, broadcasting: 3 I0808 10:54:28.208514 
6 log.go:172] (0xc000e3e790) Reply frame received for 3 I0808 10:54:28.208536 6 log.go:172] (0xc000e3e790) (0xc0012e03c0) Create stream I0808 10:54:28.208544 6 log.go:172] (0xc000e3e790) (0xc0012e03c0) Stream added, broadcasting: 5 I0808 10:54:28.209619 6 log.go:172] (0xc000e3e790) Reply frame received for 5 I0808 10:54:28.271416 6 log.go:172] (0xc000e3e790) Data frame received for 5 I0808 10:54:28.271462 6 log.go:172] (0xc0012e03c0) (5) Data frame handling I0808 10:54:28.271494 6 log.go:172] (0xc000e3e790) Data frame received for 3 I0808 10:54:28.271512 6 log.go:172] (0xc001dad5e0) (3) Data frame handling I0808 10:54:28.271535 6 log.go:172] (0xc001dad5e0) (3) Data frame sent I0808 10:54:28.271559 6 log.go:172] (0xc000e3e790) Data frame received for 3 I0808 10:54:28.271572 6 log.go:172] (0xc001dad5e0) (3) Data frame handling I0808 10:54:28.272935 6 log.go:172] (0xc000e3e790) Data frame received for 1 I0808 10:54:28.272987 6 log.go:172] (0xc001dad540) (1) Data frame handling I0808 10:54:28.273036 6 log.go:172] (0xc001dad540) (1) Data frame sent I0808 10:54:28.273061 6 log.go:172] (0xc000e3e790) (0xc001dad540) Stream removed, broadcasting: 1 I0808 10:54:28.273076 6 log.go:172] (0xc000e3e790) Go away received I0808 10:54:28.273205 6 log.go:172] (0xc000e3e790) (0xc001dad540) Stream removed, broadcasting: 1 I0808 10:54:28.273220 6 log.go:172] (0xc000e3e790) (0xc001dad5e0) Stream removed, broadcasting: 3 I0808 10:54:28.273233 6 log.go:172] (0xc000e3e790) (0xc0012e03c0) Stream removed, broadcasting: 5 Aug 8 10:54:28.273: INFO: Exec stderr: "" Aug 8 10:54:28.273: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-thqdt PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 8 10:54:28.273: INFO: >>> kubeConfig: /root/.kube/config I0808 10:54:28.300497 6 log.go:172] (0xc00093b810) (0xc001e8e960) Create stream I0808 10:54:28.300521 6 log.go:172] (0xc00093b810) (0xc001e8e960) Stream added, broadcasting: 1 I0808 10:54:28.303276 6 log.go:172] (0xc00093b810) Reply frame received for 1 I0808 10:54:28.303322 6 log.go:172] (0xc00093b810) (0xc0012e0460) Create stream I0808 10:54:28.303338 6 log.go:172] (0xc00093b810) (0xc0012e0460) Stream added, broadcasting: 3 I0808 10:54:28.304334 6 log.go:172] (0xc00093b810) Reply frame received for 3 I0808 10:54:28.304361 6 log.go:172] (0xc00093b810) (0xc0012e0500) Create stream I0808 10:54:28.304370 6 log.go:172] (0xc00093b810) (0xc0012e0500) Stream added, broadcasting: 5 I0808 10:54:28.305392 6 log.go:172] (0xc00093b810) Reply frame received for 5 I0808 10:54:28.372546 6 log.go:172] (0xc00093b810) Data frame received for 5 I0808 10:54:28.372592 6 log.go:172] (0xc0012e0500) (5) Data frame handling I0808 10:54:28.372632 6 log.go:172] (0xc00093b810) Data frame received for 3 I0808 10:54:28.372648 6 log.go:172] (0xc0012e0460) (3) Data frame handling I0808 10:54:28.372669 6 log.go:172] (0xc0012e0460) (3) Data frame sent I0808 10:54:28.372683 6 log.go:172] (0xc00093b810) Data frame received for 3 I0808 10:54:28.372700 6 log.go:172] (0xc0012e0460) (3) Data frame handling I0808 10:54:28.374562 6 log.go:172] (0xc00093b810) Data frame received for 1 I0808 10:54:28.374583 6 log.go:172] (0xc001e8e960) (1) Data frame handling I0808 10:54:28.374597 6 log.go:172] (0xc001e8e960) (1) Data frame sent I0808 10:54:28.374691 6 log.go:172] (0xc00093b810) (0xc001e8e960) Stream removed, broadcasting: 1 I0808 10:54:28.374779 6 log.go:172] (0xc00093b810) (0xc001e8e960) Stream removed, 
broadcasting: 1 I0808 10:54:28.374801 6 log.go:172] (0xc00093b810) (0xc0012e0460) Stream removed, broadcasting: 3 I0808 10:54:28.374975 6 log.go:172] (0xc00093b810) (0xc0012e0500) Stream removed, broadcasting: 5 I0808 10:54:28.375069 6 log.go:172] (0xc00093b810) Go away received Aug 8 10:54:28.375: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Aug 8 10:54:28.375: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-thqdt PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 8 10:54:28.375: INFO: >>> kubeConfig: /root/.kube/config I0808 10:54:28.405536 6 log.go:172] (0xc0005ae2c0) (0xc001efb7c0) Create stream I0808 10:54:28.405566 6 log.go:172] (0xc0005ae2c0) (0xc001efb7c0) Stream added, broadcasting: 1 I0808 10:54:28.407682 6 log.go:172] (0xc0005ae2c0) Reply frame received for 1 I0808 10:54:28.407713 6 log.go:172] (0xc0005ae2c0) (0xc001e8ea00) Create stream I0808 10:54:28.407724 6 log.go:172] (0xc0005ae2c0) (0xc001e8ea00) Stream added, broadcasting: 3 I0808 10:54:28.408636 6 log.go:172] (0xc0005ae2c0) Reply frame received for 3 I0808 10:54:28.408662 6 log.go:172] (0xc0005ae2c0) (0xc001dad680) Create stream I0808 10:54:28.408678 6 log.go:172] (0xc0005ae2c0) (0xc001dad680) Stream added, broadcasting: 5 I0808 10:54:28.409835 6 log.go:172] (0xc0005ae2c0) Reply frame received for 5 I0808 10:54:28.488389 6 log.go:172] (0xc0005ae2c0) Data frame received for 3 I0808 10:54:28.488417 6 log.go:172] (0xc001e8ea00) (3) Data frame handling I0808 10:54:28.488431 6 log.go:172] (0xc001e8ea00) (3) Data frame sent I0808 10:54:28.488439 6 log.go:172] (0xc0005ae2c0) Data frame received for 3 I0808 10:54:28.488446 6 log.go:172] (0xc001e8ea00) (3) Data frame handling I0808 10:54:28.488473 6 log.go:172] (0xc0005ae2c0) Data frame received for 5 I0808 10:54:28.488482 6 log.go:172] (0xc001dad680) (5) Data frame handling I0808 10:54:28.489600 6 log.go:172] (0xc0005ae2c0) Data frame received for 1 I0808 10:54:28.489624 6 log.go:172] (0xc001efb7c0) (1) Data frame handling I0808 10:54:28.489673 6 log.go:172] (0xc001efb7c0) (1) Data frame sent I0808 10:54:28.489694 6 log.go:172] (0xc0005ae2c0) (0xc001efb7c0) Stream removed, broadcasting: 1 I0808 10:54:28.489723 6 log.go:172] (0xc0005ae2c0) Go away received I0808 10:54:28.489824 6 log.go:172] (0xc0005ae2c0) (0xc001efb7c0) Stream removed, broadcasting: 1 I0808 10:54:28.489849 6 log.go:172] (0xc0005ae2c0) (0xc001e8ea00) Stream removed, broadcasting: 3 I0808 10:54:28.489861 6 log.go:172] (0xc0005ae2c0) (0xc001dad680) Stream removed, broadcasting: 5 Aug 8 10:54:28.489: INFO: Exec stderr: "" Aug 8 10:54:28.489: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-thqdt PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 8 10:54:28.489: INFO: >>> kubeConfig: /root/.kube/config I0808 10:54:28.513532 6 log.go:172] (0xc0014002c0) (0xc0012e0820) Create stream I0808 10:54:28.513564 6 log.go:172] (0xc0014002c0) (0xc0012e0820) Stream added, broadcasting: 1 I0808 10:54:28.516234 6 log.go:172] (0xc0014002c0) Reply frame received for 1 I0808 10:54:28.516275 6 log.go:172] (0xc0014002c0) (0xc0012e08c0) Create stream I0808 10:54:28.516289 6 log.go:172] (0xc0014002c0) (0xc0012e08c0) Stream added, broadcasting: 3 I0808 10:54:28.517138 6 log.go:172] (0xc0014002c0) Reply frame received for 3 I0808 
10:54:28.517167 6 log.go:172] (0xc0014002c0) (0xc001e8eaa0) Create stream I0808 10:54:28.517182 6 log.go:172] (0xc0014002c0) (0xc001e8eaa0) Stream added, broadcasting: 5 I0808 10:54:28.517930 6 log.go:172] (0xc0014002c0) Reply frame received for 5 I0808 10:54:28.577706 6 log.go:172] (0xc0014002c0) Data frame received for 3 I0808 10:54:28.577756 6 log.go:172] (0xc0012e08c0) (3) Data frame handling I0808 10:54:28.577777 6 log.go:172] (0xc0012e08c0) (3) Data frame sent I0808 10:54:28.577809 6 log.go:172] (0xc0014002c0) Data frame received for 3 I0808 10:54:28.577853 6 log.go:172] (0xc0012e08c0) (3) Data frame handling I0808 10:54:28.577878 6 log.go:172] (0xc0014002c0) Data frame received for 5 I0808 10:54:28.577895 6 log.go:172] (0xc001e8eaa0) (5) Data frame handling I0808 10:54:28.579206 6 log.go:172] (0xc0014002c0) Data frame received for 1 I0808 10:54:28.579230 6 log.go:172] (0xc0012e0820) (1) Data frame handling I0808 10:54:28.579246 6 log.go:172] (0xc0012e0820) (1) Data frame sent I0808 10:54:28.579267 6 log.go:172] (0xc0014002c0) (0xc0012e0820) Stream removed, broadcasting: 1 I0808 10:54:28.579279 6 log.go:172] (0xc0014002c0) Go away received I0808 10:54:28.579457 6 log.go:172] (0xc0014002c0) (0xc0012e0820) Stream removed, broadcasting: 1 I0808 10:54:28.579497 6 log.go:172] (0xc0014002c0) (0xc0012e08c0) Stream removed, broadcasting: 3 I0808 10:54:28.579522 6 log.go:172] (0xc0014002c0) (0xc001e8eaa0) Stream removed, broadcasting: 5 Aug 8 10:54:28.579: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Aug 8 10:54:28.579: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-thqdt PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 8 10:54:28.579: INFO: >>> kubeConfig: /root/.kube/config I0808 10:54:28.615509 6 log.go:172] (0xc0005ae790) (0xc001efba40) Create stream I0808 10:54:28.615537 6 log.go:172] (0xc0005ae790) (0xc001efba40) Stream added, broadcasting: 1 I0808 10:54:28.618455 6 log.go:172] (0xc0005ae790) Reply frame received for 1 I0808 10:54:28.618496 6 log.go:172] (0xc0005ae790) (0xc0012e0960) Create stream I0808 10:54:28.618506 6 log.go:172] (0xc0005ae790) (0xc0012e0960) Stream added, broadcasting: 3 I0808 10:54:28.619684 6 log.go:172] (0xc0005ae790) Reply frame received for 3 I0808 10:54:28.619738 6 log.go:172] (0xc0005ae790) (0xc0012e0a00) Create stream I0808 10:54:28.619753 6 log.go:172] (0xc0005ae790) (0xc0012e0a00) Stream added, broadcasting: 5 I0808 10:54:28.620607 6 log.go:172] (0xc0005ae790) Reply frame received for 5 I0808 10:54:28.683715 6 log.go:172] (0xc0005ae790) Data frame received for 5 I0808 10:54:28.683740 6 log.go:172] (0xc0012e0a00) (5) Data frame handling I0808 10:54:28.683786 6 log.go:172] (0xc0005ae790) Data frame received for 3 I0808 10:54:28.683823 6 log.go:172] (0xc0012e0960) (3) Data frame handling I0808 10:54:28.683838 6 log.go:172] (0xc0012e0960) (3) Data frame sent I0808 10:54:28.683853 6 log.go:172] (0xc0005ae790) Data frame received for 3 I0808 10:54:28.683861 6 log.go:172] (0xc0012e0960) (3) Data frame handling I0808 10:54:28.685675 6 log.go:172] (0xc0005ae790) Data frame received for 1 I0808 10:54:28.685700 6 log.go:172] (0xc001efba40) (1) Data frame handling I0808 10:54:28.685711 6 log.go:172] (0xc001efba40) (1) Data frame sent I0808 10:54:28.685721 6 log.go:172] (0xc0005ae790) (0xc001efba40) Stream removed, broadcasting: 1 I0808 10:54:28.685734 6 log.go:172] 
(0xc0005ae790) Go away received I0808 10:54:28.685953 6 log.go:172] (0xc0005ae790) (0xc001efba40) Stream removed, broadcasting: 1 I0808 10:54:28.685967 6 log.go:172] (0xc0005ae790) (0xc0012e0960) Stream removed, broadcasting: 3 I0808 10:54:28.685974 6 log.go:172] (0xc0005ae790) (0xc0012e0a00) Stream removed, broadcasting: 5 Aug 8 10:54:28.685: INFO: Exec stderr: "" Aug 8 10:54:28.686: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-thqdt PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 8 10:54:28.686: INFO: >>> kubeConfig: /root/.kube/config I0808 10:54:28.843833 6 log.go:172] (0xc000e3ec60) (0xc001dad900) Create stream I0808 10:54:28.843866 6 log.go:172] (0xc000e3ec60) (0xc001dad900) Stream added, broadcasting: 1 I0808 10:54:28.847231 6 log.go:172] (0xc000e3ec60) Reply frame received for 1 I0808 10:54:28.847277 6 log.go:172] (0xc000e3ec60) (0xc001f400a0) Create stream I0808 10:54:28.847295 6 log.go:172] (0xc000e3ec60) (0xc001f400a0) Stream added, broadcasting: 3 I0808 10:54:28.848256 6 log.go:172] (0xc000e3ec60) Reply frame received for 3 I0808 10:54:28.848296 6 log.go:172] (0xc000e3ec60) (0xc001e8eb40) Create stream I0808 10:54:28.848306 6 log.go:172] (0xc000e3ec60) (0xc001e8eb40) Stream added, broadcasting: 5 I0808 10:54:28.849220 6 log.go:172] (0xc000e3ec60) Reply frame received for 5 I0808 10:54:28.932392 6 log.go:172] (0xc000e3ec60) Data frame received for 5 I0808 10:54:28.932445 6 log.go:172] (0xc000e3ec60) Data frame received for 3 I0808 10:54:28.932474 6 log.go:172] (0xc001f400a0) (3) Data frame handling I0808 10:54:28.932506 6 log.go:172] (0xc001f400a0) (3) Data frame sent I0808 10:54:28.932527 6 log.go:172] (0xc000e3ec60) Data frame received for 3 I0808 10:54:28.932540 6 log.go:172] (0xc001f400a0) (3) Data frame handling I0808 10:54:28.932558 6 log.go:172] (0xc001e8eb40) (5) Data frame handling I0808 10:54:28.933703 6 log.go:172] (0xc000e3ec60) Data frame received for 1 I0808 10:54:28.933724 6 log.go:172] (0xc001dad900) (1) Data frame handling I0808 10:54:28.933749 6 log.go:172] (0xc001dad900) (1) Data frame sent I0808 10:54:28.933911 6 log.go:172] (0xc000e3ec60) (0xc001dad900) Stream removed, broadcasting: 1 I0808 10:54:28.933982 6 log.go:172] (0xc000e3ec60) Go away received I0808 10:54:28.934025 6 log.go:172] (0xc000e3ec60) (0xc001dad900) Stream removed, broadcasting: 1 I0808 10:54:28.934054 6 log.go:172] (0xc000e3ec60) (0xc001f400a0) Stream removed, broadcasting: 3 I0808 10:54:28.934066 6 log.go:172] (0xc000e3ec60) (0xc001e8eb40) Stream removed, broadcasting: 5 Aug 8 10:54:28.934: INFO: Exec stderr: "" Aug 8 10:54:28.934: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-thqdt PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 8 10:54:28.934: INFO: >>> kubeConfig: /root/.kube/config I0808 10:54:28.956921 6 log.go:172] (0xc001400790) (0xc0012e0be0) Create stream I0808 10:54:28.956969 6 log.go:172] (0xc001400790) (0xc0012e0be0) Stream added, broadcasting: 1 I0808 10:54:28.959306 6 log.go:172] (0xc001400790) Reply frame received for 1 I0808 10:54:28.959357 6 log.go:172] (0xc001400790) (0xc0012e0c80) Create stream I0808 10:54:28.959370 6 log.go:172] (0xc001400790) (0xc0012e0c80) Stream added, broadcasting: 3 I0808 10:54:28.960245 6 log.go:172] (0xc001400790) Reply frame received for 3 I0808 10:54:28.960298 6 log.go:172] (0xc001400790) 
(0xc001e8ebe0) Create stream I0808 10:54:28.960321 6 log.go:172] (0xc001400790) (0xc001e8ebe0) Stream added, broadcasting: 5 I0808 10:54:28.961496 6 log.go:172] (0xc001400790) Reply frame received for 5 I0808 10:54:29.030499 6 log.go:172] (0xc001400790) Data frame received for 5 I0808 10:54:29.030523 6 log.go:172] (0xc001e8ebe0) (5) Data frame handling I0808 10:54:29.030564 6 log.go:172] (0xc001400790) Data frame received for 3 I0808 10:54:29.030599 6 log.go:172] (0xc0012e0c80) (3) Data frame handling I0808 10:54:29.030620 6 log.go:172] (0xc0012e0c80) (3) Data frame sent I0808 10:54:29.030653 6 log.go:172] (0xc001400790) Data frame received for 3 I0808 10:54:29.030680 6 log.go:172] (0xc0012e0c80) (3) Data frame handling I0808 10:54:29.032198 6 log.go:172] (0xc001400790) Data frame received for 1 I0808 10:54:29.032220 6 log.go:172] (0xc0012e0be0) (1) Data frame handling I0808 10:54:29.032239 6 log.go:172] (0xc0012e0be0) (1) Data frame sent I0808 10:54:29.032485 6 log.go:172] (0xc001400790) (0xc0012e0be0) Stream removed, broadcasting: 1 I0808 10:54:29.032539 6 log.go:172] (0xc001400790) Go away received I0808 10:54:29.032659 6 log.go:172] (0xc001400790) (0xc0012e0be0) Stream removed, broadcasting: 1 I0808 10:54:29.032686 6 log.go:172] (0xc001400790) (0xc0012e0c80) Stream removed, broadcasting: 3 I0808 10:54:29.032698 6 log.go:172] (0xc001400790) (0xc001e8ebe0) Stream removed, broadcasting: 5 Aug 8 10:54:29.032: INFO: Exec stderr: "" Aug 8 10:54:29.032: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-thqdt PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 8 10:54:29.032: INFO: >>> kubeConfig: /root/.kube/config I0808 10:54:29.064145 6 log.go:172] (0xc001f3e2c0) (0xc001f40320) Create stream I0808 10:54:29.064170 6 log.go:172] (0xc001f3e2c0) (0xc001f40320) Stream added, broadcasting: 1 I0808 10:54:29.066883 6 log.go:172] (0xc001f3e2c0) Reply frame received for 1 I0808 10:54:29.066917 6 log.go:172] (0xc001f3e2c0) (0xc001f403c0) Create stream I0808 10:54:29.066928 6 log.go:172] (0xc001f3e2c0) (0xc001f403c0) Stream added, broadcasting: 3 I0808 10:54:29.067745 6 log.go:172] (0xc001f3e2c0) Reply frame received for 3 I0808 10:54:29.067808 6 log.go:172] (0xc001f3e2c0) (0xc001e8ec80) Create stream I0808 10:54:29.067827 6 log.go:172] (0xc001f3e2c0) (0xc001e8ec80) Stream added, broadcasting: 5 I0808 10:54:29.068893 6 log.go:172] (0xc001f3e2c0) Reply frame received for 5 I0808 10:54:29.135846 6 log.go:172] (0xc001f3e2c0) Data frame received for 5 I0808 10:54:29.135897 6 log.go:172] (0xc001e8ec80) (5) Data frame handling I0808 10:54:29.135937 6 log.go:172] (0xc001f3e2c0) Data frame received for 3 I0808 10:54:29.135955 6 log.go:172] (0xc001f403c0) (3) Data frame handling I0808 10:54:29.135975 6 log.go:172] (0xc001f403c0) (3) Data frame sent I0808 10:54:29.135987 6 log.go:172] (0xc001f3e2c0) Data frame received for 3 I0808 10:54:29.135999 6 log.go:172] (0xc001f403c0) (3) Data frame handling I0808 10:54:29.137793 6 log.go:172] (0xc001f3e2c0) Data frame received for 1 I0808 10:54:29.137868 6 log.go:172] (0xc001f40320) (1) Data frame handling I0808 10:54:29.137909 6 log.go:172] (0xc001f40320) (1) Data frame sent I0808 10:54:29.137937 6 log.go:172] (0xc001f3e2c0) (0xc001f40320) Stream removed, broadcasting: 1 I0808 10:54:29.137962 6 log.go:172] (0xc001f3e2c0) Go away received I0808 10:54:29.138020 6 log.go:172] (0xc001f3e2c0) (0xc001f40320) Stream removed, broadcasting: 1 I0808 
10:54:29.138043 6 log.go:172] (0xc001f3e2c0) (0xc001f403c0) Stream removed, broadcasting: 3 I0808 10:54:29.138065 6 log.go:172] (0xc001f3e2c0) (0xc001e8ec80) Stream removed, broadcasting: 5 Aug 8 10:54:29.138: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 10:54:29.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-thqdt" for this suite. Aug 8 10:55:23.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 10:55:23.247: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-thqdt, resource: bindings, ignored listing per whitelist Aug 8 10:55:23.249: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-thqdt deletion completed in 54.106958262s • [SLOW TEST:67.458 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 10:55:23.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Aug 8 10:55:23.424: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7rlq4,SelfLink:/api/v1/namespaces/e2e-tests-watch-7rlq4/configmaps/e2e-watch-test-configmap-a,UID:a8becdfb-d965-11ea-b2c9-0242ac120008,ResourceVersion:5154029,Generation:0,CreationTimestamp:2020-08-08 10:55:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 8 10:55:23.424: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7rlq4,SelfLink:/api/v1/namespaces/e2e-tests-watch-7rlq4/configmaps/e2e-watch-test-configmap-a,UID:a8becdfb-d965-11ea-b2c9-0242ac120008,ResourceVersion:5154029,Generation:0,CreationTimestamp:2020-08-08 10:55:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Aug 8 10:55:33.431: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7rlq4,SelfLink:/api/v1/namespaces/e2e-tests-watch-7rlq4/configmaps/e2e-watch-test-configmap-a,UID:a8becdfb-d965-11ea-b2c9-0242ac120008,ResourceVersion:5154077,Generation:0,CreationTimestamp:2020-08-08 10:55:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Aug 8 10:55:33.431: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7rlq4,SelfLink:/api/v1/namespaces/e2e-tests-watch-7rlq4/configmaps/e2e-watch-test-configmap-a,UID:a8becdfb-d965-11ea-b2c9-0242ac120008,ResourceVersion:5154077,Generation:0,CreationTimestamp:2020-08-08 10:55:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Aug 8 10:55:43.437: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7rlq4,SelfLink:/api/v1/namespaces/e2e-tests-watch-7rlq4/configmaps/e2e-watch-test-configmap-a,UID:a8becdfb-d965-11ea-b2c9-0242ac120008,ResourceVersion:5154124,Generation:0,CreationTimestamp:2020-08-08 10:55:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 8 10:55:43.437: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7rlq4,SelfLink:/api/v1/namespaces/e2e-tests-watch-7rlq4/configmaps/e2e-watch-test-configmap-a,UID:a8becdfb-d965-11ea-b2c9-0242ac120008,ResourceVersion:5154124,Generation:0,CreationTimestamp:2020-08-08 10:55:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Aug 8 10:55:53.444: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7rlq4,SelfLink:/api/v1/namespaces/e2e-tests-watch-7rlq4/configmaps/e2e-watch-test-configmap-a,UID:a8becdfb-d965-11ea-b2c9-0242ac120008,ResourceVersion:5154144,Generation:0,CreationTimestamp:2020-08-08 10:55:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 8 10:55:53.444: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7rlq4,SelfLink:/api/v1/namespaces/e2e-tests-watch-7rlq4/configmaps/e2e-watch-test-configmap-a,UID:a8becdfb-d965-11ea-b2c9-0242ac120008,ResourceVersion:5154144,Generation:0,CreationTimestamp:2020-08-08 10:55:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Aug 8 10:56:03.450: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7rlq4,SelfLink:/api/v1/namespaces/e2e-tests-watch-7rlq4/configmaps/e2e-watch-test-configmap-b,UID:c099f059-d965-11ea-b2c9-0242ac120008,ResourceVersion:5154164,Generation:0,CreationTimestamp:2020-08-08 10:56:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 8 10:56:03.450: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7rlq4,SelfLink:/api/v1/namespaces/e2e-tests-watch-7rlq4/configmaps/e2e-watch-test-configmap-b,UID:c099f059-d965-11ea-b2c9-0242ac120008,ResourceVersion:5154164,Generation:0,CreationTimestamp:2020-08-08 10:56:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Aug 8 10:56:13.455: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7rlq4,SelfLink:/api/v1/namespaces/e2e-tests-watch-7rlq4/configmaps/e2e-watch-test-configmap-b,UID:c099f059-d965-11ea-b2c9-0242ac120008,ResourceVersion:5154185,Generation:0,CreationTimestamp:2020-08-08 10:56:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 8 
10:56:13.456: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7rlq4,SelfLink:/api/v1/namespaces/e2e-tests-watch-7rlq4/configmaps/e2e-watch-test-configmap-b,UID:c099f059-d965-11ea-b2c9-0242ac120008,ResourceVersion:5154185,Generation:0,CreationTimestamp:2020-08-08 10:56:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 10:56:23.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-7rlq4" for this suite. Aug 8 10:56:29.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 10:56:29.554: INFO: namespace: e2e-tests-watch-7rlq4, resource: bindings, ignored listing per whitelist Aug 8 10:56:29.559: INFO: namespace e2e-tests-watch-7rlq4 deletion completed in 6.09900863s • [SLOW TEST:66.310 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 10:56:29.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Aug 8 10:56:37.257: INFO: 0 pods remaining Aug 8 10:56:37.257: INFO: 0 pods has nil DeletionTimestamp Aug 8 10:56:37.257: INFO: Aug 8 10:56:37.906: INFO: 0 pods remaining Aug 8 10:56:37.906: INFO: 0 pods has nil DeletionTimestamp Aug 8 10:56:37.906: INFO: Aug 8 10:56:38.633: INFO: 0 pods remaining Aug 8 10:56:38.633: INFO: 0 pods has nil DeletionTimestamp Aug 8 10:56:38.633: INFO: STEP: Gathering metrics W0808 10:56:39.547294 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
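(Editor's note: the "wait for the rc to be deleted" loop a few entries above is the foreground-cascading-deletion behaviour this spec is named after: when the deleteOptions ask for it, the ReplicationController is kept, with a deletionTimestamp and the foregroundDeletion finalizer, until the garbage collector has removed all of its pods. A minimal sketch of issuing that kind of delete follows; it assumes a current, context-aware client-go, whose signatures differ slightly from the v1.13-era client used by this run, and the namespace and RC name are placeholders, not values from this log.)

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from a kubeconfig, the same way the suite does.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Foreground propagation: the RC stays around until all of its pods are
	// gone, which is what the test's delete-then-wait loop observes.
	propagation := metav1.DeletePropagationForeground
	err = clientset.CoreV1().
		ReplicationControllers("default"). // placeholder namespace
		Delete(context.TODO(), "example-rc", metav1.DeleteOptions{ // placeholder name
			PropagationPolicy: &propagation,
		})
	if err != nil {
		panic(err)
	}
}

With PropagationPolicy set to Background the RC object would disappear immediately and the pods would be cleaned up afterwards; with Orphan the pods would be left running.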
Aug 8 10:56:39.547: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 8 10:56:39.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-5ftwq" for this suite.
Aug 8 10:56:47.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 8 10:56:47.647: INFO: namespace: e2e-tests-gc-5ftwq, resource: bindings, ignored listing per whitelist
Aug 8 10:56:47.654: INFO: namespace e2e-tests-gc-5ftwq deletion completed in 8.104837669s
• [SLOW TEST:18.094 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap
should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 8 10:56:47.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-db1a677c-d965-11ea-aaa1-0242ac11000c
STEP: Creating a pod to test consume configMaps
Aug 8 10:56:47.944: INFO: Waiting up to 5m0s for pod "pod-configmaps-db1b0152-d965-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-configmap-pgndv" to be "success or failure"
Aug 8 10:56:48.232: INFO: Pod "pod-configmaps-db1b0152-d965-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false.
Elapsed: 288.114747ms
Aug 8 10:56:50.249: INFO: Pod "pod-configmaps-db1b0152-d965-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.305276352s
Aug 8 10:56:52.254: INFO: Pod "pod-configmaps-db1b0152-d965-11ea-aaa1-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 4.310612855s
Aug 8 10:56:54.258: INFO: Pod "pod-configmaps-db1b0152-d965-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.314825343s
STEP: Saw pod success
Aug 8 10:56:54.258: INFO: Pod "pod-configmaps-db1b0152-d965-11ea-aaa1-0242ac11000c" satisfied condition "success or failure"
Aug 8 10:56:54.262: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-db1b0152-d965-11ea-aaa1-0242ac11000c container configmap-volume-test:
STEP: delete the pod
Aug 8 10:56:54.382: INFO: Waiting for pod pod-configmaps-db1b0152-d965-11ea-aaa1-0242ac11000c to disappear
Aug 8 10:56:54.394: INFO: Pod pod-configmaps-db1b0152-d965-11ea-aaa1-0242ac11000c no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 8 10:56:54.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-pgndv" for this suite.
Aug 8 10:57:00.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 8 10:57:00.439: INFO: namespace: e2e-tests-configmap-pgndv, resource: bindings, ignored listing per whitelist
Aug 8 10:57:00.504: INFO: namespace e2e-tests-configmap-pgndv deletion completed in 6.107072122s
• [SLOW TEST:12.850 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] ReplicationController
should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 8 10:57:00.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 8 10:57:07.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-5lrj7" for this suite.
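(Editor's note: the adoption flow logged just above involves only two objects: a bare pod carrying a name=pod-adoption label, and a ReplicationController whose selector matches that label; the controller manager then adopts the existing pod, setting itself as its controller ownerReference, instead of creating a second replica. A rough sketch of those two objects with the core/v1 types follows; the image and replica count are illustrative, only the label/selector relationship comes from the log.)

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// A bare pod carrying the label the RC will select on.
	orphan := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "pod-adoption",
			Labels: map[string]string{"name": "pod-adoption"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}}, // illustrative image
		},
	}

	// An RC whose selector matches the pod's label; once created, the
	// controller adopts the existing pod rather than starting another one.
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: int32Ptr(1),
			Selector: map[string]string{"name": "pod-adoption"},
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels: map[string]string{"name": "pod-adoption"},
				},
				Spec: orphan.Spec,
			},
		},
	}

	fmt.Println(orphan.Name, rc.Name)
}

Creating these two objects in order, pod first and then the RC, reproduces the Given/When/Then steps logged above.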
Aug 8 10:57:29.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 10:57:29.748: INFO: namespace: e2e-tests-replication-controller-5lrj7, resource: bindings, ignored listing per whitelist Aug 8 10:57:29.821: INFO: namespace e2e-tests-replication-controller-5lrj7 deletion completed in 22.09949472s • [SLOW TEST:29.316 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 10:57:29.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 8 10:57:29.963: INFO: Pod name rollover-pod: Found 0 pods out of 1 Aug 8 10:57:34.968: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 8 10:57:34.968: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Aug 8 10:57:36.971: INFO: Creating deployment "test-rollover-deployment" Aug 8 10:57:36.985: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Aug 8 10:57:38.989: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Aug 8 10:57:38.994: INFO: Ensure that both replica sets have 1 created replica Aug 8 10:57:38.999: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Aug 8 10:57:39.004: INFO: Updating deployment test-rollover-deployment Aug 8 10:57:39.004: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Aug 8 10:57:41.038: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Aug 8 10:57:41.043: INFO: Make sure deployment "test-rollover-deployment" is complete Aug 8 10:57:41.049: INFO: all replica sets need to contain the pod-template-hash label Aug 8 10:57:41.049: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732481057, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732481057, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732481059, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732481056, 
loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 8 10:57:43.057: INFO: all replica sets need to contain the pod-template-hash label Aug 8 10:57:43.057: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732481057, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732481057, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732481062, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732481056, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 8 10:57:45.078: INFO: all replica sets need to contain the pod-template-hash label Aug 8 10:57:45.078: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732481057, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732481057, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732481062, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732481056, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 8 10:57:47.055: INFO: all replica sets need to contain the pod-template-hash label Aug 8 10:57:47.055: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732481057, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732481057, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732481062, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732481056, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 8 10:57:49.097: INFO: all replica sets need to contain the pod-template-hash label Aug 8 10:57:49.097: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732481057, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732481057, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732481062, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732481056, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 8 10:57:51.056: INFO: all replica sets need to contain the pod-template-hash label Aug 8 10:57:51.056: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732481057, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732481057, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732481062, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732481056, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 8 10:57:53.576: INFO: Aug 8 10:57:53.576: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Aug 8 10:57:53.805: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-gq7qj,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gq7qj/deployments/test-rollover-deployment,UID:f8590db2-d965-11ea-b2c9-0242ac120008,ResourceVersion:5154818,Generation:2,CreationTimestamp:2020-08-08 10:57:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-08 10:57:37 +0000 UTC 2020-08-08 10:57:37 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-08 10:57:52 +0000 UTC 2020-08-08 10:57:36 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Aug 8 10:57:53.808: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-gq7qj,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gq7qj/replicasets/test-rollover-deployment-5b8479fdb6,UID:f98f310d-d965-11ea-b2c9-0242ac120008,ResourceVersion:5154808,Generation:2,CreationTimestamp:2020-08-08 10:57:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f8590db2-d965-11ea-b2c9-0242ac120008 0xc0008ebda7 0xc0008ebda8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Aug 8 10:57:53.808: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Aug 8 10:57:53.809: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-gq7qj,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gq7qj/replicasets/test-rollover-controller,UID:f422dab9-d965-11ea-b2c9-0242ac120008,ResourceVersion:5154817,Generation:2,CreationTimestamp:2020-08-08 10:57:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f8590db2-d965-11ea-b2c9-0242ac120008 0xc0008eb627 0xc0008eb628}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 8 10:57:53.809: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-gq7qj,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gq7qj/replicasets/test-rollover-deployment-58494b7559,UID:f85bdb48-d965-11ea-b2c9-0242ac120008,ResourceVersion:5154728,Generation:2,CreationTimestamp:2020-08-08 10:57:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f8590db2-d965-11ea-b2c9-0242ac120008 0xc0008eb907 0xc0008eb908}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 8 10:57:53.812: INFO: Pod "test-rollover-deployment-5b8479fdb6-q29jx" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-q29jx,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-gq7qj,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gq7qj/pods/test-rollover-deployment-5b8479fdb6-q29jx,UID:f99dc54f-d965-11ea-b2c9-0242ac120008,ResourceVersion:5154741,Generation:0,CreationTimestamp:2020-08-08 10:57:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 f98f310d-d965-11ea-b2c9-0242ac120008 0xc000ed91d7 0xc000ed91d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s8kw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s8kw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-s8kw4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ed9440} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ed9460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 10:57:39 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 10:57:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 10:57:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-08-08 10:57:39 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.204,StartTime:2020-08-08 10:57:39 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-08 10:57:42 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://a74be0b6bb6386f2b1275af6f083d5b9ad02b70456ee571d948814426200d806}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 10:57:53.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-gq7qj" for this suite. Aug 8 10:58:01.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 10:58:01.972: INFO: namespace: e2e-tests-deployment-gq7qj, resource: bindings, ignored listing per whitelist Aug 8 10:58:01.974: INFO: namespace e2e-tests-deployment-gq7qj deletion completed in 8.15953155s • [SLOW TEST:32.153 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 10:58:01.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Aug 8 10:58:02.176: INFO: Waiting up to 5m0s for pod "downward-api-075abbe5-d966-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-downward-api-nwnd9" to be "success or failure" Aug 8 10:58:02.192: INFO: Pod "downward-api-075abbe5-d966-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.54445ms Aug 8 10:58:04.336: INFO: Pod "downward-api-075abbe5-d966-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160027329s Aug 8 10:58:06.473: INFO: Pod "downward-api-075abbe5-d966-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.296185008s Aug 8 10:58:08.477: INFO: Pod "downward-api-075abbe5-d966-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.300558895s STEP: Saw pod success Aug 8 10:58:08.477: INFO: Pod "downward-api-075abbe5-d966-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 10:58:08.479: INFO: Trying to get logs from node hunter-worker2 pod downward-api-075abbe5-d966-11ea-aaa1-0242ac11000c container dapi-container: STEP: delete the pod Aug 8 10:58:08.492: INFO: Waiting for pod downward-api-075abbe5-d966-11ea-aaa1-0242ac11000c to disappear Aug 8 10:58:08.612: INFO: Pod downward-api-075abbe5-d966-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 10:58:08.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-nwnd9" for this suite. Aug 8 10:58:14.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 10:58:14.766: INFO: namespace: e2e-tests-downward-api-nwnd9, resource: bindings, ignored listing per whitelist Aug 8 10:58:14.773: INFO: namespace e2e-tests-downward-api-nwnd9 deletion completed in 6.15752582s • [SLOW TEST:12.799 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 10:58:14.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Aug 8 10:58:14.871: INFO: Waiting up to 5m0s for pod "client-containers-0eee1e1a-d966-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-containers-bnxz6" to be "success or failure" Aug 8 10:58:14.875: INFO: Pod "client-containers-0eee1e1a-d966-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.551633ms Aug 8 10:58:16.880: INFO: Pod "client-containers-0eee1e1a-d966-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008600025s Aug 8 10:58:18.884: INFO: Pod "client-containers-0eee1e1a-d966-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013052543s Aug 8 10:58:20.888: INFO: Pod "client-containers-0eee1e1a-d966-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.016632171s STEP: Saw pod success Aug 8 10:58:20.888: INFO: Pod "client-containers-0eee1e1a-d966-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 10:58:20.891: INFO: Trying to get logs from node hunter-worker2 pod client-containers-0eee1e1a-d966-11ea-aaa1-0242ac11000c container test-container: STEP: delete the pod Aug 8 10:58:20.977: INFO: Waiting for pod client-containers-0eee1e1a-d966-11ea-aaa1-0242ac11000c to disappear Aug 8 10:58:21.045: INFO: Pod client-containers-0eee1e1a-d966-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 10:58:21.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-bnxz6" for this suite. Aug 8 10:58:27.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 10:58:27.101: INFO: namespace: e2e-tests-containers-bnxz6, resource: bindings, ignored listing per whitelist Aug 8 10:58:27.209: INFO: namespace e2e-tests-containers-bnxz6 deletion completed in 6.160226449s • [SLOW TEST:12.435 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 10:58:27.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
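For reference, the entrypoint-override case that just passed boils down to setting the container's command (and optionally args) in the pod spec, which replaces the image's ENTRYPOINT and CMD. A minimal sketch follows; the pod name, image tag, and echoed strings are illustrative, not the exact manifest the framework submitted:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-override-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/echo"]            # replaces the image ENTRYPOINT
    args: ["overridden", "command"]   # replaces the image CMD
EOF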
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 8 10:58:35.477: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 8 10:58:35.498: INFO: Pod pod-with-prestop-http-hook still exists Aug 8 10:58:37.499: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 8 10:58:37.503: INFO: Pod pod-with-prestop-http-hook still exists Aug 8 10:58:39.499: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 8 10:58:39.504: INFO: Pod pod-with-prestop-http-hook still exists Aug 8 10:58:41.499: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 8 10:58:41.503: INFO: Pod pod-with-prestop-http-hook still exists Aug 8 10:58:43.499: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 8 10:58:43.502: INFO: Pod pod-with-prestop-http-hook still exists Aug 8 10:58:45.499: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 8 10:58:45.503: INFO: Pod pod-with-prestop-http-hook still exists Aug 8 10:58:47.499: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 8 10:58:47.575: INFO: Pod pod-with-prestop-http-hook still exists Aug 8 10:58:49.499: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 8 10:58:49.502: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 10:58:49.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-f9ntc" for this suite. 
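The prestop test above registers an HTTP GET lifecycle hook that fires when the pod with the hook is deleted, and then checks that the handler pod received the request. A rough sketch of such a pod follows; the name, image, handler address, and path are illustrative rather than the ones created by this run:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook-demo   # illustrative name
spec:
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop   # illustrative handler path
          port: 8080
          host: 10.244.1.1          # illustrative: IP of the pod serving the hook
EOF

# Deleting the pod triggers the preStop request before the container is stopped:
kubectl delete pod pod-with-prestop-http-hook-demo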
Aug 8 10:59:11.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 10:59:11.565: INFO: namespace: e2e-tests-container-lifecycle-hook-f9ntc, resource: bindings, ignored listing per whitelist Aug 8 10:59:11.598: INFO: namespace e2e-tests-container-lifecycle-hook-f9ntc deletion completed in 22.086970957s • [SLOW TEST:44.389 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 10:59:11.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-30cffcbe-d966-11ea-aaa1-0242ac11000c STEP: Creating a pod to test consume configMaps Aug 8 10:59:11.775: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-30d81f28-d966-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-projected-bqvrz" to be "success or failure" Aug 8 10:59:11.780: INFO: Pod "pod-projected-configmaps-30d81f28-d966-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016915ms Aug 8 10:59:13.783: INFO: Pod "pod-projected-configmaps-30d81f28-d966-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007615191s Aug 8 10:59:15.802: INFO: Pod "pod-projected-configmaps-30d81f28-d966-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026378901s Aug 8 10:59:17.838: INFO: Pod "pod-projected-configmaps-30d81f28-d966-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.062673792s STEP: Saw pod success Aug 8 10:59:17.838: INFO: Pod "pod-projected-configmaps-30d81f28-d966-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 10:59:17.841: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-30d81f28-d966-11ea-aaa1-0242ac11000c container projected-configmap-volume-test: STEP: delete the pod Aug 8 10:59:18.049: INFO: Waiting for pod pod-projected-configmaps-30d81f28-d966-11ea-aaa1-0242ac11000c to disappear Aug 8 10:59:18.071: INFO: Pod pod-projected-configmaps-30d81f28-d966-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 10:59:18.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bqvrz" for this suite. Aug 8 10:59:24.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 10:59:24.139: INFO: namespace: e2e-tests-projected-bqvrz, resource: bindings, ignored listing per whitelist Aug 8 10:59:24.161: INFO: namespace e2e-tests-projected-bqvrz deletion completed in 6.086147642s • [SLOW TEST:12.563 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 10:59:24.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-385a9fab-d966-11ea-aaa1-0242ac11000c STEP: Creating secret with name s-test-opt-upd-385aa017-d966-11ea-aaa1-0242ac11000c STEP: Creating the pod STEP: Deleting secret s-test-opt-del-385a9fab-d966-11ea-aaa1-0242ac11000c STEP: Updating secret s-test-opt-upd-385aa017-d966-11ea-aaa1-0242ac11000c STEP: Creating secret with name s-test-opt-create-385aa050-d966-11ea-aaa1-0242ac11000c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:01:00.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-jn4np" for this suite. 
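The optional-updates test above relies on a projected volume whose secret sources are marked optional, so the pod starts and keeps running while one source secret is deleted and another is created afterwards, with the mounted contents updating in place. A rough sketch with illustrative names (the test's generated names are abbreviated here):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo   # illustrative name
spec:
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-vol
      mountPath: /etc/projected
  volumes:
  - name: projected-vol
    projected:
      sources:
      - secret:
          name: s-test-opt-del      # may be deleted later; pod keeps running
          optional: true
      - secret:
          name: s-test-opt-create   # may not exist yet; appears once created
          optional: true
EOF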
Aug 8 11:01:26.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:01:26.557: INFO: namespace: e2e-tests-projected-jn4np, resource: bindings, ignored listing per whitelist Aug 8 11:01:26.682: INFO: namespace e2e-tests-projected-jn4np deletion completed in 26.159041s • [SLOW TEST:122.521 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:01:26.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-815a9f93-d966-11ea-aaa1-0242ac11000c STEP: Creating a pod to test consume configMaps Aug 8 11:01:26.857: INFO: Waiting up to 5m0s for pod "pod-configmaps-815c71e5-d966-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-configmap-2cs2c" to be "success or failure" Aug 8 11:01:26.873: INFO: Pod "pod-configmaps-815c71e5-d966-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.102455ms Aug 8 11:01:28.888: INFO: Pod "pod-configmaps-815c71e5-d966-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030883645s Aug 8 11:01:30.936: INFO: Pod "pod-configmaps-815c71e5-d966-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078700313s STEP: Saw pod success Aug 8 11:01:30.936: INFO: Pod "pod-configmaps-815c71e5-d966-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:01:30.939: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-815c71e5-d966-11ea-aaa1-0242ac11000c container configmap-volume-test: STEP: delete the pod Aug 8 11:01:32.028: INFO: Waiting for pod pod-configmaps-815c71e5-d966-11ea-aaa1-0242ac11000c to disappear Aug 8 11:01:32.053: INFO: Pod pod-configmaps-815c71e5-d966-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:01:32.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-2cs2c" for this suite. 
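The multiple-volumes case above amounts to mounting the same ConfigMap through two separate volumes of one pod and reading the same key from both mount points. A minimal sketch with illustrative names:

kubectl create configmap demo-config --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-multi-volume-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: demo-config
  - name: configmap-volume-2
    configMap:
      name: demo-config
EOF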
Aug 8 11:01:38.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:01:38.105: INFO: namespace: e2e-tests-configmap-2cs2c, resource: bindings, ignored listing per whitelist Aug 8 11:01:38.161: INFO: namespace e2e-tests-configmap-2cs2c deletion completed in 6.101433063s • [SLOW TEST:11.479 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:01:38.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 8 11:01:38.259: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8828893f-d966-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-downward-api-jstfv" to be "success or failure" Aug 8 11:01:38.263: INFO: Pod "downwardapi-volume-8828893f-d966-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.992009ms Aug 8 11:01:40.266: INFO: Pod "downwardapi-volume-8828893f-d966-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007436281s Aug 8 11:01:42.269: INFO: Pod "downwardapi-volume-8828893f-d966-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010416255s STEP: Saw pod success Aug 8 11:01:42.270: INFO: Pod "downwardapi-volume-8828893f-d966-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:01:42.272: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-8828893f-d966-11ea-aaa1-0242ac11000c container client-container: STEP: delete the pod Aug 8 11:01:42.315: INFO: Waiting for pod downwardapi-volume-8828893f-d966-11ea-aaa1-0242ac11000c to disappear Aug 8 11:01:42.359: INFO: Pod downwardapi-volume-8828893f-d966-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:01:42.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-jstfv" for this suite. 
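The memory-limit test above uses a downwardAPI volume item whose resourceFieldRef points at the container's limits.memory, so the limit is readable as a file inside the container. A minimal sketch with illustrative names and an arbitrary limit:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-limit-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"   # illustrative limit
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF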
Aug 8 11:01:48.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:01:48.421: INFO: namespace: e2e-tests-downward-api-jstfv, resource: bindings, ignored listing per whitelist Aug 8 11:01:48.453: INFO: namespace e2e-tests-downward-api-jstfv deletion completed in 6.09008802s • [SLOW TEST:10.292 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:01:48.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Aug 8 11:01:48.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wp7bd' Aug 8 11:01:51.972: INFO: stderr: "" Aug 8 11:01:51.972: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 8 11:01:51.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wp7bd' Aug 8 11:01:52.101: INFO: stderr: "" Aug 8 11:01:52.101: INFO: stdout: "update-demo-nautilus-8tk6r update-demo-nautilus-bmrhq " Aug 8 11:01:52.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8tk6r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wp7bd' Aug 8 11:01:52.211: INFO: stderr: "" Aug 8 11:01:52.211: INFO: stdout: "" Aug 8 11:01:52.211: INFO: update-demo-nautilus-8tk6r is created but not running Aug 8 11:01:57.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wp7bd' Aug 8 11:01:57.354: INFO: stderr: "" Aug 8 11:01:57.355: INFO: stdout: "update-demo-nautilus-8tk6r update-demo-nautilus-bmrhq " Aug 8 11:01:57.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8tk6r -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wp7bd' Aug 8 11:01:57.474: INFO: stderr: "" Aug 8 11:01:57.474: INFO: stdout: "" Aug 8 11:01:57.475: INFO: update-demo-nautilus-8tk6r is created but not running Aug 8 11:02:02.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wp7bd' Aug 8 11:02:02.589: INFO: stderr: "" Aug 8 11:02:02.589: INFO: stdout: "update-demo-nautilus-8tk6r update-demo-nautilus-bmrhq " Aug 8 11:02:02.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8tk6r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wp7bd' Aug 8 11:02:02.692: INFO: stderr: "" Aug 8 11:02:02.692: INFO: stdout: "true" Aug 8 11:02:02.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8tk6r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wp7bd' Aug 8 11:02:02.793: INFO: stderr: "" Aug 8 11:02:02.793: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 8 11:02:02.793: INFO: validating pod update-demo-nautilus-8tk6r Aug 8 11:02:02.797: INFO: got data: { "image": "nautilus.jpg" } Aug 8 11:02:02.797: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 8 11:02:02.797: INFO: update-demo-nautilus-8tk6r is verified up and running Aug 8 11:02:02.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bmrhq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wp7bd' Aug 8 11:02:02.891: INFO: stderr: "" Aug 8 11:02:02.891: INFO: stdout: "true" Aug 8 11:02:02.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bmrhq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wp7bd' Aug 8 11:02:03.014: INFO: stderr: "" Aug 8 11:02:03.014: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 8 11:02:03.014: INFO: validating pod update-demo-nautilus-bmrhq Aug 8 11:02:03.019: INFO: got data: { "image": "nautilus.jpg" } Aug 8 11:02:03.019: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 8 11:02:03.019: INFO: update-demo-nautilus-bmrhq is verified up and running STEP: using delete to clean up resources Aug 8 11:02:03.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wp7bd' Aug 8 11:02:03.118: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 8 11:02:03.118: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Aug 8 11:02:03.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-wp7bd' Aug 8 11:02:03.222: INFO: stderr: "No resources found.\n" Aug 8 11:02:03.222: INFO: stdout: "" Aug 8 11:02:03.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-wp7bd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 8 11:02:03.622: INFO: stderr: "" Aug 8 11:02:03.622: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:02:03.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-wp7bd" for this suite. Aug 8 11:02:09.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:02:09.900: INFO: namespace: e2e-tests-kubectl-wp7bd, resource: bindings, ignored listing per whitelist Aug 8 11:02:09.947: INFO: namespace e2e-tests-kubectl-wp7bd deletion completed in 6.320829391s • [SLOW TEST:21.493 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:02:09.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Aug 8 11:02:10.112: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-94bjm,SelfLink:/api/v1/namespaces/e2e-tests-watch-94bjm/configmaps/e2e-watch-test-watch-closed,UID:9b201c1b-d966-11ea-b2c9-0242ac120008,ResourceVersion:5156060,Generation:0,CreationTimestamp:2020-08-08 11:02:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 8 
11:02:10.113: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-94bjm,SelfLink:/api/v1/namespaces/e2e-tests-watch-94bjm/configmaps/e2e-watch-test-watch-closed,UID:9b201c1b-d966-11ea-b2c9-0242ac120008,ResourceVersion:5156061,Generation:0,CreationTimestamp:2020-08-08 11:02:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Aug 8 11:02:10.175: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-94bjm,SelfLink:/api/v1/namespaces/e2e-tests-watch-94bjm/configmaps/e2e-watch-test-watch-closed,UID:9b201c1b-d966-11ea-b2c9-0242ac120008,ResourceVersion:5156062,Generation:0,CreationTimestamp:2020-08-08 11:02:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 8 11:02:10.175: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-94bjm,SelfLink:/api/v1/namespaces/e2e-tests-watch-94bjm/configmaps/e2e-watch-test-watch-closed,UID:9b201c1b-d966-11ea-b2c9-0242ac120008,ResourceVersion:5156064,Generation:0,CreationTimestamp:2020-08-08 11:02:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:02:10.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-94bjm" for this suite. 
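The watch-restart behaviour exercised above (resume from the last resourceVersion so no MODIFIED/DELETED events are missed while the first watch was closed) can be reproduced against the raw API. The sketch below uses kubectl proxy plus curl; the namespace and the resourceVersion value are illustrative, the latter taken from the events logged above:

# In one shell, proxy the API server locally:
kubectl proxy --port=8001 &

# Open a watch on configmaps and note the resourceVersion of the last event received:
curl "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=true"

# After the watch is closed, resume from that resourceVersion; events that
# occurred in the meantime are replayed before new ones are delivered:
curl "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=5156061"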
Aug 8 11:02:16.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:02:16.321: INFO: namespace: e2e-tests-watch-94bjm, resource: bindings, ignored listing per whitelist Aug 8 11:02:16.335: INFO: namespace e2e-tests-watch-94bjm deletion completed in 6.140495358s • [SLOW TEST:6.388 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:02:16.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0808 11:02:17.519289 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 8 11:02:17.519: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:02:17.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-b545d" for this suite. 
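The non-orphaning case verified above is the default cascading delete: removing a Deployment also lets the garbage collector remove the ReplicaSet and Pods it owns. Orphaning them instead requires an explicit cascade policy. A sketch with an illustrative deployment name; the flag form depends on the kubectl version:

# Default: the owned ReplicaSet and Pods are garbage-collected along with the Deployment.
kubectl delete deployment demo-deployment

# Orphan the dependents instead of deleting them:
kubectl delete deployment demo-deployment --cascade=orphan   # newer kubectl (1.20+)
kubectl delete deployment demo-deployment --cascade=false    # older kubectl, e.g. the 1.13-era client in this run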
Aug 8 11:02:23.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:02:23.752: INFO: namespace: e2e-tests-gc-b545d, resource: bindings, ignored listing per whitelist Aug 8 11:02:23.761: INFO: namespace e2e-tests-gc-b545d deletion completed in 6.240187065s • [SLOW TEST:7.426 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:02:23.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Aug 8 11:02:24.602: INFO: Pod name wrapped-volume-race-a3c4e94f-d966-11ea-aaa1-0242ac11000c: Found 0 pods out of 5 Aug 8 11:02:29.611: INFO: Pod name wrapped-volume-race-a3c4e94f-d966-11ea-aaa1-0242ac11000c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-a3c4e94f-d966-11ea-aaa1-0242ac11000c in namespace e2e-tests-emptydir-wrapper-kqnpx, will wait for the garbage collector to delete the pods Aug 8 11:04:31.789: INFO: Deleting ReplicationController wrapped-volume-race-a3c4e94f-d966-11ea-aaa1-0242ac11000c took: 89.521015ms Aug 8 11:04:31.889: INFO: Terminating ReplicationController wrapped-volume-race-a3c4e94f-d966-11ea-aaa1-0242ac11000c pods took: 100.230968ms STEP: Creating RC which spawns configmap-volume pods Aug 8 11:05:18.234: INFO: Pod name wrapped-volume-race-0b31e950-d967-11ea-aaa1-0242ac11000c: Found 0 pods out of 5 Aug 8 11:05:23.247: INFO: Pod name wrapped-volume-race-0b31e950-d967-11ea-aaa1-0242ac11000c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-0b31e950-d967-11ea-aaa1-0242ac11000c in namespace e2e-tests-emptydir-wrapper-kqnpx, will wait for the garbage collector to delete the pods Aug 8 11:07:59.328: INFO: Deleting ReplicationController wrapped-volume-race-0b31e950-d967-11ea-aaa1-0242ac11000c took: 7.524095ms Aug 8 11:07:59.429: INFO: Terminating ReplicationController wrapped-volume-race-0b31e950-d967-11ea-aaa1-0242ac11000c pods took: 100.227317ms STEP: Creating RC which spawns configmap-volume pods Aug 8 11:08:38.804: INFO: Pod name wrapped-volume-race-82b9c5c7-d967-11ea-aaa1-0242ac11000c: Found 0 pods out of 5 Aug 8 11:08:43.813: INFO: Pod name wrapped-volume-race-82b9c5c7-d967-11ea-aaa1-0242ac11000c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-82b9c5c7-d967-11ea-aaa1-0242ac11000c in namespace 
e2e-tests-emptydir-wrapper-kqnpx, will wait for the garbage collector to delete the pods Aug 8 11:11:22.213: INFO: Deleting ReplicationController wrapped-volume-race-82b9c5c7-d967-11ea-aaa1-0242ac11000c took: 7.751035ms Aug 8 11:11:22.313: INFO: Terminating ReplicationController wrapped-volume-race-82b9c5c7-d967-11ea-aaa1-0242ac11000c pods took: 100.211132ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:12:08.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-kqnpx" for this suite. Aug 8 11:12:24.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:12:24.632: INFO: namespace: e2e-tests-emptydir-wrapper-kqnpx, resource: bindings, ignored listing per whitelist Aug 8 11:12:24.681: INFO: namespace e2e-tests-emptydir-wrapper-kqnpx deletion completed in 16.107400417s • [SLOW TEST:600.920 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:12:24.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-09866046-d968-11ea-aaa1-0242ac11000c STEP: Creating a pod to test consume configMaps Aug 8 11:12:24.807: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0986fc6f-d968-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-projected-zjdcn" to be "success or failure" Aug 8 11:12:24.819: INFO: Pod "pod-projected-configmaps-0986fc6f-d968-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.058793ms Aug 8 11:12:26.822: INFO: Pod "pod-projected-configmaps-0986fc6f-d968-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015133665s Aug 8 11:12:28.826: INFO: Pod "pod-projected-configmaps-0986fc6f-d968-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018847499s Aug 8 11:12:30.830: INFO: Pod "pod-projected-configmaps-0986fc6f-d968-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.023114364s Aug 8 11:12:32.834: INFO: Pod "pod-projected-configmaps-0986fc6f-d968-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.027240079s STEP: Saw pod success Aug 8 11:12:32.834: INFO: Pod "pod-projected-configmaps-0986fc6f-d968-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:12:32.836: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-0986fc6f-d968-11ea-aaa1-0242ac11000c container projected-configmap-volume-test: STEP: delete the pod Aug 8 11:12:32.863: INFO: Waiting for pod pod-projected-configmaps-0986fc6f-d968-11ea-aaa1-0242ac11000c to disappear Aug 8 11:12:32.873: INFO: Pod pod-projected-configmaps-0986fc6f-d968-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:12:32.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zjdcn" for this suite. Aug 8 11:12:38.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:12:38.930: INFO: namespace: e2e-tests-projected-zjdcn, resource: bindings, ignored listing per whitelist Aug 8 11:12:38.990: INFO: namespace e2e-tests-projected-zjdcn deletion completed in 6.114257545s • [SLOW TEST:14.309 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:12:38.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Aug 8 11:12:43.978: INFO: Successfully updated pod "labelsupdate122dd68b-d968-11ea-aaa1-0242ac11000c" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:12:46.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-n8c52" for this suite. 
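The labels-update test above mounts metadata.labels through a downwardAPI volume; the kubelet refreshes the mounted file when the pod's labels change, without restarting the container. A minimal sketch with illustrative names, followed by the label change that would show up in the mounted file:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labels-update-demo   # illustrative name
  labels:
    key1: value1
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF

# Changing the pod's labels is reflected in /etc/podinfo/labels after a short delay:
kubectl label pod labels-update-demo key1=value2 --overwrite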
Aug 8 11:13:08.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:13:08.296: INFO: namespace: e2e-tests-downward-api-n8c52, resource: bindings, ignored listing per whitelist Aug 8 11:13:08.338: INFO: namespace e2e-tests-downward-api-n8c52 deletion completed in 22.107987653s • [SLOW TEST:29.348 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:13:08.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Aug 8 11:13:12.514: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-238a27c0-d968-11ea-aaa1-0242ac11000c", GenerateName:"", Namespace:"e2e-tests-pods-qf77n", SelfLink:"/api/v1/namespaces/e2e-tests-pods-qf77n/pods/pod-submit-remove-238a27c0-d968-11ea-aaa1-0242ac11000c", UID:"238b79a6-d968-11ea-b2c9-0242ac120008", ResourceVersion:"5158285", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63732481988, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"430448245"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-nkpll", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0010493c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), 
Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nkpll", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001ddbf28), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0019a1560), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001ddbf70)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001ddbf90)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001ddbf98), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001ddbf9c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732481988, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732481992, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732481992, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732481988, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", 
NominatedNodeName:"", HostIP:"172.18.0.2", PodIP:"10.244.1.222", StartTime:(*v1.Time)(0xc000b95200), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000b95220), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://2e99cd09eeebac72eb1114550d10374efe425b022e1c349a25d78a25a82d7574"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:13:27.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-qf77n" for this suite. Aug 8 11:13:35.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:13:35.533: INFO: namespace: e2e-tests-pods-qf77n, resource: bindings, ignored listing per whitelist Aug 8 11:13:35.533: INFO: namespace e2e-tests-pods-qf77n deletion completed in 8.082715358s • [SLOW TEST:27.194 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:13:35.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Aug 8 11:13:35.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-2dqfz run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Aug 8 11:13:42.600: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0808 11:13:42.527821 434 log.go:172] (0xc0008380b0) (0xc000862500) Create stream\nI0808 11:13:42.527853 434 log.go:172] (0xc0008380b0) (0xc000862500) Stream added, broadcasting: 1\nI0808 11:13:42.531507 434 log.go:172] (0xc0008380b0) Reply frame received for 1\nI0808 11:13:42.531565 434 log.go:172] (0xc0008380b0) (0xc000654960) Create stream\nI0808 11:13:42.531596 434 log.go:172] (0xc0008380b0) (0xc000654960) Stream added, broadcasting: 3\nI0808 11:13:42.532460 434 log.go:172] (0xc0008380b0) Reply frame received for 3\nI0808 11:13:42.532496 434 log.go:172] (0xc0008380b0) (0xc000a240a0) Create stream\nI0808 11:13:42.532507 434 log.go:172] (0xc0008380b0) (0xc000a240a0) Stream added, broadcasting: 5\nI0808 11:13:42.533444 434 log.go:172] (0xc0008380b0) Reply frame received for 5\nI0808 11:13:42.533471 434 log.go:172] (0xc0008380b0) (0xc0008620a0) Create stream\nI0808 11:13:42.533479 434 log.go:172] (0xc0008380b0) (0xc0008620a0) Stream added, broadcasting: 7\nI0808 11:13:42.534404 434 log.go:172] (0xc0008380b0) Reply frame received for 7\nI0808 11:13:42.534553 434 log.go:172] (0xc000654960) (3) Writing data frame\nI0808 11:13:42.534662 434 log.go:172] (0xc000654960) (3) Writing data frame\nI0808 11:13:42.535343 434 log.go:172] (0xc0008380b0) Data frame received for 5\nI0808 11:13:42.535366 434 log.go:172] (0xc000a240a0) (5) Data frame handling\nI0808 11:13:42.535378 434 log.go:172] (0xc000a240a0) (5) Data frame sent\nI0808 11:13:42.535907 434 log.go:172] (0xc0008380b0) Data frame received for 5\nI0808 11:13:42.535923 434 log.go:172] (0xc000a240a0) (5) Data frame handling\nI0808 11:13:42.535935 434 log.go:172] (0xc000a240a0) (5) Data frame sent\nI0808 11:13:42.580219 434 log.go:172] (0xc0008380b0) Data frame received for 5\nI0808 11:13:42.580264 434 log.go:172] (0xc000a240a0) (5) Data frame handling\nI0808 11:13:42.580346 434 log.go:172] (0xc0008380b0) Data frame received for 7\nI0808 11:13:42.580385 434 log.go:172] (0xc0008620a0) (7) Data frame handling\nI0808 11:13:42.580616 434 log.go:172] (0xc0008380b0) Data frame received for 1\nI0808 11:13:42.580683 434 log.go:172] (0xc0008380b0) (0xc000654960) Stream removed, broadcasting: 3\nI0808 11:13:42.580903 434 log.go:172] (0xc000862500) (1) Data frame handling\nI0808 11:13:42.580933 434 log.go:172] (0xc000862500) (1) Data frame sent\nI0808 11:13:42.580956 434 log.go:172] (0xc0008380b0) (0xc000862500) Stream removed, broadcasting: 1\nI0808 11:13:42.580985 434 log.go:172] (0xc0008380b0) Go away received\nI0808 11:13:42.581129 434 log.go:172] (0xc0008380b0) (0xc000862500) Stream removed, broadcasting: 1\nI0808 11:13:42.581165 434 log.go:172] (0xc0008380b0) (0xc000654960) Stream removed, broadcasting: 3\nI0808 11:13:42.581182 434 log.go:172] (0xc0008380b0) (0xc000a240a0) Stream removed, broadcasting: 5\nI0808 11:13:42.581213 434 log.go:172] (0xc0008380b0) (0xc0008620a0) Stream removed, broadcasting: 7\n" Aug 8 11:13:42.600: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:13:44.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-2dqfz" for this suite. 
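The --rm/--attach invocation the test issues above can be reproduced by hand. A hedged sketch using the same (deprecated) job/v1 generator that this kubectl release still accepts; newer clients drop --generator, and kubectl create job is the closest replacement there, albeit without --rm or --attach:

kubectl run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 \
  --rm=true --generator=job/v1 --restart=OnFailure \
  --attach=true --stdin -- sh -c 'cat && echo "stdin closed"'
# anything typed is echoed back by cat; closing stdin (Ctrl-D) prints "stdin closed",
# and --rm deletes the Job as soon as the attached session ends
kubectl get jobs                       # e2e-test-rm-busybox-job should no longer be listed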
Aug 8 11:13:50.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:13:50.684: INFO: namespace: e2e-tests-kubectl-2dqfz, resource: bindings, ignored listing per whitelist Aug 8 11:13:50.698: INFO: namespace e2e-tests-kubectl-2dqfz deletion completed in 6.090595862s • [SLOW TEST:15.165 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:13:50.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 8 11:13:50.853: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3cd27768-d968-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-projected-xs5sf" to be "success or failure" Aug 8 11:13:51.007: INFO: Pod "downwardapi-volume-3cd27768-d968-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 153.721602ms Aug 8 11:13:53.012: INFO: Pod "downwardapi-volume-3cd27768-d968-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158313718s Aug 8 11:13:55.016: INFO: Pod "downwardapi-volume-3cd27768-d968-11ea-aaa1-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 4.162861427s Aug 8 11:13:57.021: INFO: Pod "downwardapi-volume-3cd27768-d968-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.167414438s STEP: Saw pod success Aug 8 11:13:57.021: INFO: Pod "downwardapi-volume-3cd27768-d968-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:13:57.024: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-3cd27768-d968-11ea-aaa1-0242ac11000c container client-container: STEP: delete the pod Aug 8 11:13:57.061: INFO: Waiting for pod downwardapi-volume-3cd27768-d968-11ea-aaa1-0242ac11000c to disappear Aug 8 11:13:57.072: INFO: Pod downwardapi-volume-3cd27768-d968-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:13:57.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xs5sf" for this suite. 
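The mode-on-item spec above exercises per-file permissions in a projected downward API volume. A minimal sketch of the same idea, with hypothetical names (downwardapi-mode-demo, /etc/podinfo/podname):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -lL /etc/podinfo/ && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            mode: 0400                 # per-item file mode; ls -lL should show it on the resolved file
            fieldRef:
              fieldPath: metadata.name
EOF
kubectl logs downwardapi-mode-demo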
Aug 8 11:14:03.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:14:03.142: INFO: namespace: e2e-tests-projected-xs5sf, resource: bindings, ignored listing per whitelist Aug 8 11:14:03.151: INFO: namespace e2e-tests-projected-xs5sf deletion completed in 6.075018818s • [SLOW TEST:12.452 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:14:03.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Aug 8 11:14:07.844: INFO: Successfully updated pod "pod-update-activedeadlineseconds-443fce76-d968-11ea-aaa1-0242ac11000c" Aug 8 11:14:07.844: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-443fce76-d968-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-pods-8ctnh" to be "terminated due to deadline exceeded" Aug 8 11:14:07.905: INFO: Pod "pod-update-activedeadlineseconds-443fce76-d968-11ea-aaa1-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 60.942157ms Aug 8 11:14:09.908: INFO: Pod "pod-update-activedeadlineseconds-443fce76-d968-11ea-aaa1-0242ac11000c": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.064467551s Aug 8 11:14:09.908: INFO: Pod "pod-update-activedeadlineseconds-443fce76-d968-11ea-aaa1-0242ac11000c" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:14:09.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-8ctnh" for this suite. 
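The activeDeadlineSeconds spec above relies on the fact that spec.activeDeadlineSeconds is one of the few pod fields that may be set (or shortened) after creation; once the deadline passes, the kubelet kills the pod and it ends up Failed with reason DeadlineExceeded. A rough kubectl sketch with a hypothetical pod name:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: activedeadline-demo            # hypothetical name
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
EOF
# once the pod is Running, add a short deadline and watch it fail:
kubectl patch pod activedeadline-demo -p '{"spec":{"activeDeadlineSeconds":5}}'
kubectl get pod activedeadline-demo -o jsonpath='{.status.phase} {.status.reason}'   # eventually: Failed DeadlineExceeded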
Aug 8 11:14:15.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:14:16.303: INFO: namespace: e2e-tests-pods-8ctnh, resource: bindings, ignored listing per whitelist Aug 8 11:14:16.335: INFO: namespace e2e-tests-pods-8ctnh deletion completed in 6.423855686s • [SLOW TEST:13.184 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:14:16.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 8 11:14:16.402: INFO: Waiting up to 5m0s for pod "pod-4c0cdfd6-d968-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-emptydir-76m7d" to be "success or failure" Aug 8 11:14:16.417: INFO: Pod "pod-4c0cdfd6-d968-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.616301ms Aug 8 11:14:18.426: INFO: Pod "pod-4c0cdfd6-d968-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02405087s Aug 8 11:14:20.429: INFO: Pod "pod-4c0cdfd6-d968-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027577743s STEP: Saw pod success Aug 8 11:14:20.430: INFO: Pod "pod-4c0cdfd6-d968-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:14:20.432: INFO: Trying to get logs from node hunter-worker2 pod pod-4c0cdfd6-d968-11ea-aaa1-0242ac11000c container test-container: STEP: delete the pod Aug 8 11:14:20.476: INFO: Waiting for pod pod-4c0cdfd6-d968-11ea-aaa1-0242ac11000c to disappear Aug 8 11:14:20.575: INFO: Pod pod-4c0cdfd6-d968-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:14:20.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-76m7d" for this suite. 
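The (root,0666,tmpfs) spec above boils down to an emptyDir volume with medium Memory, which the kubelet backs with tmpfs, plus a test container that writes a file and checks its mode. A hedged sketch with hypothetical names (emptydir-tmpfs-demo, /mnt/tmpfs):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo hello > /mnt/tmpfs/f && chmod 0666 /mnt/tmpfs/f && ls -l /mnt/tmpfs/f && mount | grep /mnt/tmpfs"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/tmpfs
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                   # tmpfs-backed emptyDir
EOF
kubectl logs emptydir-tmpfs-demo       # shows the 0666 file and the tmpfs mount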
Aug 8 11:14:26.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:14:26.650: INFO: namespace: e2e-tests-emptydir-76m7d, resource: bindings, ignored listing per whitelist Aug 8 11:14:26.705: INFO: namespace e2e-tests-emptydir-76m7d deletion completed in 6.092040464s • [SLOW TEST:10.369 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:14:26.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 8 11:14:26.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-z5fqp' Aug 8 11:14:26.925: INFO: stderr: "" Aug 8 11:14:26.925: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Aug 8 11:14:31.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-z5fqp -o json' Aug 8 11:14:32.081: INFO: stderr: "" Aug 8 11:14:32.081: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-08-08T11:14:26Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-z5fqp\",\n \"resourceVersion\": \"5158569\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-z5fqp/pods/e2e-test-nginx-pod\",\n \"uid\": \"5250ebe7-d968-11ea-b2c9-0242ac120008\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-9hrc8\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": 
\"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-9hrc8\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-9hrc8\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-08T11:14:26Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-08T11:14:29Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-08T11:14:29Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-08T11:14:26Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://3b5efe9f0ff02657b2a4d1770a42d7b9bf3939d3cb1443aa3a2647c31706a9bb\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-08-08T11:14:29Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.4\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.61\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-08-08T11:14:26Z\"\n }\n}\n" STEP: replace the image in the pod Aug 8 11:14:32.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-z5fqp' Aug 8 11:14:32.374: INFO: stderr: "" Aug 8 11:14:32.374: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Aug 8 11:14:32.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-z5fqp' Aug 8 11:14:35.656: INFO: stderr: "" Aug 8 11:14:35.656: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:14:35.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-z5fqp" for this suite. 
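The replace spec above dumps the running pod, swaps the container image, and feeds the object back through kubectl replace, which works because a container's image is one of the few mutable pod fields. A rough equivalent using a sed swap on the YAML (the busybox pod will exit afterwards, which does not matter here since only .spec.containers[0].image is checked; on newer kubectl the --generator flag is gone and plain kubectl run creates a pod directly):

kubectl run e2e-test-nginx-pod --generator=run-pod/v1 \
  --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod
kubectl get pod e2e-test-nginx-pod -o yaml \
  | sed 's|docker.io/library/nginx:1.14-alpine|docker.io/library/busybox:1.29|' \
  | kubectl replace -f -
kubectl get pod e2e-test-nginx-pod -o jsonpath='{.spec.containers[0].image}'
kubectl delete pod e2e-test-nginx-pod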
Aug 8 11:14:41.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:14:41.711: INFO: namespace: e2e-tests-kubectl-z5fqp, resource: bindings, ignored listing per whitelist Aug 8 11:14:41.755: INFO: namespace e2e-tests-kubectl-z5fqp deletion completed in 6.089345332s • [SLOW TEST:15.050 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:14:41.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-7xfb8/configmap-test-5b39e538-d968-11ea-aaa1-0242ac11000c STEP: Creating a pod to test consume configMaps Aug 8 11:14:41.905: INFO: Waiting up to 5m0s for pod "pod-configmaps-5b406b71-d968-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-configmap-7xfb8" to be "success or failure" Aug 8 11:14:41.936: INFO: Pod "pod-configmaps-5b406b71-d968-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 30.484572ms Aug 8 11:14:44.032: INFO: Pod "pod-configmaps-5b406b71-d968-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126467937s Aug 8 11:14:46.109: INFO: Pod "pod-configmaps-5b406b71-d968-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.203932244s STEP: Saw pod success Aug 8 11:14:46.109: INFO: Pod "pod-configmaps-5b406b71-d968-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:14:46.113: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-5b406b71-d968-11ea-aaa1-0242ac11000c container env-test: STEP: delete the pod Aug 8 11:14:46.342: INFO: Waiting for pod pod-configmaps-5b406b71-d968-11ea-aaa1-0242ac11000c to disappear Aug 8 11:14:46.371: INFO: Pod pod-configmaps-5b406b71-d968-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:14:46.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-7xfb8" for this suite. 
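The ConfigMap-as-environment spec above uses the env valueFrom/configMapKeyRef path. A minimal sketch with hypothetical names (configmap-test, data-1, CONFIG_DATA_1):

kubectl create configmap configmap-test --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo             # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env | grep CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
EOF
kubectl logs configmap-env-demo        # expected: CONFIG_DATA_1=value-1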
Aug 8 11:14:52.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:14:52.550: INFO: namespace: e2e-tests-configmap-7xfb8, resource: bindings, ignored listing per whitelist Aug 8 11:14:52.558: INFO: namespace e2e-tests-configmap-7xfb8 deletion completed in 6.18336769s • [SLOW TEST:10.803 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:14:52.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-52plq [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-52plq STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-52plq Aug 8 11:14:52.714: INFO: Found 0 stateful pods, waiting for 1 Aug 8 11:15:02.719: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Aug 8 11:15:02.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-52plq ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 8 11:15:03.020: INFO: stderr: "I0808 11:15:02.861312 550 log.go:172] (0xc0007642c0) (0xc0005a2640) Create stream\nI0808 11:15:02.861383 550 log.go:172] (0xc0007642c0) (0xc0005a2640) Stream added, broadcasting: 1\nI0808 11:15:02.864208 550 log.go:172] (0xc0007642c0) Reply frame received for 1\nI0808 11:15:02.864269 550 log.go:172] (0xc0007642c0) (0xc0004dcb40) Create stream\nI0808 11:15:02.864293 550 log.go:172] (0xc0007642c0) (0xc0004dcb40) Stream added, broadcasting: 3\nI0808 11:15:02.865609 550 log.go:172] (0xc0007642c0) Reply frame received for 3\nI0808 11:15:02.865643 550 log.go:172] (0xc0007642c0) (0xc0005a26e0) Create stream\nI0808 11:15:02.865668 550 log.go:172] (0xc0007642c0) (0xc0005a26e0) Stream added, broadcasting: 5\nI0808 11:15:02.866784 550 log.go:172] (0xc0007642c0) Reply frame received for 5\nI0808 11:15:03.012460 550 log.go:172] (0xc0007642c0) Data frame received for 5\nI0808 11:15:03.012505 550 
log.go:172] (0xc0005a26e0) (5) Data frame handling\nI0808 11:15:03.012534 550 log.go:172] (0xc0007642c0) Data frame received for 3\nI0808 11:15:03.012572 550 log.go:172] (0xc0004dcb40) (3) Data frame handling\nI0808 11:15:03.012584 550 log.go:172] (0xc0004dcb40) (3) Data frame sent\nI0808 11:15:03.013054 550 log.go:172] (0xc0007642c0) Data frame received for 3\nI0808 11:15:03.013090 550 log.go:172] (0xc0004dcb40) (3) Data frame handling\nI0808 11:15:03.014940 550 log.go:172] (0xc0007642c0) Data frame received for 1\nI0808 11:15:03.014959 550 log.go:172] (0xc0005a2640) (1) Data frame handling\nI0808 11:15:03.014975 550 log.go:172] (0xc0005a2640) (1) Data frame sent\nI0808 11:15:03.014996 550 log.go:172] (0xc0007642c0) (0xc0005a2640) Stream removed, broadcasting: 1\nI0808 11:15:03.015029 550 log.go:172] (0xc0007642c0) Go away received\nI0808 11:15:03.015164 550 log.go:172] (0xc0007642c0) (0xc0005a2640) Stream removed, broadcasting: 1\nI0808 11:15:03.015174 550 log.go:172] (0xc0007642c0) (0xc0004dcb40) Stream removed, broadcasting: 3\nI0808 11:15:03.015179 550 log.go:172] (0xc0007642c0) (0xc0005a26e0) Stream removed, broadcasting: 5\n" Aug 8 11:15:03.020: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 8 11:15:03.020: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 8 11:15:03.023: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 8 11:15:13.037: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 8 11:15:13.037: INFO: Waiting for statefulset status.replicas updated to 0 Aug 8 11:15:13.099: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999559s Aug 8 11:15:14.104: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.986340209s Aug 8 11:15:15.108: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.980985126s Aug 8 11:15:16.112: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.977193555s Aug 8 11:15:17.116: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.973273531s Aug 8 11:15:18.121: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.969434582s Aug 8 11:15:19.127: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.964424943s Aug 8 11:15:20.132: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.958075588s Aug 8 11:15:21.137: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.952985471s Aug 8 11:15:22.142: INFO: Verifying statefulset ss doesn't scale past 1 for another 948.235098ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-52plq Aug 8 11:15:23.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-52plq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 8 11:15:23.377: INFO: stderr: "I0808 11:15:23.291573 572 log.go:172] (0xc000154840) (0xc000750640) Create stream\nI0808 11:15:23.291658 572 log.go:172] (0xc000154840) (0xc000750640) Stream added, broadcasting: 1\nI0808 11:15:23.294176 572 log.go:172] (0xc000154840) Reply frame received for 1\nI0808 11:15:23.294240 572 log.go:172] (0xc000154840) (0xc0007506e0) Create stream\nI0808 11:15:23.294257 572 log.go:172] (0xc000154840) (0xc0007506e0) Stream added, broadcasting: 3\nI0808 11:15:23.295170 572 log.go:172] (0xc000154840) Reply 
frame received for 3\nI0808 11:15:23.295204 572 log.go:172] (0xc000154840) (0xc000750780) Create stream\nI0808 11:15:23.295217 572 log.go:172] (0xc000154840) (0xc000750780) Stream added, broadcasting: 5\nI0808 11:15:23.296354 572 log.go:172] (0xc000154840) Reply frame received for 5\nI0808 11:15:23.371026 572 log.go:172] (0xc000154840) Data frame received for 3\nI0808 11:15:23.371051 572 log.go:172] (0xc0007506e0) (3) Data frame handling\nI0808 11:15:23.371064 572 log.go:172] (0xc0007506e0) (3) Data frame sent\nI0808 11:15:23.371073 572 log.go:172] (0xc000154840) Data frame received for 3\nI0808 11:15:23.371078 572 log.go:172] (0xc0007506e0) (3) Data frame handling\nI0808 11:15:23.371265 572 log.go:172] (0xc000154840) Data frame received for 5\nI0808 11:15:23.371287 572 log.go:172] (0xc000750780) (5) Data frame handling\nI0808 11:15:23.372700 572 log.go:172] (0xc000154840) Data frame received for 1\nI0808 11:15:23.372718 572 log.go:172] (0xc000750640) (1) Data frame handling\nI0808 11:15:23.372791 572 log.go:172] (0xc000750640) (1) Data frame sent\nI0808 11:15:23.372805 572 log.go:172] (0xc000154840) (0xc000750640) Stream removed, broadcasting: 1\nI0808 11:15:23.372905 572 log.go:172] (0xc000154840) Go away received\nI0808 11:15:23.372957 572 log.go:172] (0xc000154840) (0xc000750640) Stream removed, broadcasting: 1\nI0808 11:15:23.372976 572 log.go:172] (0xc000154840) (0xc0007506e0) Stream removed, broadcasting: 3\nI0808 11:15:23.372984 572 log.go:172] (0xc000154840) (0xc000750780) Stream removed, broadcasting: 5\n" Aug 8 11:15:23.377: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 8 11:15:23.377: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 8 11:15:23.380: INFO: Found 1 stateful pods, waiting for 3 Aug 8 11:15:33.386: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 8 11:15:33.386: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 8 11:15:33.386: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Aug 8 11:15:33.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-52plq ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 8 11:15:33.639: INFO: stderr: "I0808 11:15:33.560415 594 log.go:172] (0xc00015c790) (0xc00077f540) Create stream\nI0808 11:15:33.560469 594 log.go:172] (0xc00015c790) (0xc00077f540) Stream added, broadcasting: 1\nI0808 11:15:33.562495 594 log.go:172] (0xc00015c790) Reply frame received for 1\nI0808 11:15:33.562529 594 log.go:172] (0xc00015c790) (0xc0002d0000) Create stream\nI0808 11:15:33.562540 594 log.go:172] (0xc00015c790) (0xc0002d0000) Stream added, broadcasting: 3\nI0808 11:15:33.563198 594 log.go:172] (0xc00015c790) Reply frame received for 3\nI0808 11:15:33.563231 594 log.go:172] (0xc00015c790) (0xc00077f5e0) Create stream\nI0808 11:15:33.563246 594 log.go:172] (0xc00015c790) (0xc00077f5e0) Stream added, broadcasting: 5\nI0808 11:15:33.564009 594 log.go:172] (0xc00015c790) Reply frame received for 5\nI0808 11:15:33.631970 594 log.go:172] (0xc00015c790) Data frame received for 5\nI0808 11:15:33.632024 594 log.go:172] (0xc00077f5e0) (5) Data frame handling\nI0808 11:15:33.632058 594 log.go:172] (0xc00015c790) Data frame 
received for 3\nI0808 11:15:33.632079 594 log.go:172] (0xc0002d0000) (3) Data frame handling\nI0808 11:15:33.632106 594 log.go:172] (0xc0002d0000) (3) Data frame sent\nI0808 11:15:33.632125 594 log.go:172] (0xc00015c790) Data frame received for 3\nI0808 11:15:33.632143 594 log.go:172] (0xc0002d0000) (3) Data frame handling\nI0808 11:15:33.633852 594 log.go:172] (0xc00015c790) Data frame received for 1\nI0808 11:15:33.633888 594 log.go:172] (0xc00077f540) (1) Data frame handling\nI0808 11:15:33.633921 594 log.go:172] (0xc00077f540) (1) Data frame sent\nI0808 11:15:33.633942 594 log.go:172] (0xc00015c790) (0xc00077f540) Stream removed, broadcasting: 1\nI0808 11:15:33.633972 594 log.go:172] (0xc00015c790) Go away received\nI0808 11:15:33.634368 594 log.go:172] (0xc00015c790) (0xc00077f540) Stream removed, broadcasting: 1\nI0808 11:15:33.634405 594 log.go:172] (0xc00015c790) (0xc0002d0000) Stream removed, broadcasting: 3\nI0808 11:15:33.634417 594 log.go:172] (0xc00015c790) (0xc00077f5e0) Stream removed, broadcasting: 5\n" Aug 8 11:15:33.639: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 8 11:15:33.639: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 8 11:15:33.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-52plq ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 8 11:15:33.866: INFO: stderr: "I0808 11:15:33.760618 617 log.go:172] (0xc0008322c0) (0xc00072c640) Create stream\nI0808 11:15:33.760690 617 log.go:172] (0xc0008322c0) (0xc00072c640) Stream added, broadcasting: 1\nI0808 11:15:33.763509 617 log.go:172] (0xc0008322c0) Reply frame received for 1\nI0808 11:15:33.763574 617 log.go:172] (0xc0008322c0) (0xc000684c80) Create stream\nI0808 11:15:33.763591 617 log.go:172] (0xc0008322c0) (0xc000684c80) Stream added, broadcasting: 3\nI0808 11:15:33.764894 617 log.go:172] (0xc0008322c0) Reply frame received for 3\nI0808 11:15:33.764952 617 log.go:172] (0xc0008322c0) (0xc0006ec000) Create stream\nI0808 11:15:33.764968 617 log.go:172] (0xc0008322c0) (0xc0006ec000) Stream added, broadcasting: 5\nI0808 11:15:33.765898 617 log.go:172] (0xc0008322c0) Reply frame received for 5\nI0808 11:15:33.858584 617 log.go:172] (0xc0008322c0) Data frame received for 3\nI0808 11:15:33.858638 617 log.go:172] (0xc000684c80) (3) Data frame handling\nI0808 11:15:33.858663 617 log.go:172] (0xc000684c80) (3) Data frame sent\nI0808 11:15:33.858679 617 log.go:172] (0xc0008322c0) Data frame received for 3\nI0808 11:15:33.858693 617 log.go:172] (0xc000684c80) (3) Data frame handling\nI0808 11:15:33.858799 617 log.go:172] (0xc0008322c0) Data frame received for 5\nI0808 11:15:33.858838 617 log.go:172] (0xc0006ec000) (5) Data frame handling\nI0808 11:15:33.861121 617 log.go:172] (0xc0008322c0) Data frame received for 1\nI0808 11:15:33.861156 617 log.go:172] (0xc00072c640) (1) Data frame handling\nI0808 11:15:33.861186 617 log.go:172] (0xc00072c640) (1) Data frame sent\nI0808 11:15:33.861338 617 log.go:172] (0xc0008322c0) (0xc00072c640) Stream removed, broadcasting: 1\nI0808 11:15:33.861390 617 log.go:172] (0xc0008322c0) Go away received\nI0808 11:15:33.861587 617 log.go:172] (0xc0008322c0) (0xc00072c640) Stream removed, broadcasting: 1\nI0808 11:15:33.861605 617 log.go:172] (0xc0008322c0) (0xc000684c80) Stream removed, broadcasting: 3\nI0808 11:15:33.861615 617 log.go:172] (0xc0008322c0) (0xc0006ec000) Stream removed, 
broadcasting: 5\n" Aug 8 11:15:33.866: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 8 11:15:33.866: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 8 11:15:33.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-52plq ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 8 11:15:34.140: INFO: stderr: "I0808 11:15:34.032034 639 log.go:172] (0xc00014c630) (0xc0005b34a0) Create stream\nI0808 11:15:34.032106 639 log.go:172] (0xc00014c630) (0xc0005b34a0) Stream added, broadcasting: 1\nI0808 11:15:34.034658 639 log.go:172] (0xc00014c630) Reply frame received for 1\nI0808 11:15:34.034714 639 log.go:172] (0xc00014c630) (0xc00054e000) Create stream\nI0808 11:15:34.034730 639 log.go:172] (0xc00014c630) (0xc00054e000) Stream added, broadcasting: 3\nI0808 11:15:34.035585 639 log.go:172] (0xc00014c630) Reply frame received for 3\nI0808 11:15:34.035645 639 log.go:172] (0xc00014c630) (0xc000356000) Create stream\nI0808 11:15:34.035666 639 log.go:172] (0xc00014c630) (0xc000356000) Stream added, broadcasting: 5\nI0808 11:15:34.036465 639 log.go:172] (0xc00014c630) Reply frame received for 5\nI0808 11:15:34.130833 639 log.go:172] (0xc00014c630) Data frame received for 5\nI0808 11:15:34.130873 639 log.go:172] (0xc000356000) (5) Data frame handling\nI0808 11:15:34.130962 639 log.go:172] (0xc00014c630) Data frame received for 3\nI0808 11:15:34.131035 639 log.go:172] (0xc00054e000) (3) Data frame handling\nI0808 11:15:34.131085 639 log.go:172] (0xc00054e000) (3) Data frame sent\nI0808 11:15:34.131109 639 log.go:172] (0xc00014c630) Data frame received for 3\nI0808 11:15:34.131128 639 log.go:172] (0xc00054e000) (3) Data frame handling\nI0808 11:15:34.133297 639 log.go:172] (0xc00014c630) Data frame received for 1\nI0808 11:15:34.133318 639 log.go:172] (0xc0005b34a0) (1) Data frame handling\nI0808 11:15:34.133336 639 log.go:172] (0xc0005b34a0) (1) Data frame sent\nI0808 11:15:34.133350 639 log.go:172] (0xc00014c630) (0xc0005b34a0) Stream removed, broadcasting: 1\nI0808 11:15:34.133365 639 log.go:172] (0xc00014c630) Go away received\nI0808 11:15:34.133732 639 log.go:172] (0xc00014c630) (0xc0005b34a0) Stream removed, broadcasting: 1\nI0808 11:15:34.133767 639 log.go:172] (0xc00014c630) (0xc00054e000) Stream removed, broadcasting: 3\nI0808 11:15:34.133783 639 log.go:172] (0xc00014c630) (0xc000356000) Stream removed, broadcasting: 5\n" Aug 8 11:15:34.140: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 8 11:15:34.140: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 8 11:15:34.140: INFO: Waiting for statefulset status.replicas updated to 0 Aug 8 11:15:34.143: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Aug 8 11:15:44.152: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 8 11:15:44.152: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 8 11:15:44.153: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 8 11:15:44.212: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999644s Aug 8 11:15:45.218: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.946825823s Aug 8 11:15:46.223: INFO: Verifying statefulset 
ss doesn't scale past 3 for another 7.94077452s Aug 8 11:15:47.228: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.935396s Aug 8 11:15:48.233: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.930971177s Aug 8 11:15:49.238: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.925766979s Aug 8 11:15:50.248: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.920479756s Aug 8 11:15:51.253: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.910711963s Aug 8 11:15:52.259: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.905442031s Aug 8 11:15:53.264: INFO: Verifying statefulset ss doesn't scale past 3 for another 899.356993ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-52plq Aug 8 11:15:54.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-52plq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 8 11:15:54.482: INFO: stderr: "I0808 11:15:54.407516 661 log.go:172] (0xc00015c840) (0xc0007e54a0) Create stream\nI0808 11:15:54.407570 661 log.go:172] (0xc00015c840) (0xc0007e54a0) Stream added, broadcasting: 1\nI0808 11:15:54.410215 661 log.go:172] (0xc00015c840) Reply frame received for 1\nI0808 11:15:54.410267 661 log.go:172] (0xc00015c840) (0xc0005a6000) Create stream\nI0808 11:15:54.410283 661 log.go:172] (0xc00015c840) (0xc0005a6000) Stream added, broadcasting: 3\nI0808 11:15:54.411509 661 log.go:172] (0xc00015c840) Reply frame received for 3\nI0808 11:15:54.411569 661 log.go:172] (0xc00015c840) (0xc0007e5540) Create stream\nI0808 11:15:54.411591 661 log.go:172] (0xc00015c840) (0xc0007e5540) Stream added, broadcasting: 5\nI0808 11:15:54.413232 661 log.go:172] (0xc00015c840) Reply frame received for 5\nI0808 11:15:54.475466 661 log.go:172] (0xc00015c840) Data frame received for 3\nI0808 11:15:54.475510 661 log.go:172] (0xc0005a6000) (3) Data frame handling\nI0808 11:15:54.475531 661 log.go:172] (0xc0005a6000) (3) Data frame sent\nI0808 11:15:54.475548 661 log.go:172] (0xc00015c840) Data frame received for 5\nI0808 11:15:54.475571 661 log.go:172] (0xc0007e5540) (5) Data frame handling\nI0808 11:15:54.475599 661 log.go:172] (0xc00015c840) Data frame received for 3\nI0808 11:15:54.475612 661 log.go:172] (0xc0005a6000) (3) Data frame handling\nI0808 11:15:54.476998 661 log.go:172] (0xc00015c840) Data frame received for 1\nI0808 11:15:54.477014 661 log.go:172] (0xc0007e54a0) (1) Data frame handling\nI0808 11:15:54.477026 661 log.go:172] (0xc0007e54a0) (1) Data frame sent\nI0808 11:15:54.477040 661 log.go:172] (0xc00015c840) (0xc0007e54a0) Stream removed, broadcasting: 1\nI0808 11:15:54.477165 661 log.go:172] (0xc00015c840) (0xc0007e54a0) Stream removed, broadcasting: 1\nI0808 11:15:54.477177 661 log.go:172] (0xc00015c840) (0xc0005a6000) Stream removed, broadcasting: 3\nI0808 11:15:54.477262 661 log.go:172] (0xc00015c840) Go away received\nI0808 11:15:54.477318 661 log.go:172] (0xc00015c840) (0xc0007e5540) Stream removed, broadcasting: 5\n" Aug 8 11:15:54.482: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 8 11:15:54.482: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 8 11:15:54.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-52plq ss-1 -- /bin/sh -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true' Aug 8 11:15:54.688: INFO: stderr: "I0808 11:15:54.604555 685 log.go:172] (0xc000138580) (0xc0001ad2c0) Create stream\nI0808 11:15:54.604612 685 log.go:172] (0xc000138580) (0xc0001ad2c0) Stream added, broadcasting: 1\nI0808 11:15:54.607483 685 log.go:172] (0xc000138580) Reply frame received for 1\nI0808 11:15:54.607516 685 log.go:172] (0xc000138580) (0xc0005c4000) Create stream\nI0808 11:15:54.607526 685 log.go:172] (0xc000138580) (0xc0005c4000) Stream added, broadcasting: 3\nI0808 11:15:54.608343 685 log.go:172] (0xc000138580) Reply frame received for 3\nI0808 11:15:54.608385 685 log.go:172] (0xc000138580) (0xc0001ad360) Create stream\nI0808 11:15:54.608393 685 log.go:172] (0xc000138580) (0xc0001ad360) Stream added, broadcasting: 5\nI0808 11:15:54.609469 685 log.go:172] (0xc000138580) Reply frame received for 5\nI0808 11:15:54.682245 685 log.go:172] (0xc000138580) Data frame received for 3\nI0808 11:15:54.682290 685 log.go:172] (0xc0005c4000) (3) Data frame handling\nI0808 11:15:54.682302 685 log.go:172] (0xc0005c4000) (3) Data frame sent\nI0808 11:15:54.682313 685 log.go:172] (0xc000138580) Data frame received for 3\nI0808 11:15:54.682331 685 log.go:172] (0xc0005c4000) (3) Data frame handling\nI0808 11:15:54.682377 685 log.go:172] (0xc000138580) Data frame received for 5\nI0808 11:15:54.682402 685 log.go:172] (0xc0001ad360) (5) Data frame handling\nI0808 11:15:54.683580 685 log.go:172] (0xc000138580) Data frame received for 1\nI0808 11:15:54.683601 685 log.go:172] (0xc0001ad2c0) (1) Data frame handling\nI0808 11:15:54.683615 685 log.go:172] (0xc0001ad2c0) (1) Data frame sent\nI0808 11:15:54.683636 685 log.go:172] (0xc000138580) (0xc0001ad2c0) Stream removed, broadcasting: 1\nI0808 11:15:54.683661 685 log.go:172] (0xc000138580) Go away received\nI0808 11:15:54.683890 685 log.go:172] (0xc000138580) (0xc0001ad2c0) Stream removed, broadcasting: 1\nI0808 11:15:54.683920 685 log.go:172] (0xc000138580) (0xc0005c4000) Stream removed, broadcasting: 3\nI0808 11:15:54.683940 685 log.go:172] (0xc000138580) (0xc0001ad360) Stream removed, broadcasting: 5\n" Aug 8 11:15:54.688: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 8 11:15:54.688: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 8 11:15:54.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-52plq ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 8 11:15:54.893: INFO: stderr: "I0808 11:15:54.798786 708 log.go:172] (0xc000138630) (0xc0005532c0) Create stream\nI0808 11:15:54.798854 708 log.go:172] (0xc000138630) (0xc0005532c0) Stream added, broadcasting: 1\nI0808 11:15:54.803104 708 log.go:172] (0xc000138630) Reply frame received for 1\nI0808 11:15:54.803147 708 log.go:172] (0xc000138630) (0xc000722000) Create stream\nI0808 11:15:54.803160 708 log.go:172] (0xc000138630) (0xc000722000) Stream added, broadcasting: 3\nI0808 11:15:54.805321 708 log.go:172] (0xc000138630) Reply frame received for 3\nI0808 11:15:54.805447 708 log.go:172] (0xc000138630) (0xc000346000) Create stream\nI0808 11:15:54.805515 708 log.go:172] (0xc000138630) (0xc000346000) Stream added, broadcasting: 5\nI0808 11:15:54.811226 708 log.go:172] (0xc000138630) Reply frame received for 5\nI0808 11:15:54.886525 708 log.go:172] (0xc000138630) Data frame received for 3\nI0808 11:15:54.886569 708 log.go:172] (0xc000722000) (3) Data 
frame handling\nI0808 11:15:54.886590 708 log.go:172] (0xc000722000) (3) Data frame sent\nI0808 11:15:54.886604 708 log.go:172] (0xc000138630) Data frame received for 3\nI0808 11:15:54.886614 708 log.go:172] (0xc000722000) (3) Data frame handling\nI0808 11:15:54.886738 708 log.go:172] (0xc000138630) Data frame received for 5\nI0808 11:15:54.886759 708 log.go:172] (0xc000346000) (5) Data frame handling\nI0808 11:15:54.888391 708 log.go:172] (0xc000138630) Data frame received for 1\nI0808 11:15:54.888410 708 log.go:172] (0xc0005532c0) (1) Data frame handling\nI0808 11:15:54.888423 708 log.go:172] (0xc0005532c0) (1) Data frame sent\nI0808 11:15:54.888435 708 log.go:172] (0xc000138630) (0xc0005532c0) Stream removed, broadcasting: 1\nI0808 11:15:54.888535 708 log.go:172] (0xc000138630) Go away received\nI0808 11:15:54.888626 708 log.go:172] (0xc000138630) (0xc0005532c0) Stream removed, broadcasting: 1\nI0808 11:15:54.888650 708 log.go:172] (0xc000138630) (0xc000722000) Stream removed, broadcasting: 3\nI0808 11:15:54.888658 708 log.go:172] (0xc000138630) (0xc000346000) Stream removed, broadcasting: 5\n" Aug 8 11:15:54.893: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 8 11:15:54.893: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 8 11:15:54.893: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Aug 8 11:16:24.968: INFO: Deleting all statefulset in ns e2e-tests-statefulset-52plq Aug 8 11:16:24.971: INFO: Scaling statefulset ss to 0 Aug 8 11:16:24.978: INFO: Waiting for statefulset status.replicas updated to 0 Aug 8 11:16:24.980: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:16:24.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-52plq" for this suite. 
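What the scaling spec above exercises can be retraced with kubectl, assuming (as in the spec) a StatefulSet named ss whose nginx pods have an httpGet readiness probe against the page that the mv trick hides; the names and the baz=blah,foo=bar selector are taken from the log, everything else is a sketch:

kubectl exec ss-0 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
kubectl get pod ss-0 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'    # False
kubectl scale statefulset ss --replicas=3     # with OrderedReady management, ss-1 is not created while ss-0 is unready
kubectl exec ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
kubectl get pods -l foo=bar -w                # ss-1 and then ss-2 come up strictly in ordinal order
kubectl scale statefulset ss --replicas=0     # tear-down runs in reverse order, ss-2 first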
Aug 8 11:16:31.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:16:31.282: INFO: namespace: e2e-tests-statefulset-52plq, resource: bindings, ignored listing per whitelist Aug 8 11:16:31.291: INFO: namespace e2e-tests-statefulset-52plq deletion completed in 6.294828627s • [SLOW TEST:98.733 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:16:31.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-9c8a4c6a-d968-11ea-aaa1-0242ac11000c STEP: Creating configMap with name cm-test-opt-upd-9c8a4cc4-d968-11ea-aaa1-0242ac11000c STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-9c8a4c6a-d968-11ea-aaa1-0242ac11000c STEP: Updating configmap cm-test-opt-upd-9c8a4cc4-d968-11ea-aaa1-0242ac11000c STEP: Creating configMap with name cm-test-opt-create-9c8a4ce4-d968-11ea-aaa1-0242ac11000c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:16:39.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8ptx2" for this suite. 
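The optional-updates spec above mounts several ConfigMaps through a single projected volume with optional: true, so the pod starts even when a source is missing and the mounted view follows creates, updates, and deletes after the kubelet's next sync. A minimal sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-optional-demo        # hypothetical name
spec:
  containers:
  - name: client
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do ls /etc/projected; sleep 5; done"]
    volumeMounts:
    - name: all-in-one
      mountPath: /etc/projected
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: cm-opt-del             # hypothetical; can be deleted after the pod starts
          optional: true
      - configMap:
          name: cm-opt-create          # hypothetical; may not exist yet
          optional: true
EOF
# creating, updating, or deleting the two ConfigMaps changes what appears under /etc/projected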
Aug 8 11:17:01.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:17:01.610: INFO: namespace: e2e-tests-projected-8ptx2, resource: bindings, ignored listing per whitelist Aug 8 11:17:01.648: INFO: namespace e2e-tests-projected-8ptx2 deletion completed in 22.100244497s • [SLOW TEST:30.357 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:17:01.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:17:31.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-6vp8p" for this suite. 
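The three containers above exercise the same "run a command that exits" pod under different restart policies; the -rpa/-rpof/-rpn suffixes appear to correspond to restartPolicy Always/OnFailure/Never, though the log itself does not spell that out. A minimal sketch of the Never variant, with placeholder names, image and command:

apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-demo
spec:
  restartPolicy: Never              # a non-zero exit leaves the pod in the Failed phase
  containers:
  - name: terminate-cmd
    image: busybox                  # placeholder image
    command: ["sh", "-c", "exit 1"] # exits immediately with a non-zero status

With restartPolicy Never the container is not retried, so RestartCount stays at 0, the phase ends up Failed, Ready is false and the container state is Terminated, which are the kinds of expectations listed in the STEP lines above.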
Aug 8 11:17:37.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:17:37.585: INFO: namespace: e2e-tests-container-runtime-6vp8p, resource: bindings, ignored listing per whitelist Aug 8 11:17:37.590: INFO: namespace e2e-tests-container-runtime-6vp8p deletion completed in 6.112911335s • [SLOW TEST:35.942 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:17:37.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 8 11:17:37.707: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c409aaab-d968-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-projected-lrr46" to be "success or failure" Aug 8 11:17:37.724: INFO: Pod "downwardapi-volume-c409aaab-d968-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 17.07146ms Aug 8 11:17:39.758: INFO: Pod "downwardapi-volume-c409aaab-d968-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051130305s Aug 8 11:17:41.762: INFO: Pod "downwardapi-volume-c409aaab-d968-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055175779s STEP: Saw pod success Aug 8 11:17:41.762: INFO: Pod "downwardapi-volume-c409aaab-d968-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:17:41.765: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-c409aaab-d968-11ea-aaa1-0242ac11000c container client-container: STEP: delete the pod Aug 8 11:17:41.931: INFO: Waiting for pod downwardapi-volume-c409aaab-d968-11ea-aaa1-0242ac11000c to disappear Aug 8 11:17:41.970: INFO: Pod downwardapi-volume-c409aaab-d968-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:17:41.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-lrr46" for this suite. 
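The pod above succeeds because a projected downwardAPI volume exposes the container's own memory limit as a file that the test container reads back. A minimal sketch of that wiring; the names, image and the 64Mi value are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit-demo
spec:
  containers:
  - name: client-container
    image: busybox                  # placeholder image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi                # the value that should show up in the file
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
              divisor: 1Mi          # report the limit in mebibytes

With divisor 1Mi the file contains 64; with the default divisor of 1 it would contain the limit in bytes.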
Aug 8 11:17:47.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:17:48.051: INFO: namespace: e2e-tests-projected-lrr46, resource: bindings, ignored listing per whitelist Aug 8 11:17:48.067: INFO: namespace e2e-tests-projected-lrr46 deletion completed in 6.092802747s • [SLOW TEST:10.476 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:17:48.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Aug 8 11:17:48.173: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:17:53.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-7slrl" for this suite. 
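This test relies on the interaction between init containers and restartPolicy Never: a failing init container is not retried, the app containers are never started, and the pod goes straight to Failed. A minimal sketch of that pattern, with placeholder names and images:

apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox                  # placeholder image
    command: ["sh", "-c", "exit 1"] # init container exits non-zero and is not retried
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo never reached && sleep 3600"]  # never started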
Aug 8 11:17:59.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:17:59.677: INFO: namespace: e2e-tests-init-container-7slrl, resource: bindings, ignored listing per whitelist Aug 8 11:17:59.691: INFO: namespace e2e-tests-init-container-7slrl deletion completed in 6.126546588s • [SLOW TEST:11.624 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:17:59.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Aug 8 11:17:59.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Aug 8 11:17:59.927: INFO: stderr: "" Aug 8 11:17:59.927: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45709\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45709/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:17:59.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-2dpw4" for this suite. 
Aug 8 11:18:05.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:18:06.042: INFO: namespace: e2e-tests-kubectl-2dpw4, resource: bindings, ignored listing per whitelist Aug 8 11:18:06.061: INFO: namespace e2e-tests-kubectl-2dpw4 deletion completed in 6.129314231s • [SLOW TEST:6.369 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:18:06.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components Aug 8 11:18:06.229: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Aug 8 11:18:06.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8gqc9' Aug 8 11:18:06.562: INFO: stderr: "" Aug 8 11:18:06.562: INFO: stdout: "service/redis-slave created\n" Aug 8 11:18:06.563: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Aug 8 11:18:06.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8gqc9' Aug 8 11:18:06.810: INFO: stderr: "" Aug 8 11:18:06.810: INFO: stdout: "service/redis-master created\n" Aug 8 11:18:06.810: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Aug 8 11:18:06.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8gqc9' Aug 8 11:18:07.118: INFO: stderr: "" Aug 8 11:18:07.118: INFO: stdout: "service/frontend created\n" Aug 8 11:18:07.118: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Aug 8 11:18:07.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8gqc9' Aug 8 11:18:07.463: INFO: stderr: "" Aug 8 11:18:07.463: INFO: stdout: "deployment.extensions/frontend created\n" Aug 8 11:18:07.464: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Aug 8 11:18:07.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8gqc9' Aug 8 11:18:07.829: INFO: stderr: "" Aug 8 11:18:07.829: INFO: stdout: "deployment.extensions/redis-master created\n" Aug 8 11:18:07.829: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Aug 8 11:18:07.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8gqc9' Aug 8 11:18:08.076: INFO: stderr: "" Aug 8 11:18:08.076: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app Aug 8 11:18:08.076: INFO: Waiting for all frontend pods to be Running. Aug 8 11:18:18.127: INFO: Waiting for frontend to serve content. Aug 8 11:18:18.145: INFO: Trying to add a new entry to the guestbook. Aug 8 11:18:18.159: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Aug 8 11:18:18.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8gqc9' Aug 8 11:18:18.301: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 8 11:18:18.301: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Aug 8 11:18:18.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8gqc9' Aug 8 11:18:18.506: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 8 11:18:18.506: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Aug 8 11:18:18.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8gqc9' Aug 8 11:18:18.681: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 8 11:18:18.681: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Aug 8 11:18:18.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8gqc9' Aug 8 11:18:18.791: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 8 11:18:18.791: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Aug 8 11:18:18.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8gqc9' Aug 8 11:18:18.908: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 8 11:18:18.908: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Aug 8 11:18:18.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8gqc9' Aug 8 11:18:19.138: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 8 11:18:19.139: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:18:19.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8gqc9" for this suite. 
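The manifests echoed above are hard to read because the capture collapsed their line breaks, and they use extensions/v1beta1 Deployments, which the v1.13 apiserver in this run still serves but current clusters no longer do. For reference, an illustrative apps/v1 rendering of the same frontend Deployment; apps/v1 additionally requires an explicit selector, while the remaining fields carry over from the logged manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:                         # mandatory in apps/v1
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 80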
Aug 8 11:18:57.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:18:57.351: INFO: namespace: e2e-tests-kubectl-8gqc9, resource: bindings, ignored listing per whitelist Aug 8 11:18:57.436: INFO: namespace e2e-tests-kubectl-8gqc9 deletion completed in 38.27032903s • [SLOW TEST:51.375 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:18:57.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 8 11:18:57.550: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f3a0dc90-d968-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-downward-api-zg9lm" to be "success or failure" Aug 8 11:18:57.584: INFO: Pod "downwardapi-volume-f3a0dc90-d968-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 34.203912ms Aug 8 11:18:59.588: INFO: Pod "downwardapi-volume-f3a0dc90-d968-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038051008s Aug 8 11:19:01.591: INFO: Pod "downwardapi-volume-f3a0dc90-d968-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041783522s STEP: Saw pod success Aug 8 11:19:01.591: INFO: Pod "downwardapi-volume-f3a0dc90-d968-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:19:01.594: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-f3a0dc90-d968-11ea-aaa1-0242ac11000c container client-container: STEP: delete the pod Aug 8 11:19:01.628: INFO: Waiting for pod downwardapi-volume-f3a0dc90-d968-11ea-aaa1-0242ac11000c to disappear Aug 8 11:19:01.638: INFO: Pod downwardapi-volume-f3a0dc90-d968-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:19:01.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-zg9lm" for this suite. 
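Here the plain (non-projected) downwardAPI volume exposes the container's CPU request as a file. A minimal sketch of a pod equivalent to the one above; the names, image and the 250m value are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpureq-demo
spec:
  containers:
  - name: client-container
    image: busybox                  # placeholder image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                   # the value exposed through the volume
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m               # report the request in millicores (250 here)

With the default divisor of 1 the downward API reports whole cores instead of millicores.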
Aug 8 11:19:07.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:19:07.688: INFO: namespace: e2e-tests-downward-api-zg9lm, resource: bindings, ignored listing per whitelist Aug 8 11:19:07.725: INFO: namespace e2e-tests-downward-api-zg9lm deletion completed in 6.084318514s • [SLOW TEST:10.289 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:19:07.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 8 11:19:07.886: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f9c852a4-d968-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-downward-api-snl2t" to be "success or failure" Aug 8 11:19:07.902: INFO: Pod "downwardapi-volume-f9c852a4-d968-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.157252ms Aug 8 11:19:09.906: INFO: Pod "downwardapi-volume-f9c852a4-d968-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020458888s Aug 8 11:19:11.910: INFO: Pod "downwardapi-volume-f9c852a4-d968-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024715485s STEP: Saw pod success Aug 8 11:19:11.910: INFO: Pod "downwardapi-volume-f9c852a4-d968-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:19:11.913: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-f9c852a4-d968-11ea-aaa1-0242ac11000c container client-container: STEP: delete the pod Aug 8 11:19:11.976: INFO: Waiting for pod downwardapi-volume-f9c852a4-d968-11ea-aaa1-0242ac11000c to disappear Aug 8 11:19:11.985: INFO: Pod downwardapi-volume-f9c852a4-d968-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:19:11.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-snl2t" for this suite. 
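The "default cpu limit" variant above differs only in that the container sets no CPU limit at all; in that case the downward API falls back to reporting the node's allocatable CPU, which is what the test asserts. A sketch of that shape, with placeholders as before:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-default-limit-demo
spec:
  containers:
  - name: client-container
    image: busybox                  # placeholder image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # no resources.limits.cpu on purpose
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu      # with no limit set, this resolves to node allocatable CPU
          divisor: 1m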
Aug 8 11:19:18.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:19:18.075: INFO: namespace: e2e-tests-downward-api-snl2t, resource: bindings, ignored listing per whitelist Aug 8 11:19:18.091: INFO: namespace e2e-tests-downward-api-snl2t deletion completed in 6.102625443s • [SLOW TEST:10.366 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:19:18.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-ffed153b-d968-11ea-aaa1-0242ac11000c STEP: Creating a pod to test consume secrets Aug 8 11:19:18.260: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ffef968d-d968-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-projected-jhz8l" to be "success or failure" Aug 8 11:19:18.333: INFO: Pod "pod-projected-secrets-ffef968d-d968-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 72.826957ms Aug 8 11:19:20.337: INFO: Pod "pod-projected-secrets-ffef968d-d968-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076941635s Aug 8 11:19:22.341: INFO: Pod "pod-projected-secrets-ffef968d-d968-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081554824s STEP: Saw pod success Aug 8 11:19:22.342: INFO: Pod "pod-projected-secrets-ffef968d-d968-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:19:22.370: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-ffef968d-d968-11ea-aaa1-0242ac11000c container secret-volume-test: STEP: delete the pod Aug 8 11:19:22.401: INFO: Waiting for pod pod-projected-secrets-ffef968d-d968-11ea-aaa1-0242ac11000c to disappear Aug 8 11:19:22.423: INFO: Pod pod-projected-secrets-ffef968d-d968-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:19:22.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-jhz8l" for this suite. 
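Consuming one Secret "in multiple volumes" simply means mounting it through two separate volumes (here projected ones) at different paths in the same pod. A minimal sketch with placeholder names; it assumes a Secret named projected-secret-demo that has a key called data:

apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-multi-demo
spec:
  containers:
  - name: secret-volume-test
    image: busybox                  # placeholder image
    command: ["sh", "-c", "cat /etc/secret-1/data && cat /etc/secret-2/data"]
    volumeMounts:
    - name: secret-vol-1
      mountPath: /etc/secret-1
    - name: secret-vol-2
      mountPath: /etc/secret-2
  volumes:
  - name: secret-vol-1
    projected:
      sources:
      - secret:
          name: projected-secret-demo   # assumed Secret name with a "data" key
  - name: secret-vol-2
    projected:
      sources:
      - secret:
          name: projected-secret-demo   # same Secret mounted a second time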
Aug 8 11:19:28.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:19:28.614: INFO: namespace: e2e-tests-projected-jhz8l, resource: bindings, ignored listing per whitelist Aug 8 11:19:28.717: INFO: namespace e2e-tests-projected-jhz8l deletion completed in 6.290787991s • [SLOW TEST:10.626 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:19:28.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-hrdmm I0808 11:19:28.848984 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-hrdmm, replica count: 1 I0808 11:19:29.899494 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0808 11:19:30.899745 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0808 11:19:31.900000 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0808 11:19:32.900277 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 8 11:19:33.041: INFO: Created: latency-svc-2ptvw Aug 8 11:19:33.056: INFO: Got endpoints: latency-svc-2ptvw [55.911363ms] Aug 8 11:19:33.082: INFO: Created: latency-svc-6jbnw Aug 8 11:19:33.095: INFO: Got endpoints: latency-svc-6jbnw [39.150604ms] Aug 8 11:19:33.112: INFO: Created: latency-svc-jjg8q Aug 8 11:19:33.185: INFO: Got endpoints: latency-svc-jjg8q [128.482493ms] Aug 8 11:19:33.229: INFO: Created: latency-svc-qprpg Aug 8 11:19:33.251: INFO: Got endpoints: latency-svc-qprpg [194.361976ms] Aug 8 11:19:33.323: INFO: Created: latency-svc-5xlgb Aug 8 11:19:33.326: INFO: Got endpoints: latency-svc-5xlgb [269.545828ms] Aug 8 11:19:33.353: INFO: Created: latency-svc-cbg6t Aug 8 11:19:33.395: INFO: Got endpoints: latency-svc-cbg6t [338.650544ms] Aug 8 11:19:33.466: INFO: Created: latency-svc-rh5rr Aug 8 11:19:33.469: INFO: Got endpoints: latency-svc-rh5rr [412.935794ms] Aug 8 11:19:33.490: INFO: Created: latency-svc-l8hmz Aug 8 11:19:33.503: INFO: Got endpoints: latency-svc-l8hmz [447.044742ms] Aug 8 11:19:33.520: INFO: Created: latency-svc-c8xmw Aug 8 11:19:33.534: INFO: Got endpoints: latency-svc-c8xmw [477.629898ms] Aug 8 11:19:33.556: INFO: Created: 
latency-svc-gmk7w Aug 8 11:19:33.604: INFO: Got endpoints: latency-svc-gmk7w [547.245671ms] Aug 8 11:19:33.623: INFO: Created: latency-svc-74xxg Aug 8 11:19:33.638: INFO: Got endpoints: latency-svc-74xxg [581.226995ms] Aug 8 11:19:33.658: INFO: Created: latency-svc-rcqxk Aug 8 11:19:33.701: INFO: Got endpoints: latency-svc-rcqxk [644.272522ms] Aug 8 11:19:33.767: INFO: Created: latency-svc-l8jbp Aug 8 11:19:33.781: INFO: Got endpoints: latency-svc-l8jbp [725.363485ms] Aug 8 11:19:33.802: INFO: Created: latency-svc-4gv5j Aug 8 11:19:33.818: INFO: Got endpoints: latency-svc-4gv5j [761.550514ms] Aug 8 11:19:33.854: INFO: Created: latency-svc-5zkkt Aug 8 11:19:33.897: INFO: Got endpoints: latency-svc-5zkkt [840.704755ms] Aug 8 11:19:33.910: INFO: Created: latency-svc-r2d5k Aug 8 11:19:33.926: INFO: Got endpoints: latency-svc-r2d5k [869.332774ms] Aug 8 11:19:33.946: INFO: Created: latency-svc-pctqf Aug 8 11:19:33.957: INFO: Got endpoints: latency-svc-pctqf [861.628835ms] Aug 8 11:19:33.988: INFO: Created: latency-svc-vgjtz Aug 8 11:19:34.059: INFO: Got endpoints: latency-svc-vgjtz [873.629632ms] Aug 8 11:19:34.085: INFO: Created: latency-svc-rtqqp Aug 8 11:19:34.101: INFO: Got endpoints: latency-svc-rtqqp [850.165909ms] Aug 8 11:19:34.127: INFO: Created: latency-svc-549mh Aug 8 11:19:34.143: INFO: Got endpoints: latency-svc-549mh [817.183861ms] Aug 8 11:19:34.221: INFO: Created: latency-svc-s9glm Aug 8 11:19:34.224: INFO: Got endpoints: latency-svc-s9glm [828.681168ms] Aug 8 11:19:34.258: INFO: Created: latency-svc-7bxn8 Aug 8 11:19:34.270: INFO: Got endpoints: latency-svc-7bxn8 [800.234526ms] Aug 8 11:19:34.288: INFO: Created: latency-svc-k9rpm Aug 8 11:19:34.300: INFO: Got endpoints: latency-svc-k9rpm [796.261685ms] Aug 8 11:19:34.318: INFO: Created: latency-svc-s8vq4 Aug 8 11:19:34.383: INFO: Got endpoints: latency-svc-s8vq4 [849.080168ms] Aug 8 11:19:34.409: INFO: Created: latency-svc-59cmw Aug 8 11:19:34.421: INFO: Got endpoints: latency-svc-59cmw [817.204803ms] Aug 8 11:19:34.437: INFO: Created: latency-svc-plc2t Aug 8 11:19:34.461: INFO: Got endpoints: latency-svc-plc2t [823.461342ms] Aug 8 11:19:34.532: INFO: Created: latency-svc-hsl68 Aug 8 11:19:34.546: INFO: Got endpoints: latency-svc-hsl68 [845.507273ms] Aug 8 11:19:34.576: INFO: Created: latency-svc-b22xm Aug 8 11:19:34.590: INFO: Got endpoints: latency-svc-b22xm [808.071238ms] Aug 8 11:19:34.630: INFO: Created: latency-svc-snwhp Aug 8 11:19:34.700: INFO: Got endpoints: latency-svc-snwhp [881.575358ms] Aug 8 11:19:34.707: INFO: Created: latency-svc-zvhvv Aug 8 11:19:34.710: INFO: Got endpoints: latency-svc-zvhvv [813.245643ms] Aug 8 11:19:34.744: INFO: Created: latency-svc-sf67q Aug 8 11:19:34.758: INFO: Got endpoints: latency-svc-sf67q [832.535111ms] Aug 8 11:19:34.780: INFO: Created: latency-svc-ssnn4 Aug 8 11:19:34.795: INFO: Got endpoints: latency-svc-ssnn4 [837.759546ms] Aug 8 11:19:34.862: INFO: Created: latency-svc-4jmq8 Aug 8 11:19:34.866: INFO: Got endpoints: latency-svc-4jmq8 [807.007442ms] Aug 8 11:19:34.895: INFO: Created: latency-svc-dq8lg Aug 8 11:19:34.909: INFO: Got endpoints: latency-svc-dq8lg [808.648946ms] Aug 8 11:19:34.930: INFO: Created: latency-svc-78mdj Aug 8 11:19:34.940: INFO: Got endpoints: latency-svc-78mdj [796.85034ms] Aug 8 11:19:34.960: INFO: Created: latency-svc-ldgr8 Aug 8 11:19:35.017: INFO: Got endpoints: latency-svc-ldgr8 [793.459671ms] Aug 8 11:19:35.032: INFO: Created: latency-svc-mm2s5 Aug 8 11:19:35.062: INFO: Got endpoints: latency-svc-mm2s5 [792.052178ms] Aug 8 11:19:35.097: INFO: Created: 
latency-svc-zbhdl Aug 8 11:19:35.108: INFO: Got endpoints: latency-svc-zbhdl [808.811682ms] Aug 8 11:19:35.162: INFO: Created: latency-svc-b9qlq Aug 8 11:19:35.165: INFO: Got endpoints: latency-svc-b9qlq [781.645073ms] Aug 8 11:19:35.206: INFO: Created: latency-svc-v8qsc Aug 8 11:19:35.236: INFO: Got endpoints: latency-svc-v8qsc [815.208051ms] Aug 8 11:19:35.338: INFO: Created: latency-svc-2mz88 Aug 8 11:19:35.367: INFO: Got endpoints: latency-svc-2mz88 [906.27665ms] Aug 8 11:19:35.386: INFO: Created: latency-svc-flvc7 Aug 8 11:19:35.416: INFO: Got endpoints: latency-svc-flvc7 [869.249077ms] Aug 8 11:19:35.434: INFO: Created: latency-svc-8fw2w Aug 8 11:19:35.478: INFO: Got endpoints: latency-svc-8fw2w [888.446239ms] Aug 8 11:19:35.518: INFO: Created: latency-svc-cqlsv Aug 8 11:19:35.530: INFO: Got endpoints: latency-svc-cqlsv [830.297475ms] Aug 8 11:19:35.572: INFO: Created: latency-svc-crnh5 Aug 8 11:19:35.634: INFO: Got endpoints: latency-svc-crnh5 [923.56819ms] Aug 8 11:19:35.662: INFO: Created: latency-svc-p8mlh Aug 8 11:19:35.698: INFO: Got endpoints: latency-svc-p8mlh [939.077232ms] Aug 8 11:19:35.784: INFO: Created: latency-svc-kwjxq Aug 8 11:19:35.786: INFO: Created: latency-svc-g895l Aug 8 11:19:35.801: INFO: Got endpoints: latency-svc-g895l [935.364198ms] Aug 8 11:19:35.801: INFO: Got endpoints: latency-svc-kwjxq [1.006332111s] Aug 8 11:19:35.836: INFO: Created: latency-svc-j92cq Aug 8 11:19:35.861: INFO: Got endpoints: latency-svc-j92cq [951.875105ms] Aug 8 11:19:35.885: INFO: Created: latency-svc-gd45s Aug 8 11:19:35.934: INFO: Got endpoints: latency-svc-gd45s [993.875047ms] Aug 8 11:19:35.962: INFO: Created: latency-svc-97wvc Aug 8 11:19:35.976: INFO: Got endpoints: latency-svc-97wvc [958.457158ms] Aug 8 11:19:35.999: INFO: Created: latency-svc-vvnm8 Aug 8 11:19:36.012: INFO: Got endpoints: latency-svc-vvnm8 [950.425309ms] Aug 8 11:19:36.083: INFO: Created: latency-svc-nmzl9 Aug 8 11:19:36.086: INFO: Got endpoints: latency-svc-nmzl9 [977.396531ms] Aug 8 11:19:36.135: INFO: Created: latency-svc-b5q75 Aug 8 11:19:36.151: INFO: Got endpoints: latency-svc-b5q75 [986.306431ms] Aug 8 11:19:36.251: INFO: Created: latency-svc-gkbfr Aug 8 11:19:36.255: INFO: Got endpoints: latency-svc-gkbfr [1.018444501s] Aug 8 11:19:36.301: INFO: Created: latency-svc-z4vzq Aug 8 11:19:36.313: INFO: Got endpoints: latency-svc-z4vzq [945.853318ms] Aug 8 11:19:36.341: INFO: Created: latency-svc-m5fdm Aug 8 11:19:36.388: INFO: Got endpoints: latency-svc-m5fdm [972.709921ms] Aug 8 11:19:36.406: INFO: Created: latency-svc-5rbsg Aug 8 11:19:36.435: INFO: Got endpoints: latency-svc-5rbsg [956.47691ms] Aug 8 11:19:36.465: INFO: Created: latency-svc-nvdgw Aug 8 11:19:36.476: INFO: Got endpoints: latency-svc-nvdgw [946.285988ms] Aug 8 11:19:36.540: INFO: Created: latency-svc-k9rnj Aug 8 11:19:36.542: INFO: Got endpoints: latency-svc-k9rnj [907.63842ms] Aug 8 11:19:36.568: INFO: Created: latency-svc-g74sh Aug 8 11:19:36.579: INFO: Got endpoints: latency-svc-g74sh [881.402821ms] Aug 8 11:19:36.604: INFO: Created: latency-svc-ltnrg Aug 8 11:19:36.621: INFO: Got endpoints: latency-svc-ltnrg [820.191267ms] Aug 8 11:19:36.694: INFO: Created: latency-svc-nk669 Aug 8 11:19:36.696: INFO: Got endpoints: latency-svc-nk669 [895.28183ms] Aug 8 11:19:36.729: INFO: Created: latency-svc-dkzzv Aug 8 11:19:36.741: INFO: Got endpoints: latency-svc-dkzzv [879.918435ms] Aug 8 11:19:36.760: INFO: Created: latency-svc-dvqtp Aug 8 11:19:36.772: INFO: Got endpoints: latency-svc-dvqtp [837.604484ms] Aug 8 11:19:36.789: INFO: Created: 
latency-svc-886nq Aug 8 11:19:36.837: INFO: Got endpoints: latency-svc-886nq [861.593278ms] Aug 8 11:19:36.844: INFO: Created: latency-svc-6fv4t Aug 8 11:19:36.863: INFO: Got endpoints: latency-svc-6fv4t [850.118122ms] Aug 8 11:19:36.886: INFO: Created: latency-svc-x6cl2 Aug 8 11:19:36.899: INFO: Got endpoints: latency-svc-x6cl2 [813.216186ms] Aug 8 11:19:37.000: INFO: Created: latency-svc-jpshx Aug 8 11:19:37.004: INFO: Got endpoints: latency-svc-jpshx [852.402407ms] Aug 8 11:19:37.059: INFO: Created: latency-svc-f95b8 Aug 8 11:19:37.067: INFO: Got endpoints: latency-svc-f95b8 [812.540876ms] Aug 8 11:19:37.096: INFO: Created: latency-svc-plwck Aug 8 11:19:37.143: INFO: Got endpoints: latency-svc-plwck [829.967997ms] Aug 8 11:19:37.161: INFO: Created: latency-svc-v74lm Aug 8 11:19:37.176: INFO: Got endpoints: latency-svc-v74lm [787.642854ms] Aug 8 11:19:37.198: INFO: Created: latency-svc-7v87n Aug 8 11:19:37.213: INFO: Got endpoints: latency-svc-7v87n [778.219871ms] Aug 8 11:19:37.343: INFO: Created: latency-svc-k7nr7 Aug 8 11:19:37.345: INFO: Got endpoints: latency-svc-k7nr7 [868.212124ms] Aug 8 11:19:37.395: INFO: Created: latency-svc-9b8gw Aug 8 11:19:37.411: INFO: Got endpoints: latency-svc-9b8gw [868.864612ms] Aug 8 11:19:37.431: INFO: Created: latency-svc-krkrq Aug 8 11:19:37.484: INFO: Got endpoints: latency-svc-krkrq [904.77465ms] Aug 8 11:19:37.497: INFO: Created: latency-svc-2dz6w Aug 8 11:19:37.527: INFO: Got endpoints: latency-svc-2dz6w [905.601918ms] Aug 8 11:19:37.558: INFO: Created: latency-svc-qwr8p Aug 8 11:19:37.575: INFO: Got endpoints: latency-svc-qwr8p [878.29118ms] Aug 8 11:19:37.634: INFO: Created: latency-svc-2wzwk Aug 8 11:19:37.640: INFO: Got endpoints: latency-svc-2wzwk [898.159218ms] Aug 8 11:19:37.714: INFO: Created: latency-svc-2rw8v Aug 8 11:19:37.724: INFO: Got endpoints: latency-svc-2rw8v [952.015685ms] Aug 8 11:19:37.796: INFO: Created: latency-svc-ptw2n Aug 8 11:19:37.800: INFO: Got endpoints: latency-svc-ptw2n [962.411474ms] Aug 8 11:19:37.851: INFO: Created: latency-svc-4xwnm Aug 8 11:19:37.868: INFO: Got endpoints: latency-svc-4xwnm [1.005814776s] Aug 8 11:19:37.887: INFO: Created: latency-svc-zqpf9 Aug 8 11:19:37.933: INFO: Got endpoints: latency-svc-zqpf9 [1.034130346s] Aug 8 11:19:37.947: INFO: Created: latency-svc-lgtjf Aug 8 11:19:37.959: INFO: Got endpoints: latency-svc-lgtjf [955.080987ms] Aug 8 11:19:37.977: INFO: Created: latency-svc-4wx8q Aug 8 11:19:37.989: INFO: Got endpoints: latency-svc-4wx8q [922.162062ms] Aug 8 11:19:38.007: INFO: Created: latency-svc-ppw7d Aug 8 11:19:38.022: INFO: Got endpoints: latency-svc-ppw7d [878.445058ms] Aug 8 11:19:38.084: INFO: Created: latency-svc-npr6b Aug 8 11:19:38.087: INFO: Got endpoints: latency-svc-npr6b [910.85239ms] Aug 8 11:19:38.139: INFO: Created: latency-svc-pnrtf Aug 8 11:19:38.153: INFO: Got endpoints: latency-svc-pnrtf [939.46039ms] Aug 8 11:19:38.239: INFO: Created: latency-svc-trfmv Aug 8 11:19:38.242: INFO: Got endpoints: latency-svc-trfmv [896.985319ms] Aug 8 11:19:38.307: INFO: Created: latency-svc-kdrkm Aug 8 11:19:38.336: INFO: Got endpoints: latency-svc-kdrkm [924.965244ms] Aug 8 11:19:38.394: INFO: Created: latency-svc-hjgbd Aug 8 11:19:38.411: INFO: Got endpoints: latency-svc-hjgbd [927.389712ms] Aug 8 11:19:38.445: INFO: Created: latency-svc-vgv82 Aug 8 11:19:38.460: INFO: Got endpoints: latency-svc-vgv82 [932.436113ms] Aug 8 11:19:38.481: INFO: Created: latency-svc-4fpx7 Aug 8 11:19:38.532: INFO: Got endpoints: latency-svc-4fpx7 [956.757246ms] Aug 8 11:19:38.535: INFO: Created: 
latency-svc-bkbr9 Aug 8 11:19:38.551: INFO: Got endpoints: latency-svc-bkbr9 [910.865619ms] Aug 8 11:19:38.577: INFO: Created: latency-svc-4qd7l Aug 8 11:19:38.586: INFO: Got endpoints: latency-svc-4qd7l [862.279518ms] Aug 8 11:19:38.609: INFO: Created: latency-svc-w7f4g Aug 8 11:19:38.618: INFO: Got endpoints: latency-svc-w7f4g [818.09814ms] Aug 8 11:19:38.682: INFO: Created: latency-svc-8wb7f Aug 8 11:19:38.702: INFO: Got endpoints: latency-svc-8wb7f [834.037456ms] Aug 8 11:19:38.758: INFO: Created: latency-svc-29tvn Aug 8 11:19:38.779: INFO: Got endpoints: latency-svc-29tvn [846.122152ms] Aug 8 11:19:38.833: INFO: Created: latency-svc-c6lxk Aug 8 11:19:38.839: INFO: Got endpoints: latency-svc-c6lxk [880.320666ms] Aug 8 11:19:38.859: INFO: Created: latency-svc-rwxc9 Aug 8 11:19:38.870: INFO: Got endpoints: latency-svc-rwxc9 [880.744435ms] Aug 8 11:19:38.901: INFO: Created: latency-svc-w5qjw Aug 8 11:19:38.925: INFO: Got endpoints: latency-svc-w5qjw [902.586574ms] Aug 8 11:19:38.964: INFO: Created: latency-svc-fszwv Aug 8 11:19:38.978: INFO: Got endpoints: latency-svc-fszwv [891.393409ms] Aug 8 11:19:38.996: INFO: Created: latency-svc-nvppb Aug 8 11:19:39.009: INFO: Got endpoints: latency-svc-nvppb [856.076445ms] Aug 8 11:19:39.045: INFO: Created: latency-svc-58d7g Aug 8 11:19:39.101: INFO: Got endpoints: latency-svc-58d7g [859.220697ms] Aug 8 11:19:39.118: INFO: Created: latency-svc-pfqw8 Aug 8 11:19:39.129: INFO: Got endpoints: latency-svc-pfqw8 [793.588932ms] Aug 8 11:19:39.152: INFO: Created: latency-svc-mzwml Aug 8 11:19:39.166: INFO: Got endpoints: latency-svc-mzwml [754.551813ms] Aug 8 11:19:39.188: INFO: Created: latency-svc-kjc59 Aug 8 11:19:39.251: INFO: Got endpoints: latency-svc-kjc59 [790.983579ms] Aug 8 11:19:39.279: INFO: Created: latency-svc-n2whm Aug 8 11:19:39.310: INFO: Got endpoints: latency-svc-n2whm [778.363792ms] Aug 8 11:19:39.344: INFO: Created: latency-svc-4b4gd Aug 8 11:19:39.400: INFO: Got endpoints: latency-svc-4b4gd [849.537265ms] Aug 8 11:19:39.422: INFO: Created: latency-svc-jq79f Aug 8 11:19:39.437: INFO: Got endpoints: latency-svc-jq79f [850.580672ms] Aug 8 11:19:39.465: INFO: Created: latency-svc-b5zj5 Aug 8 11:19:39.479: INFO: Got endpoints: latency-svc-b5zj5 [861.023ms] Aug 8 11:19:39.556: INFO: Created: latency-svc-kgn9l Aug 8 11:19:39.559: INFO: Got endpoints: latency-svc-kgn9l [856.816513ms] Aug 8 11:19:39.591: INFO: Created: latency-svc-mqq4d Aug 8 11:19:39.599: INFO: Got endpoints: latency-svc-mqq4d [819.68237ms] Aug 8 11:19:39.639: INFO: Created: latency-svc-9ql9x Aug 8 11:19:39.654: INFO: Got endpoints: latency-svc-9ql9x [815.035203ms] Aug 8 11:19:39.719: INFO: Created: latency-svc-g8gtp Aug 8 11:19:39.752: INFO: Got endpoints: latency-svc-g8gtp [881.690726ms] Aug 8 11:19:39.801: INFO: Created: latency-svc-98h6m Aug 8 11:19:39.880: INFO: Got endpoints: latency-svc-98h6m [955.413272ms] Aug 8 11:19:39.904: INFO: Created: latency-svc-kk27w Aug 8 11:19:39.919: INFO: Got endpoints: latency-svc-kk27w [940.776642ms] Aug 8 11:19:39.938: INFO: Created: latency-svc-qsf7p Aug 8 11:19:39.949: INFO: Got endpoints: latency-svc-qsf7p [940.49304ms] Aug 8 11:19:39.968: INFO: Created: latency-svc-rvznh Aug 8 11:19:40.053: INFO: Got endpoints: latency-svc-rvznh [951.792351ms] Aug 8 11:19:40.059: INFO: Created: latency-svc-tb77s Aug 8 11:19:40.070: INFO: Got endpoints: latency-svc-tb77s [940.26346ms] Aug 8 11:19:40.094: INFO: Created: latency-svc-gj4kz Aug 8 11:19:40.106: INFO: Got endpoints: latency-svc-gj4kz [939.882492ms] Aug 8 11:19:40.124: INFO: Created: 
latency-svc-d89j5 Aug 8 11:19:40.136: INFO: Got endpoints: latency-svc-d89j5 [885.737841ms] Aug 8 11:19:40.203: INFO: Created: latency-svc-8gltb Aug 8 11:19:40.206: INFO: Got endpoints: latency-svc-8gltb [895.861837ms] Aug 8 11:19:40.233: INFO: Created: latency-svc-68n42 Aug 8 11:19:40.245: INFO: Got endpoints: latency-svc-68n42 [845.03264ms] Aug 8 11:19:40.287: INFO: Created: latency-svc-4rc5t Aug 8 11:19:40.352: INFO: Got endpoints: latency-svc-4rc5t [915.624265ms] Aug 8 11:19:40.371: INFO: Created: latency-svc-w4v4g Aug 8 11:19:40.396: INFO: Got endpoints: latency-svc-w4v4g [916.57959ms] Aug 8 11:19:40.436: INFO: Created: latency-svc-sck7n Aug 8 11:19:40.449: INFO: Got endpoints: latency-svc-sck7n [889.95077ms] Aug 8 11:19:40.502: INFO: Created: latency-svc-tjqfh Aug 8 11:19:40.526: INFO: Got endpoints: latency-svc-tjqfh [926.394805ms] Aug 8 11:19:40.556: INFO: Created: latency-svc-xzqps Aug 8 11:19:40.580: INFO: Got endpoints: latency-svc-xzqps [925.581619ms] Aug 8 11:19:40.647: INFO: Created: latency-svc-nvpsx Aug 8 11:19:40.653: INFO: Got endpoints: latency-svc-nvpsx [900.59925ms] Aug 8 11:19:40.700: INFO: Created: latency-svc-2dkrd Aug 8 11:19:40.727: INFO: Got endpoints: latency-svc-2dkrd [847.107483ms] Aug 8 11:19:40.796: INFO: Created: latency-svc-k6w8c Aug 8 11:19:40.800: INFO: Got endpoints: latency-svc-k6w8c [880.322337ms] Aug 8 11:19:40.832: INFO: Created: latency-svc-xk25f Aug 8 11:19:40.841: INFO: Got endpoints: latency-svc-xk25f [891.887989ms] Aug 8 11:19:40.868: INFO: Created: latency-svc-ttr4d Aug 8 11:19:40.878: INFO: Got endpoints: latency-svc-ttr4d [824.748753ms] Aug 8 11:19:40.963: INFO: Created: latency-svc-l94v7 Aug 8 11:19:40.967: INFO: Got endpoints: latency-svc-l94v7 [897.125008ms] Aug 8 11:19:41.000: INFO: Created: latency-svc-p8fmd Aug 8 11:19:41.016: INFO: Got endpoints: latency-svc-p8fmd [910.304829ms] Aug 8 11:19:41.107: INFO: Created: latency-svc-vgws6 Aug 8 11:19:41.111: INFO: Got endpoints: latency-svc-vgws6 [974.748799ms] Aug 8 11:19:41.162: INFO: Created: latency-svc-btlks Aug 8 11:19:41.173: INFO: Got endpoints: latency-svc-btlks [966.733407ms] Aug 8 11:19:41.269: INFO: Created: latency-svc-fhgsv Aug 8 11:19:41.275: INFO: Got endpoints: latency-svc-fhgsv [1.02940687s] Aug 8 11:19:41.299: INFO: Created: latency-svc-jzb5d Aug 8 11:19:41.335: INFO: Got endpoints: latency-svc-jzb5d [982.818925ms] Aug 8 11:19:41.484: INFO: Created: latency-svc-j6fv9 Aug 8 11:19:41.487: INFO: Got endpoints: latency-svc-j6fv9 [1.091478982s] Aug 8 11:19:41.533: INFO: Created: latency-svc-llsd6 Aug 8 11:19:41.546: INFO: Got endpoints: latency-svc-llsd6 [1.096366499s] Aug 8 11:19:41.564: INFO: Created: latency-svc-qgplx Aug 8 11:19:41.576: INFO: Got endpoints: latency-svc-qgplx [1.050166686s] Aug 8 11:19:41.652: INFO: Created: latency-svc-swkvd Aug 8 11:19:41.660: INFO: Got endpoints: latency-svc-swkvd [1.079960369s] Aug 8 11:19:41.707: INFO: Created: latency-svc-p5c2l Aug 8 11:19:41.721: INFO: Got endpoints: latency-svc-p5c2l [1.067853727s] Aug 8 11:19:41.743: INFO: Created: latency-svc-xqjq8 Aug 8 11:19:41.789: INFO: Got endpoints: latency-svc-xqjq8 [1.062009106s] Aug 8 11:19:41.803: INFO: Created: latency-svc-vwjbf Aug 8 11:19:41.823: INFO: Got endpoints: latency-svc-vwjbf [1.023426204s] Aug 8 11:19:41.858: INFO: Created: latency-svc-8vdrp Aug 8 11:19:41.933: INFO: Got endpoints: latency-svc-8vdrp [1.091916049s] Aug 8 11:19:41.947: INFO: Created: latency-svc-k9xsr Aug 8 11:19:41.961: INFO: Got endpoints: latency-svc-k9xsr [1.083693s] Aug 8 11:19:41.983: INFO: Created: 
latency-svc-n6t57 Aug 8 11:19:41.998: INFO: Got endpoints: latency-svc-n6t57 [1.030700777s] Aug 8 11:19:42.018: INFO: Created: latency-svc-68wlb Aug 8 11:19:42.065: INFO: Got endpoints: latency-svc-68wlb [1.048773699s] Aug 8 11:19:42.091: INFO: Created: latency-svc-ffx8c Aug 8 11:19:42.100: INFO: Got endpoints: latency-svc-ffx8c [988.935199ms] Aug 8 11:19:42.121: INFO: Created: latency-svc-j94lt Aug 8 11:19:42.131: INFO: Got endpoints: latency-svc-j94lt [958.629229ms] Aug 8 11:19:42.152: INFO: Created: latency-svc-7f65n Aug 8 11:19:42.161: INFO: Got endpoints: latency-svc-7f65n [886.306777ms] Aug 8 11:19:42.223: INFO: Created: latency-svc-8dv6z Aug 8 11:19:42.227: INFO: Got endpoints: latency-svc-8dv6z [892.038375ms] Aug 8 11:19:42.259: INFO: Created: latency-svc-bt7ls Aug 8 11:19:42.276: INFO: Got endpoints: latency-svc-bt7ls [788.416957ms] Aug 8 11:19:42.307: INFO: Created: latency-svc-kxpxb Aug 8 11:19:42.382: INFO: Got endpoints: latency-svc-kxpxb [836.444078ms] Aug 8 11:19:42.385: INFO: Created: latency-svc-692g2 Aug 8 11:19:42.414: INFO: Got endpoints: latency-svc-692g2 [838.554796ms] Aug 8 11:19:42.457: INFO: Created: latency-svc-ll8hc Aug 8 11:19:42.532: INFO: Got endpoints: latency-svc-ll8hc [871.751714ms] Aug 8 11:19:42.553: INFO: Created: latency-svc-dhjkk Aug 8 11:19:42.565: INFO: Got endpoints: latency-svc-dhjkk [844.049432ms] Aug 8 11:19:42.591: INFO: Created: latency-svc-lfcv6 Aug 8 11:19:42.613: INFO: Got endpoints: latency-svc-lfcv6 [823.914013ms] Aug 8 11:19:42.682: INFO: Created: latency-svc-8fl2k Aug 8 11:19:42.727: INFO: Got endpoints: latency-svc-8fl2k [903.683169ms] Aug 8 11:19:42.729: INFO: Created: latency-svc-sc4mw Aug 8 11:19:42.752: INFO: Got endpoints: latency-svc-sc4mw [818.655141ms] Aug 8 11:19:42.826: INFO: Created: latency-svc-bk6zs Aug 8 11:19:42.836: INFO: Got endpoints: latency-svc-bk6zs [874.64915ms] Aug 8 11:19:42.865: INFO: Created: latency-svc-h6f2n Aug 8 11:19:42.903: INFO: Got endpoints: latency-svc-h6f2n [905.17923ms] Aug 8 11:19:42.982: INFO: Created: latency-svc-578km Aug 8 11:19:42.984: INFO: Got endpoints: latency-svc-578km [919.185892ms] Aug 8 11:19:43.035: INFO: Created: latency-svc-cvf7d Aug 8 11:19:43.046: INFO: Got endpoints: latency-svc-cvf7d [946.213746ms] Aug 8 11:19:43.069: INFO: Created: latency-svc-s5x48 Aug 8 11:19:43.111: INFO: Got endpoints: latency-svc-s5x48 [979.98588ms] Aug 8 11:19:43.165: INFO: Created: latency-svc-zjhl7 Aug 8 11:19:43.173: INFO: Got endpoints: latency-svc-zjhl7 [1.011798801s] Aug 8 11:19:43.200: INFO: Created: latency-svc-2p2m4 Aug 8 11:19:43.262: INFO: Got endpoints: latency-svc-2p2m4 [1.035032621s] Aug 8 11:19:43.279: INFO: Created: latency-svc-2mhnr Aug 8 11:19:43.309: INFO: Got endpoints: latency-svc-2mhnr [1.032864026s] Aug 8 11:19:43.394: INFO: Created: latency-svc-9lbwd Aug 8 11:19:43.408: INFO: Got endpoints: latency-svc-9lbwd [1.025792483s] Aug 8 11:19:43.429: INFO: Created: latency-svc-dqknx Aug 8 11:19:43.439: INFO: Got endpoints: latency-svc-dqknx [1.024496042s] Aug 8 11:19:43.482: INFO: Created: latency-svc-2b7lh Aug 8 11:19:43.598: INFO: Got endpoints: latency-svc-2b7lh [1.065847666s] Aug 8 11:19:43.602: INFO: Created: latency-svc-mjgd2 Aug 8 11:19:43.607: INFO: Got endpoints: latency-svc-mjgd2 [1.04189925s] Aug 8 11:19:43.663: INFO: Created: latency-svc-f2bjv Aug 8 11:19:43.673: INFO: Got endpoints: latency-svc-f2bjv [1.059668046s] Aug 8 11:19:43.761: INFO: Created: latency-svc-6ctcc Aug 8 11:19:43.764: INFO: Got endpoints: latency-svc-6ctcc [1.037001188s] Aug 8 11:19:43.806: INFO: Created: 
latency-svc-r5rbs Aug 8 11:19:43.831: INFO: Got endpoints: latency-svc-r5rbs [1.07887279s] Aug 8 11:19:43.905: INFO: Created: latency-svc-2tppz Aug 8 11:19:43.909: INFO: Got endpoints: latency-svc-2tppz [1.073194057s] Aug 8 11:19:43.939: INFO: Created: latency-svc-x6p5v Aug 8 11:19:43.957: INFO: Got endpoints: latency-svc-x6p5v [1.054467594s] Aug 8 11:19:43.975: INFO: Created: latency-svc-zxrc7 Aug 8 11:19:43.988: INFO: Got endpoints: latency-svc-zxrc7 [1.003154436s] Aug 8 11:19:44.059: INFO: Created: latency-svc-djmj6 Aug 8 11:19:44.063: INFO: Got endpoints: latency-svc-djmj6 [1.016549864s] Aug 8 11:19:44.094: INFO: Created: latency-svc-hdvr9 Aug 8 11:19:44.102: INFO: Got endpoints: latency-svc-hdvr9 [990.484559ms] Aug 8 11:19:44.124: INFO: Created: latency-svc-25rvt Aug 8 11:19:44.132: INFO: Got endpoints: latency-svc-25rvt [959.423446ms] Aug 8 11:19:44.155: INFO: Created: latency-svc-gpkq5 Aug 8 11:19:44.197: INFO: Got endpoints: latency-svc-gpkq5 [934.310613ms] Aug 8 11:19:44.208: INFO: Created: latency-svc-mn7dw Aug 8 11:19:44.223: INFO: Got endpoints: latency-svc-mn7dw [914.570945ms] Aug 8 11:19:44.263: INFO: Created: latency-svc-tm6qn Aug 8 11:19:44.292: INFO: Got endpoints: latency-svc-tm6qn [883.485474ms] Aug 8 11:19:44.353: INFO: Created: latency-svc-89s6f Aug 8 11:19:44.370: INFO: Got endpoints: latency-svc-89s6f [930.931942ms] Aug 8 11:19:44.430: INFO: Created: latency-svc-xn7xb Aug 8 11:19:44.496: INFO: Got endpoints: latency-svc-xn7xb [898.621778ms] Aug 8 11:19:44.514: INFO: Created: latency-svc-d856m Aug 8 11:19:44.530: INFO: Got endpoints: latency-svc-d856m [923.210305ms] Aug 8 11:19:44.550: INFO: Created: latency-svc-t5vlp Aug 8 11:19:44.560: INFO: Got endpoints: latency-svc-t5vlp [887.413647ms] Aug 8 11:19:44.580: INFO: Created: latency-svc-q78qc Aug 8 11:19:44.591: INFO: Got endpoints: latency-svc-q78qc [826.525697ms] Aug 8 11:19:44.652: INFO: Created: latency-svc-kz8rw Aug 8 11:19:44.655: INFO: Got endpoints: latency-svc-kz8rw [823.695819ms] Aug 8 11:19:44.682: INFO: Created: latency-svc-fdmrb Aug 8 11:19:44.694: INFO: Got endpoints: latency-svc-fdmrb [784.099449ms] Aug 8 11:19:44.712: INFO: Created: latency-svc-cbh6k Aug 8 11:19:44.730: INFO: Got endpoints: latency-svc-cbh6k [772.299247ms] Aug 8 11:19:44.820: INFO: Created: latency-svc-jnbtn Aug 8 11:19:44.823: INFO: Got endpoints: latency-svc-jnbtn [834.952378ms] Aug 8 11:19:44.850: INFO: Created: latency-svc-89zfl Aug 8 11:19:44.863: INFO: Got endpoints: latency-svc-89zfl [800.103893ms] Aug 8 11:19:44.910: INFO: Created: latency-svc-w7sks Aug 8 11:19:45.024: INFO: Got endpoints: latency-svc-w7sks [921.700328ms] Aug 8 11:19:45.027: INFO: Created: latency-svc-8852h Aug 8 11:19:45.031: INFO: Got endpoints: latency-svc-8852h [898.635727ms] Aug 8 11:19:45.054: INFO: Created: latency-svc-kmtxg Aug 8 11:19:45.067: INFO: Got endpoints: latency-svc-kmtxg [870.296608ms] Aug 8 11:19:45.084: INFO: Created: latency-svc-9mnpt Aug 8 11:19:45.107: INFO: Got endpoints: latency-svc-9mnpt [883.973585ms] Aug 8 11:19:45.107: INFO: Latencies: [39.150604ms 128.482493ms 194.361976ms 269.545828ms 338.650544ms 412.935794ms 447.044742ms 477.629898ms 547.245671ms 581.226995ms 644.272522ms 725.363485ms 754.551813ms 761.550514ms 772.299247ms 778.219871ms 778.363792ms 781.645073ms 784.099449ms 787.642854ms 788.416957ms 790.983579ms 792.052178ms 793.459671ms 793.588932ms 796.261685ms 796.85034ms 800.103893ms 800.234526ms 807.007442ms 808.071238ms 808.648946ms 808.811682ms 812.540876ms 813.216186ms 813.245643ms 815.035203ms 815.208051ms 817.183861ms 
817.204803ms 818.09814ms 818.655141ms 819.68237ms 820.191267ms 823.461342ms 823.695819ms 823.914013ms 824.748753ms 826.525697ms 828.681168ms 829.967997ms 830.297475ms 832.535111ms 834.037456ms 834.952378ms 836.444078ms 837.604484ms 837.759546ms 838.554796ms 840.704755ms 844.049432ms 845.03264ms 845.507273ms 846.122152ms 847.107483ms 849.080168ms 849.537265ms 850.118122ms 850.165909ms 850.580672ms 852.402407ms 856.076445ms 856.816513ms 859.220697ms 861.023ms 861.593278ms 861.628835ms 862.279518ms 868.212124ms 868.864612ms 869.249077ms 869.332774ms 870.296608ms 871.751714ms 873.629632ms 874.64915ms 878.29118ms 878.445058ms 879.918435ms 880.320666ms 880.322337ms 880.744435ms 881.402821ms 881.575358ms 881.690726ms 883.485474ms 883.973585ms 885.737841ms 886.306777ms 887.413647ms 888.446239ms 889.95077ms 891.393409ms 891.887989ms 892.038375ms 895.28183ms 895.861837ms 896.985319ms 897.125008ms 898.159218ms 898.621778ms 898.635727ms 900.59925ms 902.586574ms 903.683169ms 904.77465ms 905.17923ms 905.601918ms 906.27665ms 907.63842ms 910.304829ms 910.85239ms 910.865619ms 914.570945ms 915.624265ms 916.57959ms 919.185892ms 921.700328ms 922.162062ms 923.210305ms 923.56819ms 924.965244ms 925.581619ms 926.394805ms 927.389712ms 930.931942ms 932.436113ms 934.310613ms 935.364198ms 939.077232ms 939.46039ms 939.882492ms 940.26346ms 940.49304ms 940.776642ms 945.853318ms 946.213746ms 946.285988ms 950.425309ms 951.792351ms 951.875105ms 952.015685ms 955.080987ms 955.413272ms 956.47691ms 956.757246ms 958.457158ms 958.629229ms 959.423446ms 962.411474ms 966.733407ms 972.709921ms 974.748799ms 977.396531ms 979.98588ms 982.818925ms 986.306431ms 988.935199ms 990.484559ms 993.875047ms 1.003154436s 1.005814776s 1.006332111s 1.011798801s 1.016549864s 1.018444501s 1.023426204s 1.024496042s 1.025792483s 1.02940687s 1.030700777s 1.032864026s 1.034130346s 1.035032621s 1.037001188s 1.04189925s 1.048773699s 1.050166686s 1.054467594s 1.059668046s 1.062009106s 1.065847666s 1.067853727s 1.073194057s 1.07887279s 1.079960369s 1.083693s 1.091478982s 1.091916049s 1.096366499s] Aug 8 11:19:45.107: INFO: 50 %ile: 888.446239ms Aug 8 11:19:45.107: INFO: 90 %ile: 1.030700777s Aug 8 11:19:45.107: INFO: 99 %ile: 1.091916049s Aug 8 11:19:45.107: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:19:45.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-hrdmm" for this suite. 
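
The latency summary above comes from sorting the 200 recorded endpoint-creation latencies and reading off the 50th/90th/99th percentile entries. A minimal stand-alone Go sketch of that kind of percentile lookup, using a few rounded values from the list above as sample data (a simple nearest-rank lookup; the framework's exact indexing may differ slightly):

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the element at roughly the p-th percentile of a sorted
// sample slice; a nearest-rank style lookup, not the framework's own code.
func percentile(sorted []time.Duration, p int) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := len(sorted) * p / 100
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	samples := []time.Duration{ // rounded from the "Latencies:" list above
		39 * time.Millisecond,
		888 * time.Millisecond,
		1031 * time.Millisecond,
		1092 * time.Millisecond,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	fmt.Println("50 %ile:", percentile(samples, 50))
	fmt.Println("90 %ile:", percentile(samples, 90))
	fmt.Println("99 %ile:", percentile(samples, 99))
}
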
Aug 8 11:20:09.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:20:09.218: INFO: namespace: e2e-tests-svc-latency-hrdmm, resource: bindings, ignored listing per whitelist Aug 8 11:20:09.241: INFO: namespace e2e-tests-svc-latency-hrdmm deletion completed in 24.11897622s • [SLOW TEST:40.524 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:20:09.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-zfnl9 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 8 11:20:09.335: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 8 11:20:37.458: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.73:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-zfnl9 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 8 11:20:37.458: INFO: >>> kubeConfig: /root/.kube/config I0808 11:20:37.495809 6 log.go:172] (0xc00093b970) (0xc001dac460) Create stream I0808 11:20:37.495840 6 log.go:172] (0xc00093b970) (0xc001dac460) Stream added, broadcasting: 1 I0808 11:20:37.497704 6 log.go:172] (0xc00093b970) Reply frame received for 1 I0808 11:20:37.497754 6 log.go:172] (0xc00093b970) (0xc000e815e0) Create stream I0808 11:20:37.497767 6 log.go:172] (0xc00093b970) (0xc000e815e0) Stream added, broadcasting: 3 I0808 11:20:37.498705 6 log.go:172] (0xc00093b970) Reply frame received for 3 I0808 11:20:37.498755 6 log.go:172] (0xc00093b970) (0xc001b37040) Create stream I0808 11:20:37.498775 6 log.go:172] (0xc00093b970) (0xc001b37040) Stream added, broadcasting: 5 I0808 11:20:37.499521 6 log.go:172] (0xc00093b970) Reply frame received for 5 I0808 11:20:37.590267 6 log.go:172] (0xc00093b970) Data frame received for 3 I0808 11:20:37.590322 6 log.go:172] (0xc000e815e0) (3) Data frame handling I0808 11:20:37.590374 6 log.go:172] (0xc000e815e0) (3) Data frame sent I0808 11:20:37.590413 6 log.go:172] (0xc00093b970) Data frame received for 3 I0808 11:20:37.590434 6 log.go:172] (0xc000e815e0) (3) Data frame handling I0808 11:20:37.590458 6 log.go:172] (0xc00093b970) Data frame received for 5 I0808 11:20:37.590493 6 log.go:172] (0xc001b37040) (5) Data frame handling I0808 11:20:37.592229 6 log.go:172] (0xc00093b970) Data frame received for 1 I0808 11:20:37.592254 6 log.go:172] 
(0xc001dac460) (1) Data frame handling I0808 11:20:37.592275 6 log.go:172] (0xc001dac460) (1) Data frame sent I0808 11:20:37.592292 6 log.go:172] (0xc00093b970) (0xc001dac460) Stream removed, broadcasting: 1 I0808 11:20:37.592320 6 log.go:172] (0xc00093b970) Go away received I0808 11:20:37.592449 6 log.go:172] (0xc00093b970) (0xc001dac460) Stream removed, broadcasting: 1 I0808 11:20:37.592475 6 log.go:172] (0xc00093b970) (0xc000e815e0) Stream removed, broadcasting: 3 I0808 11:20:37.592485 6 log.go:172] (0xc00093b970) (0xc001b37040) Stream removed, broadcasting: 5 Aug 8 11:20:37.592: INFO: Found all expected endpoints: [netserver-0] Aug 8 11:20:37.595: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.234:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-zfnl9 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 8 11:20:37.596: INFO: >>> kubeConfig: /root/.kube/config I0808 11:20:37.630796 6 log.go:172] (0xc00093be40) (0xc001dac8c0) Create stream I0808 11:20:37.630829 6 log.go:172] (0xc00093be40) (0xc001dac8c0) Stream added, broadcasting: 1 I0808 11:20:37.632655 6 log.go:172] (0xc00093be40) Reply frame received for 1 I0808 11:20:37.632696 6 log.go:172] (0xc00093be40) (0xc001dac960) Create stream I0808 11:20:37.632714 6 log.go:172] (0xc00093be40) (0xc001dac960) Stream added, broadcasting: 3 I0808 11:20:37.633773 6 log.go:172] (0xc00093be40) Reply frame received for 3 I0808 11:20:37.633813 6 log.go:172] (0xc00093be40) (0xc001dacaa0) Create stream I0808 11:20:37.633827 6 log.go:172] (0xc00093be40) (0xc001dacaa0) Stream added, broadcasting: 5 I0808 11:20:37.634565 6 log.go:172] (0xc00093be40) Reply frame received for 5 I0808 11:20:37.708904 6 log.go:172] (0xc00093be40) Data frame received for 3 I0808 11:20:37.708963 6 log.go:172] (0xc001dac960) (3) Data frame handling I0808 11:20:37.708982 6 log.go:172] (0xc001dac960) (3) Data frame sent I0808 11:20:37.709059 6 log.go:172] (0xc00093be40) Data frame received for 5 I0808 11:20:37.709085 6 log.go:172] (0xc001dacaa0) (5) Data frame handling I0808 11:20:37.709237 6 log.go:172] (0xc00093be40) Data frame received for 3 I0808 11:20:37.709262 6 log.go:172] (0xc001dac960) (3) Data frame handling I0808 11:20:37.710556 6 log.go:172] (0xc00093be40) Data frame received for 1 I0808 11:20:37.710581 6 log.go:172] (0xc001dac8c0) (1) Data frame handling I0808 11:20:37.710602 6 log.go:172] (0xc001dac8c0) (1) Data frame sent I0808 11:20:37.710734 6 log.go:172] (0xc00093be40) (0xc001dac8c0) Stream removed, broadcasting: 1 I0808 11:20:37.710801 6 log.go:172] (0xc00093be40) Go away received I0808 11:20:37.710849 6 log.go:172] (0xc00093be40) (0xc001dac8c0) Stream removed, broadcasting: 1 I0808 11:20:37.710881 6 log.go:172] (0xc00093be40) (0xc001dac960) Stream removed, broadcasting: 3 I0808 11:20:37.710898 6 log.go:172] (0xc00093be40) (0xc001dacaa0) Stream removed, broadcasting: 5 Aug 8 11:20:37.710: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:20:37.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-zfnl9" for this suite. 
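
Both probes above boil down to an HTTP GET against a netserver pod's /hostName endpoint from the host-network test pod, expecting a non-empty hostname back. A minimal Go sketch of that check, assuming the pod IP taken from the log (only reachable from inside the cluster network):

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// checkHostName mirrors the curl probe in the log: GET /hostName on a
// netserver pod and require a non-empty reply.
func checkHostName(addr string) (string, error) {
	client := &http.Client{Timeout: 15 * time.Second}
	resp, err := client.Get("http://" + addr + "/hostName")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	name := strings.TrimSpace(string(body))
	if name == "" {
		return "", fmt.Errorf("empty hostName reply from %s", addr)
	}
	return name, nil
}

func main() {
	name, err := checkHostName("10.244.2.73:8080") // pod IP taken from the log above
	fmt.Println(name, err)
}
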
Aug 8 11:21:01.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:21:01.818: INFO: namespace: e2e-tests-pod-network-test-zfnl9, resource: bindings, ignored listing per whitelist Aug 8 11:21:01.824: INFO: namespace e2e-tests-pod-network-test-zfnl9 deletion completed in 24.11017319s • [SLOW TEST:52.583 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:21:01.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 8 11:21:01.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-fz8t7' Aug 8 11:21:02.038: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 8 11:21:02.038: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Aug 8 11:21:02.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-fz8t7' Aug 8 11:21:02.235: INFO: stderr: "" Aug 8 11:21:02.235: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:21:02.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-fz8t7" for this suite. 
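
Every "Running '/usr/local/bin/kubectl ...'" line above is the framework shelling out to kubectl with an explicit kubeconfig. A small Go sketch of that pattern, reusing the exact arguments logged for the job-creation step (the kubectl path, kubeconfig path, and namespace are the ones from this run and will differ elsewhere):

package main

import (
	"fmt"
	"os/exec"
)

// runKubectl shells out to kubectl the way the log lines above show,
// returning combined stdout and stderr.
func runKubectl(args ...string) (string, error) {
	full := append([]string{"--kubeconfig=/root/.kube/config"}, args...)
	out, err := exec.Command("/usr/local/bin/kubectl", full...).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := runKubectl("run", "e2e-test-nginx-job",
		"--restart=OnFailure", "--generator=job/v1",
		"--image=docker.io/library/nginx:1.14-alpine",
		"--namespace=e2e-tests-kubectl-fz8t7")
	fmt.Println(out, err)
}
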
Aug 8 11:21:24.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:21:24.361: INFO: namespace: e2e-tests-kubectl-fz8t7, resource: bindings, ignored listing per whitelist Aug 8 11:21:24.366: INFO: namespace e2e-tests-kubectl-fz8t7 deletion completed in 22.120793029s • [SLOW TEST:22.542 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:21:24.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-4b33c81c-d969-11ea-aaa1-0242ac11000c STEP: Creating secret with name s-test-opt-upd-4b33c896-d969-11ea-aaa1-0242ac11000c STEP: Creating the pod STEP: Deleting secret s-test-opt-del-4b33c81c-d969-11ea-aaa1-0242ac11000c STEP: Updating secret s-test-opt-upd-4b33c896-d969-11ea-aaa1-0242ac11000c STEP: Creating secret with name s-test-opt-create-4b33c8cc-d969-11ea-aaa1-0242ac11000c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:21:32.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-nx9rd" for this suite. 
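
The secrets test above mounts secrets into a pod with optional set to true, which is why the volume tolerates the "s-test-opt-del-..." secret being deleted and later reflects the created and updated ones. A sketch of what such a volume looks like when built with the Go API types (requires the k8s.io/api and k8s.io/apimachinery modules; names here are placeholders, not the generated test names):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-volume-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "watcher",
				Image: "docker.io/library/nginx:1.14-alpine",
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-vol",
					MountPath: "/etc/secret-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-vol",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "s-test-opt-del-placeholder",
						Optional:   &optional, // a missing secret does not block the pod
					},
				},
			}},
		},
	}
	fmt.Println(pod.Spec.Volumes[0].VolumeSource.Secret.SecretName)
}
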
Aug 8 11:21:54.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:21:54.661: INFO: namespace: e2e-tests-secrets-nx9rd, resource: bindings, ignored listing per whitelist Aug 8 11:21:54.699: INFO: namespace e2e-tests-secrets-nx9rd deletion completed in 22.08602449s • [SLOW TEST:30.333 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:21:54.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Aug 8 11:22:01.909: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:22:02.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-lvj59" for this suite. 
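
Adoption and release in the ReplicaSet test above hinge on whether a pod's labels still match the controller's selector: relabelling the pod makes it stop matching, so the controller releases it. A small sketch of that matching step with the apimachinery label helpers (the original label value is the test's; the relabeled value is hypothetical):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	selector := labels.SelectorFromSet(labels.Set{"name": "pod-adoption-release"})

	adopted := labels.Set{"name": "pod-adoption-release"}          // matches -> adopted
	released := labels.Set{"name": "pod-adoption-release-changed"} // hypothetical new label

	fmt.Println("matches before relabel:", selector.Matches(adopted))
	fmt.Println("matches after relabel: ", selector.Matches(released))
}
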
Aug 8 11:22:25.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:22:25.048: INFO: namespace: e2e-tests-replicaset-lvj59, resource: bindings, ignored listing per whitelist Aug 8 11:22:25.123: INFO: namespace e2e-tests-replicaset-lvj59 deletion completed in 22.189966195s • [SLOW TEST:30.423 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:22:25.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-dxbb STEP: Creating a pod to test atomic-volume-subpath Aug 8 11:22:25.264: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dxbb" in namespace "e2e-tests-subpath-8xzxh" to be "success or failure" Aug 8 11:22:25.267: INFO: Pod "pod-subpath-test-configmap-dxbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.904716ms Aug 8 11:22:27.271: INFO: Pod "pod-subpath-test-configmap-dxbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007168172s Aug 8 11:22:29.285: INFO: Pod "pod-subpath-test-configmap-dxbb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021682076s Aug 8 11:22:31.289: INFO: Pod "pod-subpath-test-configmap-dxbb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02528791s Aug 8 11:22:33.293: INFO: Pod "pod-subpath-test-configmap-dxbb": Phase="Running", Reason="", readiness=false. Elapsed: 8.029651151s Aug 8 11:22:35.298: INFO: Pod "pod-subpath-test-configmap-dxbb": Phase="Running", Reason="", readiness=false. Elapsed: 10.034097512s Aug 8 11:22:37.301: INFO: Pod "pod-subpath-test-configmap-dxbb": Phase="Running", Reason="", readiness=false. Elapsed: 12.037416893s Aug 8 11:22:39.306: INFO: Pod "pod-subpath-test-configmap-dxbb": Phase="Running", Reason="", readiness=false. Elapsed: 14.04184163s Aug 8 11:22:41.310: INFO: Pod "pod-subpath-test-configmap-dxbb": Phase="Running", Reason="", readiness=false. Elapsed: 16.04625773s Aug 8 11:22:43.315: INFO: Pod "pod-subpath-test-configmap-dxbb": Phase="Running", Reason="", readiness=false. Elapsed: 18.050802636s Aug 8 11:22:45.319: INFO: Pod "pod-subpath-test-configmap-dxbb": Phase="Running", Reason="", readiness=false. Elapsed: 20.055166084s Aug 8 11:22:47.324: INFO: Pod "pod-subpath-test-configmap-dxbb": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.059842547s Aug 8 11:22:49.328: INFO: Pod "pod-subpath-test-configmap-dxbb": Phase="Running", Reason="", readiness=false. Elapsed: 24.064365454s Aug 8 11:22:51.333: INFO: Pod "pod-subpath-test-configmap-dxbb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.069114628s STEP: Saw pod success Aug 8 11:22:51.333: INFO: Pod "pod-subpath-test-configmap-dxbb" satisfied condition "success or failure" Aug 8 11:22:51.336: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-dxbb container test-container-subpath-configmap-dxbb: STEP: delete the pod Aug 8 11:22:51.373: INFO: Waiting for pod pod-subpath-test-configmap-dxbb to disappear Aug 8 11:22:51.386: INFO: Pod pod-subpath-test-configmap-dxbb no longer exists STEP: Deleting pod pod-subpath-test-configmap-dxbb Aug 8 11:22:51.386: INFO: Deleting pod "pod-subpath-test-configmap-dxbb" in namespace "e2e-tests-subpath-8xzxh" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:22:51.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-8xzxh" for this suite. Aug 8 11:22:57.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:22:57.463: INFO: namespace: e2e-tests-subpath-8xzxh, resource: bindings, ignored listing per whitelist Aug 8 11:22:57.483: INFO: namespace e2e-tests-subpath-8xzxh deletion completed in 6.091409394s • [SLOW TEST:32.360 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:22:57.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Aug 8 11:22:57.585: INFO: Pod name pod-release: Found 0 pods out of 1 Aug 8 11:23:02.590: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:23:03.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-pqnsq" for this suite. 
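
The subpath test summarized a little earlier mounts a single configmap key over the path of an existing file by combining a configMap volume with a subPath on the mount. A sketch of that volume and mount pair with the Go API types (all names here are placeholders, not the generated test names):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	volume := corev1.Volume{
		Name: "configmap-vol",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
			},
		},
	}
	mount := corev1.VolumeMount{
		Name:      "configmap-vol",
		MountPath: "/etc/existing-file.conf", // path of a file that already exists in the image
		SubPath:   "existing-file.conf",      // single key projected over that file
	}
	fmt.Println(volume.Name, mount.MountPath, mount.SubPath)
}
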
Aug 8 11:23:09.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:23:09.701: INFO: namespace: e2e-tests-replication-controller-pqnsq, resource: bindings, ignored listing per whitelist Aug 8 11:23:09.764: INFO: namespace e2e-tests-replication-controller-pqnsq deletion completed in 6.089635928s • [SLOW TEST:12.281 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:23:09.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod Aug 8 11:23:14.245: INFO: Pod pod-hostip-8a386d78-d969-11ea-aaa1-0242ac11000c has hostIP: 172.18.0.4 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:23:14.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-ng5wx" for this suite. 
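
The host-IP check above simply reads status.hostIP off the pod object once it has been scheduled. A minimal sketch pulling that field out of a trimmed-down pod JSON (the JSON fragment is hypothetical, with only the hostIP value taken from the log):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	raw := []byte(`{"status":{"hostIP":"172.18.0.4"}}`)
	var pod struct {
		Status struct {
			HostIP string `json:"hostIP"`
		} `json:"status"`
	}
	if err := json.Unmarshal(raw, &pod); err != nil {
		panic(err)
	}
	fmt.Println("hostIP:", pod.Status.HostIP) // address of the node the pod landed on
}
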
Aug 8 11:23:36.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:23:36.315: INFO: namespace: e2e-tests-pods-ng5wx, resource: bindings, ignored listing per whitelist Aug 8 11:23:36.366: INFO: namespace e2e-tests-pods-ng5wx deletion completed in 22.119356054s • [SLOW TEST:26.602 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:23:36.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Aug 8 11:23:36.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7rvb4' Aug 8 11:23:38.120: INFO: stderr: "" Aug 8 11:23:38.120: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 8 11:23:38.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7rvb4' Aug 8 11:23:38.245: INFO: stderr: "" Aug 8 11:23:38.245: INFO: stdout: "update-demo-nautilus-nkt7r update-demo-nautilus-xxxdw " Aug 8 11:23:38.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nkt7r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7rvb4' Aug 8 11:23:39.051: INFO: stderr: "" Aug 8 11:23:39.051: INFO: stdout: "" Aug 8 11:23:39.051: INFO: update-demo-nautilus-nkt7r is created but not running Aug 8 11:23:44.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7rvb4' Aug 8 11:23:44.372: INFO: stderr: "" Aug 8 11:23:44.372: INFO: stdout: "update-demo-nautilus-nkt7r update-demo-nautilus-xxxdw " Aug 8 11:23:44.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nkt7r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7rvb4' Aug 8 11:23:44.489: INFO: stderr: "" Aug 8 11:23:44.489: INFO: stdout: "true" Aug 8 11:23:44.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nkt7r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7rvb4' Aug 8 11:23:44.584: INFO: stderr: "" Aug 8 11:23:44.584: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 8 11:23:44.584: INFO: validating pod update-demo-nautilus-nkt7r Aug 8 11:23:44.589: INFO: got data: { "image": "nautilus.jpg" } Aug 8 11:23:44.589: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 8 11:23:44.589: INFO: update-demo-nautilus-nkt7r is verified up and running Aug 8 11:23:44.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xxxdw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7rvb4' Aug 8 11:23:44.695: INFO: stderr: "" Aug 8 11:23:44.695: INFO: stdout: "true" Aug 8 11:23:44.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xxxdw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7rvb4' Aug 8 11:23:44.803: INFO: stderr: "" Aug 8 11:23:44.803: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 8 11:23:44.803: INFO: validating pod update-demo-nautilus-xxxdw Aug 8 11:23:44.807: INFO: got data: { "image": "nautilus.jpg" } Aug 8 11:23:44.807: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 8 11:23:44.807: INFO: update-demo-nautilus-xxxdw is verified up and running STEP: scaling down the replication controller Aug 8 11:23:44.809: INFO: scanned /root for discovery docs: Aug 8 11:23:44.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-7rvb4' Aug 8 11:23:45.964: INFO: stderr: "" Aug 8 11:23:45.964: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 8 11:23:45.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7rvb4' Aug 8 11:23:46.077: INFO: stderr: "" Aug 8 11:23:46.077: INFO: stdout: "update-demo-nautilus-nkt7r update-demo-nautilus-xxxdw " STEP: Replicas for name=update-demo: expected=1 actual=2 Aug 8 11:23:51.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7rvb4' Aug 8 11:23:51.176: INFO: stderr: "" Aug 8 11:23:51.176: INFO: stdout: "update-demo-nautilus-nkt7r " Aug 8 11:23:51.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nkt7r -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7rvb4' Aug 8 11:23:51.264: INFO: stderr: "" Aug 8 11:23:51.264: INFO: stdout: "true" Aug 8 11:23:51.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nkt7r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7rvb4' Aug 8 11:23:51.371: INFO: stderr: "" Aug 8 11:23:51.371: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 8 11:23:51.371: INFO: validating pod update-demo-nautilus-nkt7r Aug 8 11:23:51.374: INFO: got data: { "image": "nautilus.jpg" } Aug 8 11:23:51.374: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 8 11:23:51.374: INFO: update-demo-nautilus-nkt7r is verified up and running STEP: scaling up the replication controller Aug 8 11:23:51.376: INFO: scanned /root for discovery docs: Aug 8 11:23:51.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-7rvb4' Aug 8 11:23:52.537: INFO: stderr: "" Aug 8 11:23:52.537: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 8 11:23:52.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7rvb4' Aug 8 11:23:52.640: INFO: stderr: "" Aug 8 11:23:52.640: INFO: stdout: "update-demo-nautilus-f5qbb update-demo-nautilus-nkt7r " Aug 8 11:23:52.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f5qbb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7rvb4' Aug 8 11:23:52.737: INFO: stderr: "" Aug 8 11:23:52.737: INFO: stdout: "" Aug 8 11:23:52.737: INFO: update-demo-nautilus-f5qbb is created but not running Aug 8 11:23:57.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7rvb4' Aug 8 11:23:57.856: INFO: stderr: "" Aug 8 11:23:57.856: INFO: stdout: "update-demo-nautilus-f5qbb update-demo-nautilus-nkt7r " Aug 8 11:23:57.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f5qbb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7rvb4' Aug 8 11:23:57.947: INFO: stderr: "" Aug 8 11:23:57.947: INFO: stdout: "true" Aug 8 11:23:57.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f5qbb -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7rvb4' Aug 8 11:23:58.054: INFO: stderr: "" Aug 8 11:23:58.054: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 8 11:23:58.054: INFO: validating pod update-demo-nautilus-f5qbb Aug 8 11:23:58.059: INFO: got data: { "image": "nautilus.jpg" } Aug 8 11:23:58.059: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 8 11:23:58.059: INFO: update-demo-nautilus-f5qbb is verified up and running Aug 8 11:23:58.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nkt7r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7rvb4' Aug 8 11:23:58.157: INFO: stderr: "" Aug 8 11:23:58.157: INFO: stdout: "true" Aug 8 11:23:58.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nkt7r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7rvb4' Aug 8 11:23:58.255: INFO: stderr: "" Aug 8 11:23:58.255: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 8 11:23:58.255: INFO: validating pod update-demo-nautilus-nkt7r Aug 8 11:23:58.279: INFO: got data: { "image": "nautilus.jpg" } Aug 8 11:23:58.279: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 8 11:23:58.279: INFO: update-demo-nautilus-nkt7r is verified up and running STEP: using delete to clean up resources Aug 8 11:23:58.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-7rvb4' Aug 8 11:23:58.415: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 8 11:23:58.415: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Aug 8 11:23:58.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-7rvb4' Aug 8 11:23:58.557: INFO: stderr: "No resources found.\n" Aug 8 11:23:58.557: INFO: stdout: "" Aug 8 11:23:58.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-7rvb4 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 8 11:23:58.673: INFO: stderr: "" Aug 8 11:23:58.673: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:23:58.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7rvb4" for this suite. 
Aug 8 11:24:20.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:24:20.730: INFO: namespace: e2e-tests-kubectl-7rvb4, resource: bindings, ignored listing per whitelist Aug 8 11:24:20.784: INFO: namespace e2e-tests-kubectl-7rvb4 deletion completed in 22.107896595s • [SLOW TEST:44.418 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:24:20.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-5sn4b STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 8 11:24:20.969: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 8 11:24:43.079: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.243:8080/dial?request=hostName&protocol=udp&host=10.244.2.79&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-5sn4b PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 8 11:24:43.079: INFO: >>> kubeConfig: /root/.kube/config I0808 11:24:43.118363 6 log.go:172] (0xc0020782c0) (0xc001de3360) Create stream I0808 11:24:43.118394 6 log.go:172] (0xc0020782c0) (0xc001de3360) Stream added, broadcasting: 1 I0808 11:24:43.120627 6 log.go:172] (0xc0020782c0) Reply frame received for 1 I0808 11:24:43.120655 6 log.go:172] (0xc0020782c0) (0xc00203b860) Create stream I0808 11:24:43.120666 6 log.go:172] (0xc0020782c0) (0xc00203b860) Stream added, broadcasting: 3 I0808 11:24:43.121795 6 log.go:172] (0xc0020782c0) Reply frame received for 3 I0808 11:24:43.121852 6 log.go:172] (0xc0020782c0) (0xc00203b9a0) Create stream I0808 11:24:43.121873 6 log.go:172] (0xc0020782c0) (0xc00203b9a0) Stream added, broadcasting: 5 I0808 11:24:43.122800 6 log.go:172] (0xc0020782c0) Reply frame received for 5 I0808 11:24:43.182870 6 log.go:172] (0xc0020782c0) Data frame received for 3 I0808 11:24:43.182971 6 log.go:172] (0xc00203b860) (3) Data frame handling I0808 11:24:43.183005 6 log.go:172] (0xc00203b860) (3) Data frame sent I0808 11:24:43.183879 6 log.go:172] (0xc0020782c0) Data frame received for 5 I0808 11:24:43.183916 6 log.go:172] (0xc00203b9a0) (5) Data frame handling I0808 11:24:43.184125 6 log.go:172] (0xc0020782c0) Data frame received for 3 I0808 11:24:43.184154 6 log.go:172] (0xc00203b860) (3) Data frame handling 
I0808 11:24:43.186325 6 log.go:172] (0xc0020782c0) Data frame received for 1 I0808 11:24:43.186359 6 log.go:172] (0xc001de3360) (1) Data frame handling I0808 11:24:43.186376 6 log.go:172] (0xc001de3360) (1) Data frame sent I0808 11:24:43.186390 6 log.go:172] (0xc0020782c0) (0xc001de3360) Stream removed, broadcasting: 1 I0808 11:24:43.186464 6 log.go:172] (0xc0020782c0) (0xc001de3360) Stream removed, broadcasting: 1 I0808 11:24:43.186477 6 log.go:172] (0xc0020782c0) (0xc00203b860) Stream removed, broadcasting: 3 I0808 11:24:43.186568 6 log.go:172] (0xc0020782c0) Go away received I0808 11:24:43.186614 6 log.go:172] (0xc0020782c0) (0xc00203b9a0) Stream removed, broadcasting: 5 Aug 8 11:24:43.186: INFO: Waiting for endpoints: map[] Aug 8 11:24:43.190: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.243:8080/dial?request=hostName&protocol=udp&host=10.244.1.242&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-5sn4b PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 8 11:24:43.190: INFO: >>> kubeConfig: /root/.kube/config I0808 11:24:43.218628 6 log.go:172] (0xc000015ce0) (0xc00203be00) Create stream I0808 11:24:43.218671 6 log.go:172] (0xc000015ce0) (0xc00203be00) Stream added, broadcasting: 1 I0808 11:24:43.220539 6 log.go:172] (0xc000015ce0) Reply frame received for 1 I0808 11:24:43.220602 6 log.go:172] (0xc000015ce0) (0xc00203bea0) Create stream I0808 11:24:43.220624 6 log.go:172] (0xc000015ce0) (0xc00203bea0) Stream added, broadcasting: 3 I0808 11:24:43.221902 6 log.go:172] (0xc000015ce0) Reply frame received for 3 I0808 11:24:43.221943 6 log.go:172] (0xc000015ce0) (0xc0019c7e00) Create stream I0808 11:24:43.221958 6 log.go:172] (0xc000015ce0) (0xc0019c7e00) Stream added, broadcasting: 5 I0808 11:24:43.222958 6 log.go:172] (0xc000015ce0) Reply frame received for 5 I0808 11:24:43.285153 6 log.go:172] (0xc000015ce0) Data frame received for 3 I0808 11:24:43.285175 6 log.go:172] (0xc00203bea0) (3) Data frame handling I0808 11:24:43.285185 6 log.go:172] (0xc00203bea0) (3) Data frame sent I0808 11:24:43.285685 6 log.go:172] (0xc000015ce0) Data frame received for 5 I0808 11:24:43.285743 6 log.go:172] (0xc0019c7e00) (5) Data frame handling I0808 11:24:43.285836 6 log.go:172] (0xc000015ce0) Data frame received for 3 I0808 11:24:43.285858 6 log.go:172] (0xc00203bea0) (3) Data frame handling I0808 11:24:43.287575 6 log.go:172] (0xc000015ce0) Data frame received for 1 I0808 11:24:43.287616 6 log.go:172] (0xc00203be00) (1) Data frame handling I0808 11:24:43.287652 6 log.go:172] (0xc00203be00) (1) Data frame sent I0808 11:24:43.287670 6 log.go:172] (0xc000015ce0) (0xc00203be00) Stream removed, broadcasting: 1 I0808 11:24:43.287686 6 log.go:172] (0xc000015ce0) Go away received I0808 11:24:43.287905 6 log.go:172] (0xc000015ce0) (0xc00203be00) Stream removed, broadcasting: 1 I0808 11:24:43.287939 6 log.go:172] (0xc000015ce0) (0xc00203bea0) Stream removed, broadcasting: 3 I0808 11:24:43.287948 6 log.go:172] (0xc000015ce0) (0xc0019c7e00) Stream removed, broadcasting: 5 Aug 8 11:24:43.287: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:24:43.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-5sn4b" for this suite. 
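
Each UDP probe above asks a netserver pod's /dial endpoint to relay a hostName request to another pod over UDP, so the interesting part is the query string. A small sketch rebuilding that probe URL with net/url (IPs and ports copied from the first probe in the log; Encode sorts the parameters, so the order differs from the logged curl command but the request is equivalent):

package main

import (
	"fmt"
	"net/url"
)

func main() {
	u := url.URL{Scheme: "http", Host: "10.244.1.243:8080", Path: "/dial"}
	q := url.Values{}
	q.Set("request", "hostName") // what the target should answer with
	q.Set("protocol", "udp")     // relay over UDP rather than HTTP
	q.Set("host", "10.244.2.79") // target pod IP
	q.Set("port", "8081")
	q.Set("tries", "1")
	u.RawQuery = q.Encode()
	fmt.Println(u.String())
}
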
Aug 8 11:25:07.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:25:07.324: INFO: namespace: e2e-tests-pod-network-test-5sn4b, resource: bindings, ignored listing per whitelist Aug 8 11:25:07.377: INFO: namespace e2e-tests-pod-network-test-5sn4b deletion completed in 24.086168664s • [SLOW TEST:46.593 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:25:07.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0808 11:25:17.471266 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 8 11:25:17.471: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:25:17.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-cbvr5" for this suite. 
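
"Not orphaning" in the garbage-collector test above means the ReplicationController is deleted with a propagation policy that lets the garbage collector remove its pods as well, rather than leaving them behind. The log does not show the exact delete call, so this is only a sketch of how such a policy is expressed with the apimachinery types:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Background (or Foreground) propagation deletes dependents; Orphan would
	// leave the pods behind, as in the "orphaning" variants of this test.
	policy := metav1.DeletePropagationBackground
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}
	fmt.Println("delete with propagationPolicy:", *opts.PropagationPolicy)
}
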
Aug 8 11:25:23.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:25:23.549: INFO: namespace: e2e-tests-gc-cbvr5, resource: bindings, ignored listing per whitelist Aug 8 11:25:23.578: INFO: namespace e2e-tests-gc-cbvr5 deletion completed in 6.10333185s • [SLOW TEST:16.200 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:25:23.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 8 11:25:49.689: INFO: Container started at 2020-08-08 11:25:26 +0000 UTC, pod became ready at 2020-08-08 11:25:47 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:25:49.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-nfqk9" for this suite. 
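
The readiness-probe assertion above compares when the container started with when the pod turned Ready and expects the gap to be at least the probe's initial delay. A tiny sketch recomputing that gap from the two timestamps printed in the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST" // matches the log's timestamp format
	started, err := time.Parse(layout, "2020-08-08 11:25:26 +0000 UTC")
	if err != nil {
		panic(err)
	}
	ready, err := time.Parse(layout, "2020-08-08 11:25:47 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println("became ready after:", ready.Sub(started)) // 21s, i.e. past the initial delay
}
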
Aug 8 11:26:13.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:26:13.921: INFO: namespace: e2e-tests-container-probe-nfqk9, resource: bindings, ignored listing per whitelist Aug 8 11:26:13.976: INFO: namespace e2e-tests-container-probe-nfqk9 deletion completed in 24.283259434s • [SLOW TEST:50.398 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:26:13.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-f7d55a71-d969-11ea-aaa1-0242ac11000c Aug 8 11:26:14.117: INFO: Pod name my-hostname-basic-f7d55a71-d969-11ea-aaa1-0242ac11000c: Found 0 pods out of 1 Aug 8 11:26:19.122: INFO: Pod name my-hostname-basic-f7d55a71-d969-11ea-aaa1-0242ac11000c: Found 1 pods out of 1 Aug 8 11:26:19.122: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-f7d55a71-d969-11ea-aaa1-0242ac11000c" are running Aug 8 11:26:19.124: INFO: Pod "my-hostname-basic-f7d55a71-d969-11ea-aaa1-0242ac11000c-mph5l" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-08 11:26:14 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-08 11:26:18 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-08 11:26:18 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-08 11:26:14 +0000 UTC Reason: Message:}]) Aug 8 11:26:19.125: INFO: Trying to dial the pod Aug 8 11:26:24.135: INFO: Controller my-hostname-basic-f7d55a71-d969-11ea-aaa1-0242ac11000c: Got expected result from replica 1 [my-hostname-basic-f7d55a71-d969-11ea-aaa1-0242ac11000c-mph5l]: "my-hostname-basic-f7d55a71-d969-11ea-aaa1-0242ac11000c-mph5l", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:26:24.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-n2b6l" for this suite. 
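
The "is running (conditions: ...)" check above looks for a Ready condition with status True on each replica before dialing it. A small sketch of that condition check using the core API types (requires the k8s.io/api module; this is not the framework's own helper):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod has a Ready condition with status True,
// the same signal the per-replica check in the log is looking at.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodInitialized, Status: corev1.ConditionTrue},
		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
	}}}
	fmt.Println("ready:", isPodReady(pod))
}
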
Aug 8 11:26:32.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:26:32.165: INFO: namespace: e2e-tests-replication-controller-n2b6l, resource: bindings, ignored listing per whitelist Aug 8 11:26:32.288: INFO: namespace e2e-tests-replication-controller-n2b6l deletion completed in 8.150419121s • [SLOW TEST:18.312 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:26:32.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-vgzjg.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-vgzjg.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-vgzjg.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-vgzjg.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-vgzjg.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-vgzjg.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 8 11:26:38.446: INFO: DNS probes using e2e-tests-dns-vgzjg/dns-test-02b83751-d96a-11ea-aaa1-0242ac11000c succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:26:38.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-vgzjg" for this suite. 
Aug 8 11:26:44.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:26:44.572: INFO: namespace: e2e-tests-dns-vgzjg, resource: bindings, ignored listing per whitelist Aug 8 11:26:44.591: INFO: namespace e2e-tests-dns-vgzjg deletion completed in 6.08211983s • [SLOW TEST:12.302 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:26:44.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 8 11:26:44.904: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0a2f0a6a-d96a-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-projected-d6kq2" to be "success or failure" Aug 8 11:26:44.924: INFO: Pod "downwardapi-volume-0a2f0a6a-d96a-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 19.984024ms Aug 8 11:26:46.941: INFO: Pod "downwardapi-volume-0a2f0a6a-d96a-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036258449s Aug 8 11:26:48.989: INFO: Pod "downwardapi-volume-0a2f0a6a-d96a-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.084449881s STEP: Saw pod success Aug 8 11:26:48.989: INFO: Pod "downwardapi-volume-0a2f0a6a-d96a-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:26:49.053: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-0a2f0a6a-d96a-11ea-aaa1-0242ac11000c container client-container: STEP: delete the pod Aug 8 11:26:49.073: INFO: Waiting for pod downwardapi-volume-0a2f0a6a-d96a-11ea-aaa1-0242ac11000c to disappear Aug 8 11:26:49.089: INFO: Pod downwardapi-volume-0a2f0a6a-d96a-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:26:49.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-d6kq2" for this suite. 
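A minimal sketch of a pod that projects the pod name into a volume the way this test's downwardapi-volume pod does; the pod name, image, and mount path are illustrative, while client-container matches the container name in the log:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                      # assumed image; the e2e test uses its own test image
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF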
Aug 8 11:26:55.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:26:55.199: INFO: namespace: e2e-tests-projected-d6kq2, resource: bindings, ignored listing per whitelist Aug 8 11:26:55.206: INFO: namespace e2e-tests-projected-d6kq2 deletion completed in 6.113955491s • [SLOW TEST:10.615 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:26:55.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-65xsj Aug 8 11:27:01.407: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-65xsj STEP: checking the pod's current state and verifying that restartCount is present Aug 8 11:27:01.411: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:31:03.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-65xsj" for this suite. 
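The liveness-http pod pairs an HTTP liveness probe with a server whose health endpoint keeps succeeding, so the restart count the test samples over roughly four minutes stays at 0. A sketch of that shape, using nginx as a stand-in for the test's own /healthz server (names, image, and port are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: nginx:1.14-alpine            # stand-in; the test image serves /healthz instead of /
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 5
EOF
# while the probe keeps succeeding, this should keep printing 0:
kubectl get pod liveness-http-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'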
Aug 8 11:31:09.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:31:09.170: INFO: namespace: e2e-tests-container-probe-65xsj, resource: bindings, ignored listing per whitelist Aug 8 11:31:09.228: INFO: namespace e2e-tests-container-probe-65xsj deletion completed in 6.102977635s • [SLOW TEST:254.022 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:31:09.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Aug 8 11:31:13.850: INFO: Successfully updated pod "annotationupdatea7cb5fac-d96a-11ea-aaa1-0242ac11000c" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:31:17.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-67f5h" for this suite. 
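This test mounts the pod's annotations through a projected downwardAPI volume (same pattern as the podname sketch earlier, but with fieldPath: metadata.annotations written to an annotations file), then changes an annotation and waits for the kubelet to refresh the file. A hand-run equivalent, with a hypothetical pod name and mount path:

# assumes a running pod that projects metadata.annotations to /etc/podinfo/annotations
kubectl annotate pod annotationupdate-demo --overwrite demo-key=updated-value
kubectl exec annotationupdate-demo -- cat /etc/podinfo/annotations   # eventually shows the new value without a restart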
Aug 8 11:31:39.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:31:39.998: INFO: namespace: e2e-tests-projected-67f5h, resource: bindings, ignored listing per whitelist Aug 8 11:31:40.029: INFO: namespace e2e-tests-projected-67f5h deletion completed in 22.126223469s • [SLOW TEST:30.800 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:31:40.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-rcr6m [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Aug 8 11:31:40.143: INFO: Found 0 stateful pods, waiting for 3 Aug 8 11:31:50.147: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 8 11:31:50.147: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 8 11:31:50.147: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Aug 8 11:32:00.147: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 8 11:32:00.147: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 8 11:32:00.147: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Aug 8 11:32:00.174: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Aug 8 11:32:10.310: INFO: Updating stateful set ss2 Aug 8 11:32:10.314: INFO: Waiting for Pod e2e-tests-statefulset-rcr6m/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Aug 8 11:32:21.101: INFO: Found 2 stateful pods, waiting for 3 Aug 8 11:32:31.106: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 8 11:32:31.106: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 8
11:32:31.106: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Aug 8 11:32:31.130: INFO: Updating stateful set ss2 Aug 8 11:32:31.142: INFO: Waiting for Pod e2e-tests-statefulset-rcr6m/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Aug 8 11:32:41.168: INFO: Updating stateful set ss2 Aug 8 11:32:41.177: INFO: Waiting for StatefulSet e2e-tests-statefulset-rcr6m/ss2 to complete update Aug 8 11:32:41.177: INFO: Waiting for Pod e2e-tests-statefulset-rcr6m/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Aug 8 11:32:51.186: INFO: Deleting all statefulset in ns e2e-tests-statefulset-rcr6m Aug 8 11:32:51.189: INFO: Scaling statefulset ss2 to 0 Aug 8 11:33:11.210: INFO: Waiting for statefulset status.replicas updated to 0 Aug 8 11:33:11.214: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:33:11.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-rcr6m" for this suite. Aug 8 11:33:17.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:33:17.284: INFO: namespace: e2e-tests-statefulset-rcr6m, resource: bindings, ignored listing per whitelist Aug 8 11:33:17.323: INFO: namespace e2e-tests-statefulset-rcr6m deletion completed in 6.088158783s • [SLOW TEST:97.294 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:33:17.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Aug 8 11:33:17.484: INFO: Waiting up to 5m0s for pod "pod-f4303350-d96a-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-emptydir-jhl6v" to be "success or failure" Aug 8 11:33:17.500: INFO: Pod "pod-f4303350-d96a-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.354542ms Aug 8 11:33:19.504: INFO: Pod "pod-f4303350-d96a-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.019959072s Aug 8 11:33:21.508: INFO: Pod "pod-f4303350-d96a-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024389692s STEP: Saw pod success Aug 8 11:33:21.508: INFO: Pod "pod-f4303350-d96a-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:33:21.512: INFO: Trying to get logs from node hunter-worker pod pod-f4303350-d96a-11ea-aaa1-0242ac11000c container test-container: STEP: delete the pod Aug 8 11:33:21.545: INFO: Waiting for pod pod-f4303350-d96a-11ea-aaa1-0242ac11000c to disappear Aug 8 11:33:21.552: INFO: Pod pod-f4303350-d96a-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:33:21.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-jhl6v" for this suite. Aug 8 11:33:27.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:33:27.638: INFO: namespace: e2e-tests-emptydir-jhl6v, resource: bindings, ignored listing per whitelist Aug 8 11:33:27.639: INFO: namespace e2e-tests-emptydir-jhl6v deletion completed in 6.083103206s • [SLOW TEST:10.315 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:33:27.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 8 11:33:27.747: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fa4ca17f-d96a-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-downward-api-mbt25" to be "success or failure" Aug 8 11:33:27.751: INFO: Pod "downwardapi-volume-fa4ca17f-d96a-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.464294ms Aug 8 11:33:29.755: INFO: Pod "downwardapi-volume-fa4ca17f-d96a-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007449811s Aug 8 11:33:31.759: INFO: Pod "downwardapi-volume-fa4ca17f-d96a-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011191421s STEP: Saw pod success Aug 8 11:33:31.759: INFO: Pod "downwardapi-volume-fa4ca17f-d96a-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:33:31.761: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-fa4ca17f-d96a-11ea-aaa1-0242ac11000c container client-container: STEP: delete the pod Aug 8 11:33:31.803: INFO: Waiting for pod downwardapi-volume-fa4ca17f-d96a-11ea-aaa1-0242ac11000c to disappear Aug 8 11:33:31.810: INFO: Pod downwardapi-volume-fa4ca17f-d96a-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:33:31.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-mbt25" for this suite. Aug 8 11:33:37.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:33:37.838: INFO: namespace: e2e-tests-downward-api-mbt25, resource: bindings, ignored listing per whitelist Aug 8 11:33:37.961: INFO: namespace e2e-tests-downward-api-mbt25 deletion completed in 6.147036518s • [SLOW TEST:10.322 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:33:37.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-00808845-d96b-11ea-aaa1-0242ac11000c STEP: Creating a pod to test consume configMaps Aug 8 11:33:38.191: INFO: Waiting up to 5m0s for pod "pod-configmaps-0083110e-d96b-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-configmap-tnlvf" to be "success or failure" Aug 8 11:33:38.194: INFO: Pod "pod-configmaps-0083110e-d96b-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.047321ms Aug 8 11:33:40.198: INFO: Pod "pod-configmaps-0083110e-d96b-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007110414s Aug 8 11:33:42.203: INFO: Pod "pod-configmaps-0083110e-d96b-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011345946s STEP: Saw pod success Aug 8 11:33:42.203: INFO: Pod "pod-configmaps-0083110e-d96b-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:33:42.206: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-0083110e-d96b-11ea-aaa1-0242ac11000c container configmap-volume-test: STEP: delete the pod Aug 8 11:33:42.416: INFO: Waiting for pod pod-configmaps-0083110e-d96b-11ea-aaa1-0242ac11000c to disappear Aug 8 11:33:42.435: INFO: Pod pod-configmaps-0083110e-d96b-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:33:42.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-tnlvf" for this suite. Aug 8 11:33:48.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:33:48.494: INFO: namespace: e2e-tests-configmap-tnlvf, resource: bindings, ignored listing per whitelist Aug 8 11:33:48.544: INFO: namespace e2e-tests-configmap-tnlvf deletion completed in 6.10476617s • [SLOW TEST:10.582 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:33:48.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Aug 8 11:33:48.681: INFO: Waiting up to 5m0s for pod "pod-06c7aa02-d96b-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-emptydir-7xrwf" to be "success or failure" Aug 8 11:33:48.685: INFO: Pod "pod-06c7aa02-d96b-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.41697ms Aug 8 11:33:50.755: INFO: Pod "pod-06c7aa02-d96b-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073943896s Aug 8 11:33:52.759: INFO: Pod "pod-06c7aa02-d96b-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.077901872s STEP: Saw pod success Aug 8 11:33:52.759: INFO: Pod "pod-06c7aa02-d96b-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:33:52.762: INFO: Trying to get logs from node hunter-worker2 pod pod-06c7aa02-d96b-11ea-aaa1-0242ac11000c container test-container: STEP: delete the pod Aug 8 11:33:52.782: INFO: Waiting for pod pod-06c7aa02-d96b-11ea-aaa1-0242ac11000c to disappear Aug 8 11:33:52.787: INFO: Pod pod-06c7aa02-d96b-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:33:52.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-7xrwf" for this suite. Aug 8 11:33:58.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:33:58.859: INFO: namespace: e2e-tests-emptydir-7xrwf, resource: bindings, ignored listing per whitelist Aug 8 11:33:58.879: INFO: namespace e2e-tests-emptydir-7xrwf deletion completed in 6.088961128s • [SLOW TEST:10.335 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:33:58.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-0cebec33-d96b-11ea-aaa1-0242ac11000c STEP: Creating a pod to test consume configMaps Aug 8 11:33:59.024: INFO: Waiting up to 5m0s for pod "pod-configmaps-0cedc372-d96b-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-configmap-jwlbr" to be "success or failure" Aug 8 11:33:59.028: INFO: Pod "pod-configmaps-0cedc372-d96b-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.374972ms Aug 8 11:34:01.091: INFO: Pod "pod-configmaps-0cedc372-d96b-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066570607s Aug 8 11:34:03.097: INFO: Pod "pod-configmaps-0cedc372-d96b-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.072717978s STEP: Saw pod success Aug 8 11:34:03.097: INFO: Pod "pod-configmaps-0cedc372-d96b-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:34:03.100: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-0cedc372-d96b-11ea-aaa1-0242ac11000c container configmap-volume-test: STEP: delete the pod Aug 8 11:34:03.181: INFO: Waiting for pod pod-configmaps-0cedc372-d96b-11ea-aaa1-0242ac11000c to disappear Aug 8 11:34:03.184: INFO: Pod pod-configmaps-0cedc372-d96b-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:34:03.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-jwlbr" for this suite. Aug 8 11:34:09.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:34:09.264: INFO: namespace: e2e-tests-configmap-jwlbr, resource: bindings, ignored listing per whitelist Aug 8 11:34:09.301: INFO: namespace e2e-tests-configmap-jwlbr deletion completed in 6.113673589s • [SLOW TEST:10.422 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:34:09.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 8 11:34:09.424: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Aug 8 11:34:14.429: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 8 11:34:14.429: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Aug 8 11:34:14.474: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-4vw5x,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-4vw5x/deployments/test-cleanup-deployment,UID:16232c2f-d96b-11ea-b2c9-0242ac120008,ResourceVersion:5163865,Generation:1,CreationTimestamp:2020-08-08 11:34:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Aug 8 11:34:14.482: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. 
Aug 8 11:34:14.482: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Aug 8 11:34:14.482: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-4vw5x,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-4vw5x/replicasets/test-cleanup-controller,UID:13248054-d96b-11ea-b2c9-0242ac120008,ResourceVersion:5163866,Generation:1,CreationTimestamp:2020-08-08 11:34:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 16232c2f-d96b-11ea-b2c9-0242ac120008 0xc001a5aa27 0xc001a5aa28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Aug 8 11:34:14.491: INFO: Pod "test-cleanup-controller-5z8pj" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-5z8pj,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-4vw5x,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4vw5x/pods/test-cleanup-controller-5z8pj,UID:13270696-d96b-11ea-b2c9-0242ac120008,ResourceVersion:5163858,Generation:0,CreationTimestamp:2020-08-08 11:34:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 13248054-d96b-11ea-b2c9-0242ac120008 0xc00144da67 0xc00144da68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jkv57 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jkv57,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} 
[{default-token-jkv57 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00144dae0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00144db00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:34:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:34:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:34:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:34:09 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.90,StartTime:2020-08-08 11:34:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-08 11:34:12 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://2504470ebef6e3d6a1029584b3125f373c886e0c654c5dc4efe5849e2449b99b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:34:14.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-4vw5x" for this suite. 
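The dump above shows the new Deployment adopting the pre-existing test-cleanup-controller ReplicaSet, with RevisionHistoryLimit set to 0, which is what makes old ReplicaSets eligible for deletion once the rollout completes. A compact sketch reconstructed from that dump (reconstructed by hand, not the exact manifest the test submitted):

kubectl create -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 0          # keep no old ReplicaSets around after a rollout
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF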
Aug 8 11:34:20.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:34:20.745: INFO: namespace: e2e-tests-deployment-4vw5x, resource: bindings, ignored listing per whitelist Aug 8 11:34:20.753: INFO: namespace e2e-tests-deployment-4vw5x deletion completed in 6.206354153s • [SLOW TEST:11.452 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:34:20.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-19fd59a5-d96b-11ea-aaa1-0242ac11000c STEP: Creating the pod STEP: Updating configmap configmap-test-upd-19fd59a5-d96b-11ea-aaa1-0242ac11000c STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:34:27.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-qq9cq" for this suite. 
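The update test relies on the kubelet's periodic sync of configMap volumes: editing the ConfigMap eventually rewrites the mounted file inside the running pod without a restart. A rough manual equivalent, assuming a pod (here called configmap-upd-demo, a hypothetical name) that mounts the ConfigMap at /etc/config:

kubectl create configmap configmap-test-upd-demo --from-literal=data-1=value-1
# change the value in place; the kubelet's volume sync refreshes the file after a short delay
kubectl create configmap configmap-test-upd-demo --from-literal=data-1=value-2 --dry-run -o yaml | kubectl replace -f -
kubectl exec configmap-upd-demo -- cat /etc/config/data-1     # eventually prints value-2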
Aug 8 11:34:49.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:34:49.124: INFO: namespace: e2e-tests-configmap-qq9cq, resource: bindings, ignored listing per whitelist Aug 8 11:34:49.155: INFO: namespace e2e-tests-configmap-qq9cq deletion completed in 22.08579446s • [SLOW TEST:28.401 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:34:49.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-crq58 Aug 8 11:34:54.577: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-crq58 STEP: checking the pod's current state and verifying that restartCount is present Aug 8 11:34:54.579: INFO: Initial restart count of pod liveness-exec is 0 Aug 8 11:35:43.102: INFO: Restart count of pod e2e-tests-container-probe-crq58/liveness-exec is now 1 (48.522409681s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:35:43.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-crq58" for this suite. 
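Here the probe is an exec command, and the container deletes /tmp/health after a delay, so the probe starts failing and the kubelet restarts the container; that is the restart-count bump the log records after about 48 seconds. A sketch of that shape (image and timings are illustrative, in the style of the standard liveness example):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: liveness
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -rf /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# once /tmp/health is gone the probe fails and restartCount increments:
kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'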
Aug 8 11:35:49.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:35:49.267: INFO: namespace: e2e-tests-container-probe-crq58, resource: bindings, ignored listing per whitelist Aug 8 11:35:49.283: INFO: namespace e2e-tests-container-probe-crq58 deletion completed in 6.160389086s • [SLOW TEST:60.128 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:35:49.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:35:55.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-5vdtv" for this suite. 
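The Kubelet test just runs a one-shot busybox command and checks that its stdout ends up in the container log. A hand-run equivalent (the pod name and echoed text are illustrative):

kubectl run busybox-logs-demo --restart=Never --image=busybox -- sh -c 'echo "Hello from the busybox pod"'
kubectl logs busybox-logs-demo          # should print the echoed line once the pod has completed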
Aug 8 11:36:39.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:36:39.550: INFO: namespace: e2e-tests-kubelet-test-5vdtv, resource: bindings, ignored listing per whitelist Aug 8 11:36:39.587: INFO: namespace e2e-tests-kubelet-test-5vdtv deletion completed in 44.083136038s • [SLOW TEST:50.303 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:36:39.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-6cb87bbf-d96b-11ea-aaa1-0242ac11000c STEP: Creating a pod to test consume configMaps Aug 8 11:36:39.736: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6cb9e73f-d96b-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-projected-cv6xk" to be "success or failure" Aug 8 11:36:39.755: INFO: Pod "pod-projected-configmaps-6cb9e73f-d96b-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 18.994245ms Aug 8 11:36:41.764: INFO: Pod "pod-projected-configmaps-6cb9e73f-d96b-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027335042s Aug 8 11:36:43.768: INFO: Pod "pod-projected-configmaps-6cb9e73f-d96b-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03124429s STEP: Saw pod success Aug 8 11:36:43.768: INFO: Pod "pod-projected-configmaps-6cb9e73f-d96b-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:36:43.771: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-6cb9e73f-d96b-11ea-aaa1-0242ac11000c container projected-configmap-volume-test: STEP: delete the pod Aug 8 11:36:43.966: INFO: Waiting for pod pod-projected-configmaps-6cb9e73f-d96b-11ea-aaa1-0242ac11000c to disappear Aug 8 11:36:43.970: INFO: Pod pod-projected-configmaps-6cb9e73f-d96b-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:36:43.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-cv6xk" for this suite. 
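Same pattern as the earlier configMap volume tests, but consumed through a projected volume source. A compressed sketch (ConfigMap name, pod name, data, and mount path are illustrative; the container name matches the log):

kubectl create configmap projected-cm-demo --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
EOF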
Aug 8 11:36:50.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:36:50.082: INFO: namespace: e2e-tests-projected-cv6xk, resource: bindings, ignored listing per whitelist Aug 8 11:36:50.138: INFO: namespace e2e-tests-projected-cv6xk deletion completed in 6.164876583s • [SLOW TEST:10.551 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:36:50.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Aug 8 11:36:50.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-f8ptj' Aug 8 11:36:53.015: INFO: stderr: "" Aug 8 11:36:53.015: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Aug 8 11:36:54.019: INFO: Selector matched 1 pods for map[app:redis] Aug 8 11:36:54.020: INFO: Found 0 / 1 Aug 8 11:36:55.112: INFO: Selector matched 1 pods for map[app:redis] Aug 8 11:36:55.112: INFO: Found 0 / 1 Aug 8 11:36:56.019: INFO: Selector matched 1 pods for map[app:redis] Aug 8 11:36:56.019: INFO: Found 0 / 1 Aug 8 11:36:57.019: INFO: Selector matched 1 pods for map[app:redis] Aug 8 11:36:57.019: INFO: Found 1 / 1 Aug 8 11:36:57.019: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Aug 8 11:36:57.022: INFO: Selector matched 1 pods for map[app:redis] Aug 8 11:36:57.022: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Aug 8 11:36:57.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-tfdp7 --namespace=e2e-tests-kubectl-f8ptj -p {"metadata":{"annotations":{"x":"y"}}}' Aug 8 11:36:57.133: INFO: stderr: "" Aug 8 11:36:57.133: INFO: stdout: "pod/redis-master-tfdp7 patched\n" STEP: checking annotations Aug 8 11:36:57.135: INFO: Selector matched 1 pods for map[app:redis] Aug 8 11:36:57.135: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:36:57.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-f8ptj" for this suite. 
Aug 8 11:37:19.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:37:19.198: INFO: namespace: e2e-tests-kubectl-f8ptj, resource: bindings, ignored listing per whitelist Aug 8 11:37:19.253: INFO: namespace e2e-tests-kubectl-f8ptj deletion completed in 22.114934385s • [SLOW TEST:29.115 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:37:19.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod Aug 8 11:37:19.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-s78p5' Aug 8 11:37:19.658: INFO: stderr: "" Aug 8 11:37:19.658: INFO: stdout: "pod/pause created\n" Aug 8 11:37:19.658: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Aug 8 11:37:19.658: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-s78p5" to be "running and ready" Aug 8 11:37:19.660: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.268187ms Aug 8 11:37:21.664: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006256262s Aug 8 11:37:23.668: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.010305891s Aug 8 11:37:23.668: INFO: Pod "pause" satisfied condition "running and ready" Aug 8 11:37:23.668: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod Aug 8 11:37:23.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-s78p5' Aug 8 11:37:23.775: INFO: stderr: "" Aug 8 11:37:23.775: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Aug 8 11:37:23.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-s78p5' Aug 8 11:37:23.879: INFO: stderr: "" Aug 8 11:37:23.879: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Aug 8 11:37:23.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-s78p5' Aug 8 11:37:23.982: INFO: stderr: "" Aug 8 11:37:23.982: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Aug 8 11:37:23.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-s78p5' Aug 8 11:37:24.129: INFO: stderr: "" Aug 8 11:37:24.129: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources Aug 8 11:37:24.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-s78p5' Aug 8 11:37:24.265: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 8 11:37:24.265: INFO: stdout: "pod \"pause\" force deleted\n" Aug 8 11:37:24.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-s78p5' Aug 8 11:37:24.555: INFO: stderr: "No resources found.\n" Aug 8 11:37:24.555: INFO: stdout: "" Aug 8 11:37:24.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-s78p5 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 8 11:37:24.666: INFO: stderr: "" Aug 8 11:37:24.666: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:37:24.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-s78p5" for this suite. 
Aug 8 11:37:30.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:37:30.894: INFO: namespace: e2e-tests-kubectl-s78p5, resource: bindings, ignored listing per whitelist Aug 8 11:37:30.912: INFO: namespace e2e-tests-kubectl-s78p5 deletion completed in 6.242447612s • [SLOW TEST:11.657 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:37:30.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Aug 8 11:37:31.080: INFO: Waiting up to 5m0s for pod "downward-api-8b56e1f3-d96b-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-downward-api-kj2q6" to be "success or failure" Aug 8 11:37:31.083: INFO: Pod "downward-api-8b56e1f3-d96b-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.969638ms Aug 8 11:37:33.230: INFO: Pod "downward-api-8b56e1f3-d96b-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150583266s Aug 8 11:37:35.316: INFO: Pod "downward-api-8b56e1f3-d96b-11ea-aaa1-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 4.236091445s Aug 8 11:37:37.319: INFO: Pod "downward-api-8b56e1f3-d96b-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.239321408s STEP: Saw pod success Aug 8 11:37:37.319: INFO: Pod "downward-api-8b56e1f3-d96b-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:37:37.321: INFO: Trying to get logs from node hunter-worker2 pod downward-api-8b56e1f3-d96b-11ea-aaa1-0242ac11000c container dapi-container: STEP: delete the pod Aug 8 11:37:37.369: INFO: Waiting for pod downward-api-8b56e1f3-d96b-11ea-aaa1-0242ac11000c to disappear Aug 8 11:37:37.373: INFO: Pod downward-api-8b56e1f3-d96b-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:37:37.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-kj2q6" for this suite. 
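The downward-api test above injects the pod's own UID into the container environment. A minimal sketch of that mechanism, assuming a hypothetical pod name and busybox image (this is not the suite's actual fixture):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-uid-demo          # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "echo POD_UID=$POD_UID"]
      env:
      - name: POD_UID
        valueFrom:
          fieldRef:
            fieldPath: metadata.uid  # the pod's UID, injected by the downward API
  EOF
  kubectl logs downward-uid-demo     # prints POD_UID=<uid> once the container has run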
Aug 8 11:37:43.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:37:43.432: INFO: namespace: e2e-tests-downward-api-kj2q6, resource: bindings, ignored listing per whitelist Aug 8 11:37:43.455: INFO: namespace e2e-tests-downward-api-kj2q6 deletion completed in 6.079670628s • [SLOW TEST:12.543 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:37:43.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Aug 8 11:37:43.579: INFO: Waiting up to 5m0s for pod "downward-api-92cb2535-d96b-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-downward-api-v5nx6" to be "success or failure" Aug 8 11:37:43.615: INFO: Pod "downward-api-92cb2535-d96b-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 36.694914ms Aug 8 11:37:45.620: INFO: Pod "downward-api-92cb2535-d96b-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040800071s Aug 8 11:37:47.624: INFO: Pod "downward-api-92cb2535-d96b-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044948916s STEP: Saw pod success Aug 8 11:37:47.624: INFO: Pod "downward-api-92cb2535-d96b-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:37:47.627: INFO: Trying to get logs from node hunter-worker2 pod downward-api-92cb2535-d96b-11ea-aaa1-0242ac11000c container dapi-container: STEP: delete the pod Aug 8 11:37:47.650: INFO: Waiting for pod downward-api-92cb2535-d96b-11ea-aaa1-0242ac11000c to disappear Aug 8 11:37:47.819: INFO: Pod downward-api-92cb2535-d96b-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:37:47.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-v5nx6" for this suite. 
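This test relies on resourceFieldRef falling back to the node's allocatable resources when the container sets no limits. A minimal sketch under that assumption (names and image are hypothetical):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-limits-demo       # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "echo cpu=$CPU_LIMIT mem=$MEMORY_LIMIT"]
      # no resources.limits are set, so both values below default to node allocatable
      env:
      - name: CPU_LIMIT
        valueFrom:
          resourceFieldRef:
            resource: limits.cpu
      - name: MEMORY_LIMIT
        valueFrom:
          resourceFieldRef:
            resource: limits.memory
  EOF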
Aug 8 11:37:53.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:37:53.884: INFO: namespace: e2e-tests-downward-api-v5nx6, resource: bindings, ignored listing per whitelist Aug 8 11:37:53.939: INFO: namespace e2e-tests-downward-api-v5nx6 deletion completed in 6.115832758s • [SLOW TEST:10.483 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:37:53.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Aug 8 11:37:58.595: INFO: Successfully updated pod "labelsupdate9906fa80-d96b-11ea-aaa1-0242ac11000c" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:38:00.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-z5mgn" for this suite. 
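The projected downwardAPI case works by mounting the pod's labels as a file and relabeling the pod, after which the kubelet rewrites the file. A minimal sketch, with a hypothetical pod name and image:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: labels-demo                # hypothetical name
    labels:
      stage: before
  spec:
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: labels
              fieldRef:
                fieldPath: metadata.labels
  EOF
  kubectl label pod labels-demo stage=after --overwrite   # the kubelet refreshes /etc/podinfo/labels shortly after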
Aug 8 11:38:22.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:38:22.806: INFO: namespace: e2e-tests-projected-z5mgn, resource: bindings, ignored listing per whitelist Aug 8 11:38:22.850: INFO: namespace e2e-tests-projected-z5mgn deletion completed in 22.233695563s • [SLOW TEST:28.910 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:38:22.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 8 11:38:23.582: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"aa88fd68-d96b-11ea-b2c9-0242ac120008", Controller:(*bool)(0xc001937d82), BlockOwnerDeletion:(*bool)(0xc001937d83)}} Aug 8 11:38:23.595: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"aa7ff5d6-d96b-11ea-b2c9-0242ac120008", Controller:(*bool)(0xc000a852f2), BlockOwnerDeletion:(*bool)(0xc000a852f3)}} Aug 8 11:38:23.618: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"aa806eac-d96b-11ea-b2c9-0242ac120008", Controller:(*bool)(0xc0018366d2), BlockOwnerDeletion:(*bool)(0xc0018366d3)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:38:28.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-bj2wm" for this suite. 
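The ownerReferences printed above form a deliberate cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2); the point of the test is that the garbage collector is not blocked by it. A rough sketch of wiring such a circle by hand, assuming three existing pods named pod1, pod2 and pod3 (hypothetical names, and not the suite's own setup path):

  # grab the UIDs needed for the owner references
  UID1=$(kubectl get pod pod1 -o jsonpath='{.metadata.uid}')
  UID2=$(kubectl get pod pod2 -o jsonpath='{.metadata.uid}')
  UID3=$(kubectl get pod pod3 -o jsonpath='{.metadata.uid}')
  # wire the circle: pod1 owned by pod3, pod2 by pod1, pod3 by pod2
  kubectl patch pod pod1 --type=merge -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"pod3\",\"uid\":\"$UID3\"}]}}"
  kubectl patch pod pod2 --type=merge -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"pod1\",\"uid\":\"$UID1\"}]}}"
  kubectl patch pod pod3 --type=merge -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"pod2\",\"uid\":\"$UID2\"}]}}"
  # deleting one of them should not hang; the garbage collector copes with the circular dependency
  kubectl delete pod pod1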
Aug 8 11:38:36.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:38:36.729: INFO: namespace: e2e-tests-gc-bj2wm, resource: bindings, ignored listing per whitelist Aug 8 11:38:36.768: INFO: namespace e2e-tests-gc-bj2wm deletion completed in 8.096021872s • [SLOW TEST:13.918 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:38:36.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 8 11:38:36.913: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 Aug 8 11:38:36.920: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-96lrw/daemonsets","resourceVersion":"5164696"},"items":null} Aug 8 11:38:36.922: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-96lrw/pods","resourceVersion":"5164696"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:38:36.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-96lrw" for this suite. 
Aug 8 11:38:42.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:38:42.971: INFO: namespace: e2e-tests-daemonsets-96lrw, resource: bindings, ignored listing per whitelist Aug 8 11:38:43.021: INFO: namespace e2e-tests-daemonsets-96lrw deletion completed in 6.087228489s S [SKIPPING] [6.253 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 8 11:38:36.913: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:38:43.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Aug 8 11:38:47.180: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-b6482c5b-d96b-11ea-aaa1-0242ac11000c,GenerateName:,Namespace:e2e-tests-events-pmxcj,SelfLink:/api/v1/namespaces/e2e-tests-events-pmxcj/pods/send-events-b6482c5b-d96b-11ea-aaa1-0242ac11000c,UID:b648bf98-d96b-11ea-b2c9-0242ac120008,ResourceVersion:5164733,Generation:0,CreationTimestamp:2020-08-08 11:38:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 113286071,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kxwkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kxwkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-kxwkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023d9d70} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0023d9d90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:38:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:38:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:38:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:38:43 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.10,StartTime:2020-08-08 11:38:43 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-08-08 11:38:45 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://a7b08165debfbd01ee1486b271fbc0433a7e9b8195a71a57e083ed80feece762}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Aug 8 11:38:49.185: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Aug 8 11:38:51.190: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:38:51.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-pmxcj" for this suite. Aug 8 11:39:29.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:39:29.241: INFO: namespace: e2e-tests-events-pmxcj, resource: bindings, ignored listing per whitelist Aug 8 11:39:29.447: INFO: namespace e2e-tests-events-pmxcj deletion completed in 38.241519844s • [SLOW TEST:46.426 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:39:29.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Aug 8 11:39:29.760: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:39:29.762: INFO: Number of nodes with available pods: 0 Aug 8 11:39:29.762: INFO: Node hunter-worker is running more than one daemon pod Aug 8 11:39:30.767: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:39:30.771: INFO: Number of nodes with available pods: 0 Aug 8 11:39:30.771: INFO: Node hunter-worker is running more than one daemon pod Aug 8 11:39:31.947: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:39:31.950: INFO: Number of nodes with available pods: 0 Aug 8 11:39:31.950: INFO: Node hunter-worker is running more than one daemon pod Aug 8 11:39:32.767: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:39:32.770: INFO: Number of nodes with available pods: 0 Aug 8 11:39:32.770: INFO: Node hunter-worker is running more than one daemon pod Aug 8 11:39:33.833: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:39:33.837: INFO: Number of nodes with available pods: 1 Aug 8 11:39:33.837: INFO: Node hunter-worker is running more than one daemon pod Aug 8 11:39:34.767: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:39:34.770: INFO: Number of nodes with available pods: 2 Aug 8 11:39:34.770: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Aug 8 11:39:34.859: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:39:34.907: INFO: Number of nodes with available pods: 2 Aug 8 11:39:34.907: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
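The "simple DaemonSet" being checked above can be approximated with a manifest like the following; without an explicit toleration its pods skip the tainted control-plane node, which is exactly the "skip checking this node" behaviour in the log. The name and image are illustrative, not the suite's fixture:

  kubectl apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set-demo            # hypothetical name
  spec:
    selector:
      matchLabels:
        app: daemon-set-demo
    template:
      metadata:
        labels:
          app: daemon-set-demo
      spec:
        containers:
        - name: app
          image: k8s.gcr.io/pause:3.1
  EOF
  kubectl rollout status ds/daemon-set-demo   # waits until a pod is available on every eligible node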
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-2ww2m, will wait for the garbage collector to delete the pods Aug 8 11:39:36.311: INFO: Deleting DaemonSet.extensions daemon-set took: 20.428642ms Aug 8 11:39:36.511: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.595071ms Aug 8 11:39:39.915: INFO: Number of nodes with available pods: 0 Aug 8 11:39:39.915: INFO: Number of running nodes: 0, number of available pods: 0 Aug 8 11:39:39.918: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-2ww2m/daemonsets","resourceVersion":"5164898"},"items":null} Aug 8 11:39:39.921: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-2ww2m/pods","resourceVersion":"5164898"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:39:39.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-2ww2m" for this suite. Aug 8 11:39:45.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:39:45.982: INFO: namespace: e2e-tests-daemonsets-2ww2m, resource: bindings, ignored listing per whitelist Aug 8 11:39:46.043: INFO: namespace e2e-tests-daemonsets-2ww2m deletion completed in 6.087182357s • [SLOW TEST:16.596 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:39:46.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-dbd55d63-d96b-11ea-aaa1-0242ac11000c STEP: Creating a pod to test consume configMaps Aug 8 11:39:46.205: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dbe0515a-d96b-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-projected-mh6nn" to be "success or failure" Aug 8 11:39:46.215: INFO: Pod "pod-projected-configmaps-dbe0515a-d96b-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.008831ms Aug 8 11:39:48.219: INFO: Pod "pod-projected-configmaps-dbe0515a-d96b-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.014400124s Aug 8 11:39:50.223: INFO: Pod "pod-projected-configmaps-dbe0515a-d96b-11ea-aaa1-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 4.017966248s Aug 8 11:39:52.227: INFO: Pod "pod-projected-configmaps-dbe0515a-d96b-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021912609s STEP: Saw pod success Aug 8 11:39:52.227: INFO: Pod "pod-projected-configmaps-dbe0515a-d96b-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:39:52.229: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-dbe0515a-d96b-11ea-aaa1-0242ac11000c container projected-configmap-volume-test: STEP: delete the pod Aug 8 11:39:52.280: INFO: Waiting for pod pod-projected-configmaps-dbe0515a-d96b-11ea-aaa1-0242ac11000c to disappear Aug 8 11:39:52.359: INFO: Pod pod-projected-configmaps-dbe0515a-d96b-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:39:52.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mh6nn" for this suite. Aug 8 11:39:58.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:39:58.441: INFO: namespace: e2e-tests-projected-mh6nn, resource: bindings, ignored listing per whitelist Aug 8 11:39:58.466: INFO: namespace e2e-tests-projected-mh6nn deletion completed in 6.103095842s • [SLOW TEST:12.423 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:39:58.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 8 11:39:58.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-7nxxk' Aug 8 11:39:58.716: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 8 11:39:58.716: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Aug 8 11:40:00.767: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-n5grc] Aug 8 11:40:00.767: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-n5grc" in namespace "e2e-tests-kubectl-7nxxk" to be "running and ready" Aug 8 11:40:00.770: INFO: Pod "e2e-test-nginx-rc-n5grc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.561085ms Aug 8 11:40:02.797: INFO: Pod "e2e-test-nginx-rc-n5grc": Phase="Running", Reason="", readiness=true. Elapsed: 2.030150958s Aug 8 11:40:02.797: INFO: Pod "e2e-test-nginx-rc-n5grc" satisfied condition "running and ready" Aug 8 11:40:02.797: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-n5grc] Aug 8 11:40:02.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-7nxxk' Aug 8 11:40:02.915: INFO: stderr: "" Aug 8 11:40:02.915: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 Aug 8 11:40:02.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-7nxxk' Aug 8 11:40:03.031: INFO: stderr: "" Aug 8 11:40:03.031: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:40:03.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7nxxk" for this suite. 
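As the stderr above notes, kubectl run --generator=run/v1 is deprecated. The same ReplicationController can be created from a manifest instead; a sketch using the image and naming from the log, purely for illustration:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: e2e-test-nginx-rc
  spec:
    replicas: 1
    selector:
      run: e2e-test-nginx-rc
    template:
      metadata:
        labels:
          run: e2e-test-nginx-rc
      spec:
        containers:
        - name: e2e-test-nginx-rc
          image: docker.io/library/nginx:1.14-alpine
  EOF
  kubectl logs rc/e2e-test-nginx-rc   # picks one pod managed by the rc, as the test does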
Aug 8 11:40:09.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:40:09.082: INFO: namespace: e2e-tests-kubectl-7nxxk, resource: bindings, ignored listing per whitelist Aug 8 11:40:09.137: INFO: namespace e2e-tests-kubectl-7nxxk deletion completed in 6.090768024s • [SLOW TEST:10.670 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:40:09.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Aug 8 11:40:09.258: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 8 11:40:09.266: INFO: Waiting for terminating namespaces to be deleted... Aug 8 11:40:09.269: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Aug 8 11:40:09.273: INFO: kube-proxy-8wnps from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container statuses recorded) Aug 8 11:40:09.273: INFO: Container kube-proxy ready: true, restart count 0 Aug 8 11:40:09.273: INFO: kindnet-2w5m4 from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container statuses recorded) Aug 8 11:40:09.273: INFO: Container kindnet-cni ready: true, restart count 0 Aug 8 11:40:09.273: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Aug 8 11:40:09.278: INFO: kindnet-hpnvh from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container statuses recorded) Aug 8 11:40:09.278: INFO: Container kindnet-cni ready: true, restart count 0 Aug 8 11:40:09.278: INFO: kube-proxy-b6f6s from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container statuses recorded) Aug 8 11:40:09.278: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-worker STEP: verifying the node has the label node hunter-worker2 Aug 8 11:40:09.389: INFO: Pod kindnet-2w5m4 requesting resource cpu=100m on Node hunter-worker Aug 8 11:40:09.389: INFO: Pod kindnet-hpnvh requesting resource cpu=100m on Node hunter-worker2 Aug 8 11:40:09.389: INFO: Pod kube-proxy-8wnps requesting resource cpu=0m on Node hunter-worker Aug 8 11:40:09.389: INFO: Pod kube-proxy-b6f6s requesting resource cpu=0m on Node hunter-worker2 STEP: Starting Pods to consume most of the cluster CPU. 
STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-e9b4e19c-d96b-11ea-aaa1-0242ac11000c.16294817ad33ecba], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-6lnrd/filler-pod-e9b4e19c-d96b-11ea-aaa1-0242ac11000c to hunter-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-e9b4e19c-d96b-11ea-aaa1-0242ac11000c.16294817fecbf4b9], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-e9b4e19c-d96b-11ea-aaa1-0242ac11000c.16294818573a9832], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-e9b4e19c-d96b-11ea-aaa1-0242ac11000c.162948187304784a], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Normal], Name = [filler-pod-e9b62add-d96b-11ea-aaa1-0242ac11000c.16294817b09b8d1a], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-6lnrd/filler-pod-e9b62add-d96b-11ea-aaa1-0242ac11000c to hunter-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-e9b62add-d96b-11ea-aaa1-0242ac11000c.16294818529f80fb], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-e9b62add-d96b-11ea-aaa1-0242ac11000c.16294818933b241b], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-e9b62add-d96b-11ea-aaa1-0242ac11000c.16294818a2ff3202], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.16294819182c1f71], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node hunter-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node hunter-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:40:16.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-6lnrd" for this suite. 
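The FailedScheduling event above comes from asking for more CPU than any node has left. A minimal way to reproduce that symptom, with a hypothetical name and a deliberately oversized request:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: additional-pod-demo        # hypothetical name
  spec:
    containers:
    - name: app
      image: k8s.gcr.io/pause:3.1
      resources:
        requests:
          cpu: "1000"                # far more CPU than any node's allocatable
  EOF
  kubectl describe pod additional-pod-demo   # the Events section reports FailedScheduling: Insufficient cpu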
Aug 8 11:40:24.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:40:25.036: INFO: namespace: e2e-tests-sched-pred-6lnrd, resource: bindings, ignored listing per whitelist Aug 8 11:40:25.331: INFO: namespace e2e-tests-sched-pred-6lnrd deletion completed in 8.727981397s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:16.194 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:40:25.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 8 11:40:30.047: INFO: Waiting up to 5m0s for pod "client-envvars-f602f995-d96b-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-pods-r4l7q" to be "success or failure" Aug 8 11:40:30.053: INFO: Pod "client-envvars-f602f995-d96b-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.832785ms Aug 8 11:40:32.242: INFO: Pod "client-envvars-f602f995-d96b-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194737236s Aug 8 11:40:34.246: INFO: Pod "client-envvars-f602f995-d96b-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.198999102s STEP: Saw pod success Aug 8 11:40:34.246: INFO: Pod "client-envvars-f602f995-d96b-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:40:34.249: INFO: Trying to get logs from node hunter-worker2 pod client-envvars-f602f995-d96b-11ea-aaa1-0242ac11000c container env3cont: STEP: delete the pod Aug 8 11:40:34.283: INFO: Waiting for pod client-envvars-f602f995-d96b-11ea-aaa1-0242ac11000c to disappear Aug 8 11:40:34.311: INFO: Pod client-envvars-f602f995-d96b-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:40:34.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-r4l7q" for this suite. 
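The environment variables being checked here are the ones kubelet injects for every service that already exists when a pod starts. A rough sketch with hypothetical names (the exact kubectl run behaviour varies a little between client versions):

  kubectl create deployment env-demo --image=nginx
  kubectl expose deployment env-demo --port=80 --name=demo-svc
  # a pod created after the service sees DEMO_SVC_SERVICE_HOST / DEMO_SVC_SERVICE_PORT
  kubectl run env-check --image=busybox --restart=Never --command -- sh -c 'env | grep DEMO_SVC'
  kubectl logs env-check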
Aug 8 11:41:14.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:41:14.350: INFO: namespace: e2e-tests-pods-r4l7q, resource: bindings, ignored listing per whitelist Aug 8 11:41:14.400: INFO: namespace e2e-tests-pods-r4l7q deletion completed in 40.084496488s • [SLOW TEST:49.068 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:41:14.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:41:18.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-fq2d5" for this suite. 
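The kubelet test above runs a container that always fails and then checks that a terminated reason is recorded. A minimal sketch of that pattern, with a hypothetical pod name:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: bin-false-demo             # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: bin-false
      image: busybox
      command: ["/bin/false"]        # always exits non-zero
  EOF
  # once the container has terminated, the reason is recorded in the pod status
  kubectl get pod bin-false-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}{"\n"}'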
Aug 8 11:41:24.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:41:24.854: INFO: namespace: e2e-tests-kubelet-test-fq2d5, resource: bindings, ignored listing per whitelist Aug 8 11:41:24.889: INFO: namespace e2e-tests-kubelet-test-fq2d5 deletion completed in 6.364770182s • [SLOW TEST:10.488 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:41:24.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Aug 8 11:41:29.552: INFO: Successfully updated pod "annotationupdate16c3e19f-d96c-11ea-aaa1-0242ac11000c" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:41:33.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-sfcjc" for this suite. 
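This case is the annotation counterpart of the label test earlier: a plain downwardAPI volume exposes metadata.annotations, and re-annotating the pod makes the kubelet refresh the mounted file. A sketch with hypothetical names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: annotations-demo           # hypothetical name
    annotations:
      build: "1"
  spec:
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: annotations
          fieldRef:
            fieldPath: metadata.annotations
  EOF
  kubectl annotate pod annotations-demo build="2" --overwrite   # the mounted file is refreshed by the kubelet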
Aug 8 11:41:55.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:41:55.662: INFO: namespace: e2e-tests-downward-api-sfcjc, resource: bindings, ignored listing per whitelist Aug 8 11:41:55.695: INFO: namespace e2e-tests-downward-api-sfcjc deletion completed in 22.089154163s • [SLOW TEST:30.806 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:41:55.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 8 11:41:56.123: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2951862b-d96c-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-downward-api-p57j9" to be "success or failure" Aug 8 11:41:56.176: INFO: Pod "downwardapi-volume-2951862b-d96c-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 52.35977ms Aug 8 11:41:58.254: INFO: Pod "downwardapi-volume-2951862b-d96c-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130562992s Aug 8 11:42:00.258: INFO: Pod "downwardapi-volume-2951862b-d96c-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.134284041s STEP: Saw pod success Aug 8 11:42:00.258: INFO: Pod "downwardapi-volume-2951862b-d96c-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:42:00.261: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-2951862b-d96c-11ea-aaa1-0242ac11000c container client-container: STEP: delete the pod Aug 8 11:42:00.308: INFO: Waiting for pod downwardapi-volume-2951862b-d96c-11ea-aaa1-0242ac11000c to disappear Aug 8 11:42:00.331: INFO: Pod downwardapi-volume-2951862b-d96c-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:42:00.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-p57j9" for this suite. 
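DefaultMode here refers to the file mode applied to everything projected by the downwardAPI volume. A minimal sketch, with hypothetical names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: defaultmode-demo           # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/podname"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        defaultMode: 0400            # files in this volume are created with mode 0400
        items:
        - path: podname
          fieldRef:
            fieldPath: metadata.name
  EOF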
Aug 8 11:42:08.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:42:08.675: INFO: namespace: e2e-tests-downward-api-p57j9, resource: bindings, ignored listing per whitelist Aug 8 11:42:08.677: INFO: namespace e2e-tests-downward-api-p57j9 deletion completed in 8.342258981s • [SLOW TEST:12.982 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:42:08.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0808 11:42:39.388522 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 8 11:42:39.388: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:42:39.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-m92v8" for this suite. 
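Orphan propagation means the deployment is deleted but its ReplicaSet is left behind without an owner. A sketch with a hypothetical deployment name; the cascade flag spelling depends on the kubectl version:

  kubectl create deployment orphan-demo --image=nginx
  # orphan the dependents: current kubectl spells this --cascade=orphan,
  # older clients (such as the v1.13 one in this log) used --cascade=false
  kubectl delete deployment orphan-demo --cascade=orphan
  kubectl get rs -l app=orphan-demo   # the ReplicaSet is still present, now ownerless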
Aug 8 11:42:45.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:42:45.561: INFO: namespace: e2e-tests-gc-m92v8, resource: bindings, ignored listing per whitelist Aug 8 11:42:45.585: INFO: namespace e2e-tests-gc-m92v8 deletion completed in 6.193138865s • [SLOW TEST:36.908 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:42:45.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Aug 8 11:42:45.766: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:42:55.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-kpg7s" for this suite. 
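Init containers run to completion, in order, before the regular containers start, regardless of the pod's restart policy. A minimal sketch with hypothetical names and images:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: init-demo                  # hypothetical name
  spec:
    restartPolicy: Always
    initContainers:
    - name: init-1
      image: busybox
      command: ["sh", "-c", "echo init-1 ran"]
    - name: init-2
      image: busybox
      command: ["sh", "-c", "echo init-2 ran"]
    containers:
    - name: app
      image: k8s.gcr.io/pause:3.1    # starts only after both init containers succeed
  EOF
  kubectl get pod init-demo -w       # status moves through Init:0/2, Init:1/2, PodInitializing, Running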
Aug 8 11:43:19.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:43:19.519: INFO: namespace: e2e-tests-init-container-kpg7s, resource: bindings, ignored listing per whitelist Aug 8 11:43:19.547: INFO: namespace e2e-tests-init-container-kpg7s deletion completed in 24.164243581s • [SLOW TEST:33.961 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:43:19.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Aug 8 11:43:24.281: INFO: Successfully updated pod "pod-update-5b20d8d9-d96c-11ea-aaa1-0242ac11000c" STEP: verifying the updated pod is in kubernetes Aug 8 11:43:24.315: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:43:24.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-bjzdf" for this suite. 
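"Updated" here means changing mutable metadata on a running pod, which is what the test does with a label. A sketch of the same idea, assuming a running pod named pod-update-demo (hypothetical):

  kubectl label pod pod-update-demo time="$(date +%s)" --overwrite
  kubectl get pod pod-update-demo --show-labels   # the new label value is visible immediately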
Aug 8 11:43:46.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:43:46.404: INFO: namespace: e2e-tests-pods-bjzdf, resource: bindings, ignored listing per whitelist Aug 8 11:43:46.425: INFO: namespace e2e-tests-pods-bjzdf deletion completed in 22.106966157s • [SLOW TEST:26.878 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:43:46.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:43:46.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-zmdml" for this suite. Aug 8 11:43:55.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:43:55.137: INFO: namespace: e2e-tests-services-zmdml, resource: bindings, ignored listing per whitelist Aug 8 11:43:55.170: INFO: namespace e2e-tests-services-zmdml deletion completed in 8.174446488s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:8.745 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:43:55.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-706e394b-d96c-11ea-aaa1-0242ac11000c STEP: Creating a pod to test consume secrets Aug 8 11:43:55.437: INFO: Waiting up to 5m0s for pod 
"pod-secrets-706e9110-d96c-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-secrets-tg454" to be "success or failure" Aug 8 11:43:55.466: INFO: Pod "pod-secrets-706e9110-d96c-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 28.721127ms Aug 8 11:43:57.468: INFO: Pod "pod-secrets-706e9110-d96c-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031288536s Aug 8 11:43:59.475: INFO: Pod "pod-secrets-706e9110-d96c-11ea-aaa1-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 4.03776547s Aug 8 11:44:01.479: INFO: Pod "pod-secrets-706e9110-d96c-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.041745174s STEP: Saw pod success Aug 8 11:44:01.479: INFO: Pod "pod-secrets-706e9110-d96c-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:44:01.481: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-706e9110-d96c-11ea-aaa1-0242ac11000c container secret-env-test: STEP: delete the pod Aug 8 11:44:01.544: INFO: Waiting for pod pod-secrets-706e9110-d96c-11ea-aaa1-0242ac11000c to disappear Aug 8 11:44:01.555: INFO: Pod pod-secrets-706e9110-d96c-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:44:01.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-tg454" for this suite. Aug 8 11:44:07.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:44:07.578: INFO: namespace: e2e-tests-secrets-tg454, resource: bindings, ignored listing per whitelist Aug 8 11:44:07.642: INFO: namespace e2e-tests-secrets-tg454 deletion completed in 6.083821708s • [SLOW TEST:12.472 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:44:07.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 8 11:44:07.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-4h4ln' Aug 8 
11:44:08.119: INFO: stderr: "" Aug 8 11:44:08.119: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 Aug 8 11:44:08.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-4h4ln' Aug 8 11:44:17.437: INFO: stderr: "" Aug 8 11:44:17.437: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:44:17.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-4h4ln" for this suite. Aug 8 11:44:23.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:44:23.478: INFO: namespace: e2e-tests-kubectl-4h4ln, resource: bindings, ignored listing per whitelist Aug 8 11:44:23.537: INFO: namespace e2e-tests-kubectl-4h4ln deletion completed in 6.096574174s • [SLOW TEST:15.895 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:44:23.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:44:29.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-xpg6s" for this suite. 
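The hostAliases case above only shows scheduling and teardown in this excerpt; the pod it runs is essentially a busybox container plus spec.hostAliases, which the kubelet merges into the container's /etc/hosts. A hedged sketch of such a spec using the core/v1 types (the IP, hostnames, and image command are illustrative, not the test's exact values):

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostAliasPod builds a pod whose /etc/hosts gains extra entries via
// spec.hostAliases, the mechanism the Kubelet hostAliases test exercises.
func hostAliasPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-host-aliases", Namespace: ns},
		Spec: corev1.PodSpec{
			HostAliases: []corev1.HostAlias{
				{IP: "123.45.67.89", Hostnames: []string{"foo.local", "bar.local"}},
			},
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/hosts"},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}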
Aug 8 11:45:19.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:45:19.755: INFO: namespace: e2e-tests-kubelet-test-xpg6s, resource: bindings, ignored listing per whitelist Aug 8 11:45:19.854: INFO: namespace e2e-tests-kubelet-test-xpg6s deletion completed in 50.186157468s • [SLOW TEST:56.316 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:45:19.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-a2e48efd-d96c-11ea-aaa1-0242ac11000c STEP: Creating a pod to test consume secrets Aug 8 11:45:20.179: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a2e8e6e3-d96c-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-projected-h9kd7" to be "success or failure" Aug 8 11:45:20.191: INFO: Pod "pod-projected-secrets-a2e8e6e3-d96c-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.957385ms Aug 8 11:45:22.195: INFO: Pod "pod-projected-secrets-a2e8e6e3-d96c-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015628152s Aug 8 11:45:24.199: INFO: Pod "pod-projected-secrets-a2e8e6e3-d96c-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019969484s STEP: Saw pod success Aug 8 11:45:24.200: INFO: Pod "pod-projected-secrets-a2e8e6e3-d96c-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:45:24.203: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-a2e8e6e3-d96c-11ea-aaa1-0242ac11000c container projected-secret-volume-test: STEP: delete the pod Aug 8 11:45:24.282: INFO: Waiting for pod pod-projected-secrets-a2e8e6e3-d96c-11ea-aaa1-0242ac11000c to disappear Aug 8 11:45:24.324: INFO: Pod pod-projected-secrets-a2e8e6e3-d96c-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:45:24.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-h9kd7" for this suite. 
Aug 8 11:45:32.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:45:32.575: INFO: namespace: e2e-tests-projected-h9kd7, resource: bindings, ignored listing per whitelist Aug 8 11:45:32.581: INFO: namespace e2e-tests-projected-h9kd7 deletion completed in 8.252417842s • [SLOW TEST:12.727 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:45:32.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 8 11:45:32.697: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Aug 8 11:45:32.755: INFO: Pod name sample-pod: Found 0 pods out of 1 Aug 8 11:45:37.761: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 8 11:45:37.761: INFO: Creating deployment "test-rolling-update-deployment" Aug 8 11:45:37.765: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Aug 8 11:45:37.808: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Aug 8 11:45:39.817: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Aug 8 11:45:39.819: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732483937, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732483937, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732483937, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732483937, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 8 11:45:41.824: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732483937, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732483937, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732483937, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732483937, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 8 11:45:43.823: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Aug 8 11:45:43.831: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-68zmw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-68zmw/deployments/test-rolling-update-deployment,UID:ad6eb177-d96c-11ea-b2c9-0242ac120008,ResourceVersion:5166119,Generation:1,CreationTimestamp:2020-08-08 11:45:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-08 11:45:37 +0000 UTC 2020-08-08 11:45:37 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-08 11:45:42 +0000 UTC 2020-08-08 11:45:37 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Aug 8 11:45:43.833: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-68zmw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-68zmw/replicasets/test-rolling-update-deployment-75db98fb4c,UID:ad767bf9-d96c-11ea-b2c9-0242ac120008,ResourceVersion:5166110,Generation:1,CreationTimestamp:2020-08-08 11:45:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment ad6eb177-d96c-11ea-b2c9-0242ac120008 0xc0020029a7 0xc0020029a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Aug 8 11:45:43.833: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Aug 8 11:45:43.833: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-68zmw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-68zmw/replicasets/test-rolling-update-controller,UID:aa69e416-d96c-11ea-b2c9-0242ac120008,ResourceVersion:5166118,Generation:2,CreationTimestamp:2020-08-08 11:45:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment ad6eb177-d96c-11ea-b2c9-0242ac120008 0xc0020028e7 0xc0020028e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 8 11:45:43.836: INFO: Pod "test-rolling-update-deployment-75db98fb4c-7dxnt" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-7dxnt,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-68zmw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-68zmw/pods/test-rolling-update-deployment-75db98fb4c-7dxnt,UID:ad79911a-d96c-11ea-b2c9-0242ac120008,ResourceVersion:5166109,Generation:0,CreationTimestamp:2020-08-08 11:45:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c ad767bf9-d96c-11ea-b2c9-0242ac120008 0xc001ff5377 0xc001ff5378}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nzzj7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nzzj7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-nzzj7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ff53f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ff5410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:45:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:45:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:45:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:45:37 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.106,StartTime:2020-08-08 11:45:37 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-08 11:45:41 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://c57949a5ad5753f677a2c74d97f17b3f37de94417e82d4d6f1141dbf4b954a58}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:45:43.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-68zmw" for this suite. 
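The RollingUpdateDeployment test above creates a Deployment over an adopted ReplicaSet and then repeatedly inspects DeploymentStatus until the new ReplicaSet has fully progressed. A rough equivalent of that status wait with client-go (pre-1.18 signatures, no contexts; the deployment name and namespace are illustrative, and this is a sketch of the status check, not the framework's own helper):

package main

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRollout polls the Deployment until the controller has observed the
// latest generation and every replica is updated and available, similar to
// the "Ensuring status ... is the expected" checks logged above.
func waitForRollout(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		d, err := cs.AppsV1().Deployments(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if d.Spec.Replicas == nil {
			return false, fmt.Errorf("deployment %s/%s has nil spec.replicas", ns, name)
		}
		done := d.Status.ObservedGeneration >= d.Generation &&
			d.Status.UpdatedReplicas == *d.Spec.Replicas &&
			d.Status.AvailableReplicas == *d.Spec.Replicas &&
			d.Status.UnavailableReplicas == 0
		fmt.Printf("updated=%d available=%d unavailable=%d\n",
			d.Status.UpdatedReplicas, d.Status.AvailableReplicas, d.Status.UnavailableReplicas)
		return done, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForRollout(cs, "default", "test-rolling-update-deployment"); err != nil {
		panic(err)
	}
}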
Aug 8 11:45:49.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:45:49.906: INFO: namespace: e2e-tests-deployment-68zmw, resource: bindings, ignored listing per whitelist Aug 8 11:45:49.957: INFO: namespace e2e-tests-deployment-68zmw deletion completed in 6.119034817s • [SLOW TEST:17.376 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:45:49.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 8 11:45:50.092: INFO: Waiting up to 5m0s for pod "pod-b4bf1451-d96c-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-emptydir-5f6bk" to be "success or failure" Aug 8 11:45:50.098: INFO: Pod "pod-b4bf1451-d96c-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020084ms Aug 8 11:45:52.186: INFO: Pod "pod-b4bf1451-d96c-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094188383s Aug 8 11:45:54.190: INFO: Pod "pod-b4bf1451-d96c-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098454946s STEP: Saw pod success Aug 8 11:45:54.190: INFO: Pod "pod-b4bf1451-d96c-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:45:54.193: INFO: Trying to get logs from node hunter-worker2 pod pod-b4bf1451-d96c-11ea-aaa1-0242ac11000c container test-container: STEP: delete the pod Aug 8 11:45:54.214: INFO: Waiting for pod pod-b4bf1451-d96c-11ea-aaa1-0242ac11000c to disappear Aug 8 11:45:54.217: INFO: Pod pod-b4bf1451-d96c-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:45:54.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-5f6bk" for this suite. 
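The emptyDir (root,0644,default) test above runs a short-lived pod that writes into an emptyDir volume on the default medium and verifies the resulting mode and content from inside the container. A hedged sketch of that pod shape (the container image, command, and paths are illustrative; the real test uses its own mount-test image):

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirPod mounts a default-medium emptyDir volume and creates a 0644 file
// in it, approximating what the e2e emptyDir permission test exercises.
func emptyDirPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0644", Namespace: ns},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name:         "test-volume",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "touch /test-volume/data && chmod 0644 /test-volume/data && ls -l /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}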
Aug 8 11:46:00.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:46:00.293: INFO: namespace: e2e-tests-emptydir-5f6bk, resource: bindings, ignored listing per whitelist Aug 8 11:46:00.304: INFO: namespace e2e-tests-emptydir-5f6bk deletion completed in 6.084224425s • [SLOW TEST:10.347 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:46:00.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 8 11:46:00.846: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Aug 8 11:46:00.858: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:46:00.860: INFO: Number of nodes with available pods: 0 Aug 8 11:46:00.860: INFO: Node hunter-worker is running more than one daemon pod Aug 8 11:46:01.865: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:46:01.869: INFO: Number of nodes with available pods: 0 Aug 8 11:46:01.869: INFO: Node hunter-worker is running more than one daemon pod Aug 8 11:46:03.078: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:46:03.081: INFO: Number of nodes with available pods: 0 Aug 8 11:46:03.081: INFO: Node hunter-worker is running more than one daemon pod Aug 8 11:46:03.893: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:46:03.896: INFO: Number of nodes with available pods: 0 Aug 8 11:46:03.896: INFO: Node hunter-worker is running more than one daemon pod Aug 8 11:46:04.864: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:46:04.866: INFO: Number of nodes with available pods: 1 Aug 8 11:46:04.866: INFO: Node hunter-worker2 is running more than one daemon pod Aug 8 11:46:05.865: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:46:05.869: INFO: Number of nodes with available pods: 2 Aug 8 11:46:05.869: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Aug 8 11:46:05.942: INFO: Wrong image for pod: daemon-set-hh98n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 8 11:46:05.942: INFO: Wrong image for pod: daemon-set-s2cfm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 8 11:46:05.948: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:46:07.061: INFO: Wrong image for pod: daemon-set-hh98n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 8 11:46:07.061: INFO: Wrong image for pod: daemon-set-s2cfm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 8 11:46:07.065: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:46:07.953: INFO: Wrong image for pod: daemon-set-hh98n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 8 11:46:07.953: INFO: Wrong image for pod: daemon-set-s2cfm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Aug 8 11:46:07.957: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:46:08.953: INFO: Wrong image for pod: daemon-set-hh98n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 8 11:46:08.953: INFO: Pod daemon-set-hh98n is not available Aug 8 11:46:08.953: INFO: Wrong image for pod: daemon-set-s2cfm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 8 11:46:08.957: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:46:09.953: INFO: Wrong image for pod: daemon-set-hh98n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 8 11:46:09.953: INFO: Pod daemon-set-hh98n is not available Aug 8 11:46:09.953: INFO: Wrong image for pod: daemon-set-s2cfm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 8 11:46:09.957: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:46:10.953: INFO: Wrong image for pod: daemon-set-hh98n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 8 11:46:10.953: INFO: Pod daemon-set-hh98n is not available Aug 8 11:46:10.953: INFO: Wrong image for pod: daemon-set-s2cfm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 8 11:46:10.958: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:46:11.953: INFO: Wrong image for pod: daemon-set-hh98n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 8 11:46:11.953: INFO: Pod daemon-set-hh98n is not available Aug 8 11:46:11.953: INFO: Wrong image for pod: daemon-set-s2cfm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 8 11:46:11.957: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:46:12.953: INFO: Wrong image for pod: daemon-set-hh98n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 8 11:46:12.953: INFO: Pod daemon-set-hh98n is not available Aug 8 11:46:12.953: INFO: Wrong image for pod: daemon-set-s2cfm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 8 11:46:12.959: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:46:13.953: INFO: Wrong image for pod: daemon-set-hh98n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 8 11:46:13.953: INFO: Pod daemon-set-hh98n is not available Aug 8 11:46:13.953: INFO: Wrong image for pod: daemon-set-s2cfm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Aug 8 11:46:13.957: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:46:14.977: INFO: Wrong image for pod: daemon-set-hh98n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 8 11:46:14.977: INFO: Pod daemon-set-hh98n is not available Aug 8 11:46:14.977: INFO: Wrong image for pod: daemon-set-s2cfm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 8 11:46:14.981: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:46:15.953: INFO: Wrong image for pod: daemon-set-hh98n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 8 11:46:15.953: INFO: Pod daemon-set-hh98n is not available Aug 8 11:46:15.953: INFO: Wrong image for pod: daemon-set-s2cfm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 8 11:46:15.957: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:46:16.952: INFO: Wrong image for pod: daemon-set-hh98n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 8 11:46:16.952: INFO: Pod daemon-set-hh98n is not available Aug 8 11:46:16.952: INFO: Wrong image for pod: daemon-set-s2cfm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 8 11:46:16.955: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:46:17.953: INFO: Pod daemon-set-hgn67 is not available Aug 8 11:46:17.953: INFO: Wrong image for pod: daemon-set-s2cfm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 8 11:46:17.958: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:46:18.973: INFO: Pod daemon-set-hgn67 is not available Aug 8 11:46:18.973: INFO: Wrong image for pod: daemon-set-s2cfm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 8 11:46:18.976: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:46:19.953: INFO: Pod daemon-set-hgn67 is not available Aug 8 11:46:19.953: INFO: Wrong image for pod: daemon-set-s2cfm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 8 11:46:19.957: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:46:21.127: INFO: Wrong image for pod: daemon-set-s2cfm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Aug 8 11:46:21.132: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:46:21.953: INFO: Wrong image for pod: daemon-set-s2cfm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 8 11:46:21.957: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:46:22.953: INFO: Wrong image for pod: daemon-set-s2cfm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 8 11:46:22.953: INFO: Pod daemon-set-s2cfm is not available Aug 8 11:46:22.956: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:46:23.953: INFO: Pod daemon-set-7nblx is not available Aug 8 11:46:23.957: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Aug 8 11:46:23.961: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:46:23.965: INFO: Number of nodes with available pods: 1 Aug 8 11:46:23.965: INFO: Node hunter-worker2 is running more than one daemon pod Aug 8 11:46:25.164: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:46:25.167: INFO: Number of nodes with available pods: 1 Aug 8 11:46:25.167: INFO: Node hunter-worker2 is running more than one daemon pod Aug 8 11:46:25.970: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:46:25.973: INFO: Number of nodes with available pods: 1 Aug 8 11:46:25.974: INFO: Node hunter-worker2 is running more than one daemon pod Aug 8 11:46:26.968: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 8 11:46:26.971: INFO: Number of nodes with available pods: 2 Aug 8 11:46:26.971: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-4dmcd, will wait for the garbage collector to delete the pods Aug 8 11:46:27.064: INFO: Deleting DaemonSet.extensions daemon-set took: 6.595586ms Aug 8 11:46:27.165: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.299897ms Aug 8 11:46:37.587: INFO: Number of nodes with available pods: 0 Aug 8 11:46:37.587: INFO: Number of running nodes: 0, number of available pods: 0 Aug 8 11:46:37.590: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-4dmcd/daemonsets","resourceVersion":"5166354"},"items":null} Aug 8 11:46:37.593: INFO: 
pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-4dmcd/pods","resourceVersion":"5166354"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:46:37.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-4dmcd" for this suite. Aug 8 11:46:43.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:46:43.649: INFO: namespace: e2e-tests-daemonsets-4dmcd, resource: bindings, ignored listing per whitelist Aug 8 11:46:43.722: INFO: namespace e2e-tests-daemonsets-4dmcd deletion completed in 6.115574917s • [SLOW TEST:43.418 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:46:43.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Aug 8 11:46:43.803: INFO: namespace e2e-tests-kubectl-ttwlh Aug 8 11:46:43.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ttwlh' Aug 8 11:46:44.064: INFO: stderr: "" Aug 8 11:46:44.064: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Aug 8 11:46:45.106: INFO: Selector matched 1 pods for map[app:redis] Aug 8 11:46:45.106: INFO: Found 0 / 1 Aug 8 11:46:46.069: INFO: Selector matched 1 pods for map[app:redis] Aug 8 11:46:46.069: INFO: Found 0 / 1 Aug 8 11:46:47.068: INFO: Selector matched 1 pods for map[app:redis] Aug 8 11:46:47.068: INFO: Found 0 / 1 Aug 8 11:46:48.069: INFO: Selector matched 1 pods for map[app:redis] Aug 8 11:46:48.069: INFO: Found 1 / 1 Aug 8 11:46:48.069: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Aug 8 11:46:48.073: INFO: Selector matched 1 pods for map[app:redis] Aug 8 11:46:48.073: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Aug 8 11:46:48.073: INFO: wait on redis-master startup in e2e-tests-kubectl-ttwlh Aug 8 11:46:48.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-q66nn redis-master --namespace=e2e-tests-kubectl-ttwlh' Aug 8 11:46:48.198: INFO: stderr: "" Aug 8 11:46:48.198: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 08 Aug 11:46:46.875 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 08 Aug 11:46:46.875 # Server started, Redis version 3.2.12\n1:M 08 Aug 11:46:46.875 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 08 Aug 11:46:46.875 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Aug 8 11:46:48.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-ttwlh' Aug 8 11:46:48.386: INFO: stderr: "" Aug 8 11:46:48.386: INFO: stdout: "service/rm2 exposed\n" Aug 8 11:46:48.456: INFO: Service rm2 in namespace e2e-tests-kubectl-ttwlh found. STEP: exposing service Aug 8 11:46:50.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-ttwlh' Aug 8 11:46:50.629: INFO: stderr: "" Aug 8 11:46:50.630: INFO: stdout: "service/rm3 exposed\n" Aug 8 11:46:50.639: INFO: Service rm3 in namespace e2e-tests-kubectl-ttwlh found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:46:52.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ttwlh" for this suite. 
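The kubectl expose step above turns the redis-master RC into a Service that maps port 1234 to target port 6379 on the selected pods. Built directly against the core/v1 API, the rm2 service would look roughly like the sketch below; the selector is assumed to be app=redis to match the RC's pod label seen earlier in this test, and the object is a sketch of what kubectl generates rather than its exact output:

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// exposeService approximates `kubectl expose rc redis-master --name=rm2
// --port=1234 --target-port=6379`: a ClusterIP service selecting the RC's
// pods and forwarding 1234 -> 6379.
func exposeService(ns string) *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "rm2", Namespace: ns},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "redis"}, // assumed pod label from the RC
			Ports: []corev1.ServicePort{{
				Port:       1234,
				TargetPort: intstr.FromInt(6379),
			}},
		},
	}
}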
Aug 8 11:47:16.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:47:16.681: INFO: namespace: e2e-tests-kubectl-ttwlh, resource: bindings, ignored listing per whitelist Aug 8 11:47:16.740: INFO: namespace e2e-tests-kubectl-ttwlh deletion completed in 24.091683645s • [SLOW TEST:33.017 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:47:16.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-e87953f4-d96c-11ea-aaa1-0242ac11000c STEP: Creating a pod to test consume configMaps Aug 8 11:47:16.862: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e87f44ae-d96c-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-projected-d4bhr" to be "success or failure" Aug 8 11:47:16.873: INFO: Pod "pod-projected-configmaps-e87f44ae-d96c-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.459745ms Aug 8 11:47:19.288: INFO: Pod "pod-projected-configmaps-e87f44ae-d96c-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.426448555s Aug 8 11:47:21.293: INFO: Pod "pod-projected-configmaps-e87f44ae-d96c-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.430531304s STEP: Saw pod success Aug 8 11:47:21.293: INFO: Pod "pod-projected-configmaps-e87f44ae-d96c-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:47:21.294: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-e87f44ae-d96c-11ea-aaa1-0242ac11000c container projected-configmap-volume-test: STEP: delete the pod Aug 8 11:47:21.402: INFO: Waiting for pod pod-projected-configmaps-e87f44ae-d96c-11ea-aaa1-0242ac11000c to disappear Aug 8 11:47:21.406: INFO: Pod pod-projected-configmaps-e87f44ae-d96c-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:47:21.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-d4bhr" for this suite. 
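The projected configMap case above is the same projected-volume pattern as the earlier secret test, but with an explicit defaultMode applied to the projected keys. A hedged sketch of the volume definition (the 0400 mode, configMap name, and paths are illustrative; the test's exact mode is not shown in this excerpt):

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedConfigMapPod mounts a configMap through a projected volume with an
// explicit defaultMode, the knob the defaultMode conformance test verifies.
func projectedConfigMapPod(ns, cmName string) *corev1.Pod {
	mode := int32(0400) // illustrative file mode
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmap", Namespace: ns},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/projected-configmap-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}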
Aug 8 11:47:27.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:47:27.670: INFO: namespace: e2e-tests-projected-d4bhr, resource: bindings, ignored listing per whitelist Aug 8 11:47:27.759: INFO: namespace e2e-tests-projected-d4bhr deletion completed in 6.349169662s • [SLOW TEST:11.019 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:47:27.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-ef8b5c00-d96c-11ea-aaa1-0242ac11000c STEP: Creating a pod to test consume secrets Aug 8 11:47:29.010: INFO: Waiting up to 5m0s for pod "pod-secrets-efb58e07-d96c-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-secrets-4wsqs" to be "success or failure" Aug 8 11:47:29.012: INFO: Pod "pod-secrets-efb58e07-d96c-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.491916ms Aug 8 11:47:31.016: INFO: Pod "pod-secrets-efb58e07-d96c-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006109883s Aug 8 11:47:33.021: INFO: Pod "pod-secrets-efb58e07-d96c-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010873014s STEP: Saw pod success Aug 8 11:47:33.021: INFO: Pod "pod-secrets-efb58e07-d96c-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:47:33.024: INFO: Trying to get logs from node hunter-worker pod pod-secrets-efb58e07-d96c-11ea-aaa1-0242ac11000c container secret-volume-test: STEP: delete the pod Aug 8 11:47:33.298: INFO: Waiting for pod pod-secrets-efb58e07-d96c-11ea-aaa1-0242ac11000c to disappear Aug 8 11:47:33.357: INFO: Pod pod-secrets-efb58e07-d96c-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:47:33.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-4wsqs" for this suite. 
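Most of the volume and env-var tests in this run share one contract: create a short-lived pod, wait for it to reach the "success or failure" condition (phase Succeeded or Failed), read its logs, then delete it. A hedged sketch of that wait with client-go (pre-1.18 signatures without contexts; the helper name and timeouts are ours, not the framework's):

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForSuccessOrFailure blocks until the pod finishes, mirroring the
// "success or failure" condition the e2e framework logs above.
func waitForSuccessOrFailure(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %s/%s failed", ns, name)
		default:
			return false, nil // still Pending or Running, keep polling
		}
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForSuccessOrFailure(cs, "default", "example-pod"); err != nil {
		panic(err)
	}
}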
Aug 8 11:47:39.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:47:39.587: INFO: namespace: e2e-tests-secrets-4wsqs, resource: bindings, ignored listing per whitelist Aug 8 11:47:39.641: INFO: namespace e2e-tests-secrets-4wsqs deletion completed in 6.279724511s STEP: Destroying namespace "e2e-tests-secret-namespace-7g9kk" for this suite. Aug 8 11:47:45.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:47:45.703: INFO: namespace: e2e-tests-secret-namespace-7g9kk, resource: bindings, ignored listing per whitelist Aug 8 11:47:45.738: INFO: namespace e2e-tests-secret-namespace-7g9kk deletion completed in 6.097714347s • [SLOW TEST:17.979 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:47:45.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-w8phv [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-w8phv STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-w8phv Aug 8 11:47:46.179: INFO: Found 0 stateful pods, waiting for 1 Aug 8 11:47:56.184: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Aug 8 11:47:56.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w8phv ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 8 11:47:56.446: INFO: stderr: "I0808 11:47:56.318334 2044 log.go:172] (0xc0001386e0) (0xc0006d7360) Create stream\nI0808 11:47:56.318397 2044 log.go:172] (0xc0001386e0) (0xc0006d7360) Stream added, broadcasting: 1\nI0808 11:47:56.321417 2044 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0808 11:47:56.321475 2044 log.go:172] (0xc0001386e0) (0xc0005ee000) Create stream\nI0808 11:47:56.321490 2044 log.go:172] (0xc0001386e0) (0xc0005ee000) Stream added, broadcasting: 3\nI0808 11:47:56.322534 2044 log.go:172] 
(0xc0001386e0) Reply frame received for 3\nI0808 11:47:56.322588 2044 log.go:172] (0xc0001386e0) (0xc0006d7400) Create stream\nI0808 11:47:56.322606 2044 log.go:172] (0xc0001386e0) (0xc0006d7400) Stream added, broadcasting: 5\nI0808 11:47:56.323576 2044 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0808 11:47:56.439188 2044 log.go:172] (0xc0001386e0) Data frame received for 3\nI0808 11:47:56.439249 2044 log.go:172] (0xc0005ee000) (3) Data frame handling\nI0808 11:47:56.439274 2044 log.go:172] (0xc0005ee000) (3) Data frame sent\nI0808 11:47:56.439306 2044 log.go:172] (0xc0001386e0) Data frame received for 5\nI0808 11:47:56.439353 2044 log.go:172] (0xc0006d7400) (5) Data frame handling\nI0808 11:47:56.439389 2044 log.go:172] (0xc0001386e0) Data frame received for 3\nI0808 11:47:56.439426 2044 log.go:172] (0xc0005ee000) (3) Data frame handling\nI0808 11:47:56.441031 2044 log.go:172] (0xc0001386e0) Data frame received for 1\nI0808 11:47:56.441067 2044 log.go:172] (0xc0006d7360) (1) Data frame handling\nI0808 11:47:56.441111 2044 log.go:172] (0xc0006d7360) (1) Data frame sent\nI0808 11:47:56.441132 2044 log.go:172] (0xc0001386e0) (0xc0006d7360) Stream removed, broadcasting: 1\nI0808 11:47:56.441156 2044 log.go:172] (0xc0001386e0) Go away received\nI0808 11:47:56.441453 2044 log.go:172] (0xc0001386e0) (0xc0006d7360) Stream removed, broadcasting: 1\nI0808 11:47:56.441492 2044 log.go:172] (0xc0001386e0) (0xc0005ee000) Stream removed, broadcasting: 3\nI0808 11:47:56.441506 2044 log.go:172] (0xc0001386e0) (0xc0006d7400) Stream removed, broadcasting: 5\n" Aug 8 11:47:56.446: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 8 11:47:56.446: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 8 11:47:56.450: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 8 11:48:06.455: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 8 11:48:06.455: INFO: Waiting for statefulset status.replicas updated to 0 Aug 8 11:48:06.478: INFO: POD NODE PHASE GRACE CONDITIONS Aug 8 11:48:06.478: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:47:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:47:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:47:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:47:46 +0000 UTC }] Aug 8 11:48:06.478: INFO: Aug 8 11:48:06.478: INFO: StatefulSet ss has not reached scale 3, at 1 Aug 8 11:48:07.510: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.98823759s Aug 8 11:48:08.519: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.955662671s Aug 8 11:48:09.643: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.947102692s Aug 8 11:48:10.648: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.822737825s Aug 8 11:48:11.653: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.818149529s Aug 8 11:48:12.659: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.812764059s Aug 8 11:48:13.665: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.807405493s Aug 8 11:48:14.670: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.801603631s 
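The scale-up being verified above works even though ss-0 was deliberately made unready: the spec moved aside the index.html behind the pod's readiness check (the kubectl exec mv commands appear verbatim in the log), and with parallel ("burst") pod management the controller creates the remaining replicas without waiting for that pod to turn Ready. A rough manual reproduction in this run's namespace is sketched below; the kubectl scale call stands in for what the test framework does through the API and is not a line from the log:
# Make ss-0 unready without killing it: move aside the page its readiness check serves.
kubectl exec --namespace=e2e-tests-statefulset-w8phv ss-0 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
# With podManagementPolicy: Parallel, scaling does not wait for per-pod readiness,
# so the scale-up proceeds even while ss-0 reports Ready=false.
kubectl scale statefulset ss --namespace=e2e-tests-statefulset-w8phv --replicas=3
# Restoring the file lets the readiness check pass again and ss-0 returns to Ready=true.
kubectl exec --namespace=e2e-tests-statefulset-w8phv ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'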
Aug 8 11:48:15.675: INFO: Verifying statefulset ss doesn't scale past 3 for another 796.62939ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-w8phv Aug 8 11:48:16.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w8phv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 8 11:48:16.927: INFO: stderr: "I0808 11:48:16.828584 2065 log.go:172] (0xc000162840) (0xc000651400) Create stream\nI0808 11:48:16.828639 2065 log.go:172] (0xc000162840) (0xc000651400) Stream added, broadcasting: 1\nI0808 11:48:16.831264 2065 log.go:172] (0xc000162840) Reply frame received for 1\nI0808 11:48:16.831308 2065 log.go:172] (0xc000162840) (0xc0006b0000) Create stream\nI0808 11:48:16.831319 2065 log.go:172] (0xc000162840) (0xc0006b0000) Stream added, broadcasting: 3\nI0808 11:48:16.832253 2065 log.go:172] (0xc000162840) Reply frame received for 3\nI0808 11:48:16.832285 2065 log.go:172] (0xc000162840) (0xc0006514a0) Create stream\nI0808 11:48:16.832295 2065 log.go:172] (0xc000162840) (0xc0006514a0) Stream added, broadcasting: 5\nI0808 11:48:16.833141 2065 log.go:172] (0xc000162840) Reply frame received for 5\nI0808 11:48:16.917122 2065 log.go:172] (0xc000162840) Data frame received for 5\nI0808 11:48:16.917148 2065 log.go:172] (0xc0006514a0) (5) Data frame handling\nI0808 11:48:16.917166 2065 log.go:172] (0xc000162840) Data frame received for 3\nI0808 11:48:16.917171 2065 log.go:172] (0xc0006b0000) (3) Data frame handling\nI0808 11:48:16.917177 2065 log.go:172] (0xc0006b0000) (3) Data frame sent\nI0808 11:48:16.917181 2065 log.go:172] (0xc000162840) Data frame received for 3\nI0808 11:48:16.917185 2065 log.go:172] (0xc0006b0000) (3) Data frame handling\nI0808 11:48:16.918819 2065 log.go:172] (0xc000162840) Data frame received for 1\nI0808 11:48:16.918837 2065 log.go:172] (0xc000651400) (1) Data frame handling\nI0808 11:48:16.918846 2065 log.go:172] (0xc000651400) (1) Data frame sent\nI0808 11:48:16.918867 2065 log.go:172] (0xc000162840) (0xc000651400) Stream removed, broadcasting: 1\nI0808 11:48:16.918924 2065 log.go:172] (0xc000162840) Go away received\nI0808 11:48:16.919039 2065 log.go:172] (0xc000162840) (0xc000651400) Stream removed, broadcasting: 1\nI0808 11:48:16.919058 2065 log.go:172] (0xc000162840) (0xc0006b0000) Stream removed, broadcasting: 3\nI0808 11:48:16.919077 2065 log.go:172] (0xc000162840) (0xc0006514a0) Stream removed, broadcasting: 5\n" Aug 8 11:48:16.927: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 8 11:48:16.927: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 8 11:48:16.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w8phv ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 8 11:48:17.142: INFO: stderr: "I0808 11:48:17.063149 2088 log.go:172] (0xc0008462c0) (0xc00074a640) Create stream\nI0808 11:48:17.063207 2088 log.go:172] (0xc0008462c0) (0xc00074a640) Stream added, broadcasting: 1\nI0808 11:48:17.066002 2088 log.go:172] (0xc0008462c0) Reply frame received for 1\nI0808 11:48:17.066056 2088 log.go:172] (0xc0008462c0) (0xc000680d20) Create stream\nI0808 11:48:17.066071 2088 log.go:172] (0xc0008462c0) (0xc000680d20) Stream added, broadcasting: 3\nI0808 11:48:17.066967 2088 log.go:172] (0xc0008462c0) Reply frame 
received for 3\nI0808 11:48:17.066999 2088 log.go:172] (0xc0008462c0) (0xc00074a6e0) Create stream\nI0808 11:48:17.067006 2088 log.go:172] (0xc0008462c0) (0xc00074a6e0) Stream added, broadcasting: 5\nI0808 11:48:17.067805 2088 log.go:172] (0xc0008462c0) Reply frame received for 5\nI0808 11:48:17.136706 2088 log.go:172] (0xc0008462c0) Data frame received for 3\nI0808 11:48:17.136857 2088 log.go:172] (0xc0008462c0) Data frame received for 5\nI0808 11:48:17.136913 2088 log.go:172] (0xc00074a6e0) (5) Data frame handling\nI0808 11:48:17.136953 2088 log.go:172] (0xc00074a6e0) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0808 11:48:17.136976 2088 log.go:172] (0xc000680d20) (3) Data frame handling\nI0808 11:48:17.137038 2088 log.go:172] (0xc000680d20) (3) Data frame sent\nI0808 11:48:17.137057 2088 log.go:172] (0xc0008462c0) Data frame received for 3\nI0808 11:48:17.137070 2088 log.go:172] (0xc0008462c0) Data frame received for 5\nI0808 11:48:17.137089 2088 log.go:172] (0xc00074a6e0) (5) Data frame handling\nI0808 11:48:17.137139 2088 log.go:172] (0xc000680d20) (3) Data frame handling\nI0808 11:48:17.138290 2088 log.go:172] (0xc0008462c0) Data frame received for 1\nI0808 11:48:17.138318 2088 log.go:172] (0xc00074a640) (1) Data frame handling\nI0808 11:48:17.138337 2088 log.go:172] (0xc00074a640) (1) Data frame sent\nI0808 11:48:17.138373 2088 log.go:172] (0xc0008462c0) (0xc00074a640) Stream removed, broadcasting: 1\nI0808 11:48:17.138392 2088 log.go:172] (0xc0008462c0) Go away received\nI0808 11:48:17.138604 2088 log.go:172] (0xc0008462c0) (0xc00074a640) Stream removed, broadcasting: 1\nI0808 11:48:17.138624 2088 log.go:172] (0xc0008462c0) (0xc000680d20) Stream removed, broadcasting: 3\nI0808 11:48:17.138632 2088 log.go:172] (0xc0008462c0) (0xc00074a6e0) Stream removed, broadcasting: 5\n" Aug 8 11:48:17.142: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 8 11:48:17.142: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 8 11:48:17.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w8phv ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 8 11:48:17.347: INFO: stderr: "I0808 11:48:17.275136 2111 log.go:172] (0xc000138630) (0xc0009140a0) Create stream\nI0808 11:48:17.275192 2111 log.go:172] (0xc000138630) (0xc0009140a0) Stream added, broadcasting: 1\nI0808 11:48:17.280341 2111 log.go:172] (0xc000138630) Reply frame received for 1\nI0808 11:48:17.280400 2111 log.go:172] (0xc000138630) (0xc00080ad20) Create stream\nI0808 11:48:17.280422 2111 log.go:172] (0xc000138630) (0xc00080ad20) Stream added, broadcasting: 3\nI0808 11:48:17.281608 2111 log.go:172] (0xc000138630) Reply frame received for 3\nI0808 11:48:17.281653 2111 log.go:172] (0xc000138630) (0xc00080adc0) Create stream\nI0808 11:48:17.281666 2111 log.go:172] (0xc000138630) (0xc00080adc0) Stream added, broadcasting: 5\nI0808 11:48:17.282531 2111 log.go:172] (0xc000138630) Reply frame received for 5\nI0808 11:48:17.341534 2111 log.go:172] (0xc000138630) Data frame received for 5\nI0808 11:48:17.341573 2111 log.go:172] (0xc00080adc0) (5) Data frame handling\nI0808 11:48:17.341585 2111 log.go:172] (0xc00080adc0) (5) Data frame sent\nI0808 11:48:17.341592 2111 log.go:172] (0xc000138630) Data frame received for 5\nI0808 11:48:17.341598 2111 log.go:172] (0xc00080adc0) (5) Data frame handling\nmv: can't rename 
'/tmp/index.html': No such file or directory\nI0808 11:48:17.341622 2111 log.go:172] (0xc000138630) Data frame received for 3\nI0808 11:48:17.341630 2111 log.go:172] (0xc00080ad20) (3) Data frame handling\nI0808 11:48:17.341645 2111 log.go:172] (0xc00080ad20) (3) Data frame sent\nI0808 11:48:17.341656 2111 log.go:172] (0xc000138630) Data frame received for 3\nI0808 11:48:17.341664 2111 log.go:172] (0xc00080ad20) (3) Data frame handling\nI0808 11:48:17.343046 2111 log.go:172] (0xc000138630) Data frame received for 1\nI0808 11:48:17.343071 2111 log.go:172] (0xc0009140a0) (1) Data frame handling\nI0808 11:48:17.343083 2111 log.go:172] (0xc0009140a0) (1) Data frame sent\nI0808 11:48:17.343094 2111 log.go:172] (0xc000138630) (0xc0009140a0) Stream removed, broadcasting: 1\nI0808 11:48:17.343116 2111 log.go:172] (0xc000138630) Go away received\nI0808 11:48:17.343312 2111 log.go:172] (0xc000138630) (0xc0009140a0) Stream removed, broadcasting: 1\nI0808 11:48:17.343333 2111 log.go:172] (0xc000138630) (0xc00080ad20) Stream removed, broadcasting: 3\nI0808 11:48:17.343342 2111 log.go:172] (0xc000138630) (0xc00080adc0) Stream removed, broadcasting: 5\n" Aug 8 11:48:17.348: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 8 11:48:17.348: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 8 11:48:17.351: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Aug 8 11:48:27.355: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 8 11:48:27.355: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 8 11:48:27.355: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Aug 8 11:48:27.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w8phv ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 8 11:48:27.612: INFO: stderr: "I0808 11:48:27.495612 2134 log.go:172] (0xc0006f8420) (0xc0003cd2c0) Create stream\nI0808 11:48:27.495683 2134 log.go:172] (0xc0006f8420) (0xc0003cd2c0) Stream added, broadcasting: 1\nI0808 11:48:27.499203 2134 log.go:172] (0xc0006f8420) Reply frame received for 1\nI0808 11:48:27.499260 2134 log.go:172] (0xc0006f8420) (0xc00000e000) Create stream\nI0808 11:48:27.499287 2134 log.go:172] (0xc0006f8420) (0xc00000e000) Stream added, broadcasting: 3\nI0808 11:48:27.500613 2134 log.go:172] (0xc0006f8420) Reply frame received for 3\nI0808 11:48:27.500653 2134 log.go:172] (0xc0006f8420) (0xc00000e0a0) Create stream\nI0808 11:48:27.500666 2134 log.go:172] (0xc0006f8420) (0xc00000e0a0) Stream added, broadcasting: 5\nI0808 11:48:27.501774 2134 log.go:172] (0xc0006f8420) Reply frame received for 5\nI0808 11:48:27.606524 2134 log.go:172] (0xc0006f8420) Data frame received for 3\nI0808 11:48:27.606555 2134 log.go:172] (0xc00000e000) (3) Data frame handling\nI0808 11:48:27.606587 2134 log.go:172] (0xc00000e000) (3) Data frame sent\nI0808 11:48:27.606601 2134 log.go:172] (0xc0006f8420) Data frame received for 3\nI0808 11:48:27.606609 2134 log.go:172] (0xc00000e000) (3) Data frame handling\nI0808 11:48:27.607056 2134 log.go:172] (0xc0006f8420) Data frame received for 5\nI0808 11:48:27.607093 2134 log.go:172] (0xc00000e0a0) (5) Data frame handling\nI0808 11:48:27.608460 2134 log.go:172] (0xc0006f8420) 
Data frame received for 1\nI0808 11:48:27.608497 2134 log.go:172] (0xc0003cd2c0) (1) Data frame handling\nI0808 11:48:27.608516 2134 log.go:172] (0xc0003cd2c0) (1) Data frame sent\nI0808 11:48:27.608535 2134 log.go:172] (0xc0006f8420) (0xc0003cd2c0) Stream removed, broadcasting: 1\nI0808 11:48:27.608567 2134 log.go:172] (0xc0006f8420) Go away received\nI0808 11:48:27.608862 2134 log.go:172] (0xc0006f8420) (0xc0003cd2c0) Stream removed, broadcasting: 1\nI0808 11:48:27.608887 2134 log.go:172] (0xc0006f8420) (0xc00000e000) Stream removed, broadcasting: 3\nI0808 11:48:27.608914 2134 log.go:172] (0xc0006f8420) (0xc00000e0a0) Stream removed, broadcasting: 5\n" Aug 8 11:48:27.612: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 8 11:48:27.612: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 8 11:48:27.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w8phv ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 8 11:48:27.863: INFO: stderr: "I0808 11:48:27.746296 2157 log.go:172] (0xc000138840) (0xc00077e640) Create stream\nI0808 11:48:27.746351 2157 log.go:172] (0xc000138840) (0xc00077e640) Stream added, broadcasting: 1\nI0808 11:48:27.748708 2157 log.go:172] (0xc000138840) Reply frame received for 1\nI0808 11:48:27.748817 2157 log.go:172] (0xc000138840) (0xc00077e6e0) Create stream\nI0808 11:48:27.748830 2157 log.go:172] (0xc000138840) (0xc00077e6e0) Stream added, broadcasting: 3\nI0808 11:48:27.749696 2157 log.go:172] (0xc000138840) Reply frame received for 3\nI0808 11:48:27.749728 2157 log.go:172] (0xc000138840) (0xc00077e780) Create stream\nI0808 11:48:27.749743 2157 log.go:172] (0xc000138840) (0xc00077e780) Stream added, broadcasting: 5\nI0808 11:48:27.750437 2157 log.go:172] (0xc000138840) Reply frame received for 5\nI0808 11:48:27.857341 2157 log.go:172] (0xc000138840) Data frame received for 5\nI0808 11:48:27.857384 2157 log.go:172] (0xc00077e780) (5) Data frame handling\nI0808 11:48:27.857434 2157 log.go:172] (0xc000138840) Data frame received for 3\nI0808 11:48:27.857473 2157 log.go:172] (0xc00077e6e0) (3) Data frame handling\nI0808 11:48:27.857494 2157 log.go:172] (0xc00077e6e0) (3) Data frame sent\nI0808 11:48:27.857511 2157 log.go:172] (0xc000138840) Data frame received for 3\nI0808 11:48:27.857526 2157 log.go:172] (0xc00077e6e0) (3) Data frame handling\nI0808 11:48:27.859053 2157 log.go:172] (0xc000138840) Data frame received for 1\nI0808 11:48:27.859078 2157 log.go:172] (0xc00077e640) (1) Data frame handling\nI0808 11:48:27.859099 2157 log.go:172] (0xc00077e640) (1) Data frame sent\nI0808 11:48:27.859117 2157 log.go:172] (0xc000138840) (0xc00077e640) Stream removed, broadcasting: 1\nI0808 11:48:27.859286 2157 log.go:172] (0xc000138840) (0xc00077e640) Stream removed, broadcasting: 1\nI0808 11:48:27.859306 2157 log.go:172] (0xc000138840) Go away received\nI0808 11:48:27.859325 2157 log.go:172] (0xc000138840) (0xc00077e6e0) Stream removed, broadcasting: 3\nI0808 11:48:27.859377 2157 log.go:172] Streams opened: 1, map[spdy.StreamId]*spdystream.Stream{0x5:(*spdystream.Stream)(0xc00077e780)}\nI0808 11:48:27.859393 2157 log.go:172] (0xc000138840) (0xc00077e780) Stream removed, broadcasting: 5\n" Aug 8 11:48:27.863: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 8 11:48:27.863: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: 
'/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 8 11:48:27.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w8phv ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 8 11:48:28.104: INFO: stderr: "I0808 11:48:27.995009 2179 log.go:172] (0xc00070c2c0) (0xc0007405a0) Create stream\nI0808 11:48:27.995073 2179 log.go:172] (0xc00070c2c0) (0xc0007405a0) Stream added, broadcasting: 1\nI0808 11:48:27.998539 2179 log.go:172] (0xc00070c2c0) Reply frame received for 1\nI0808 11:48:27.998565 2179 log.go:172] (0xc00070c2c0) (0xc00054ec80) Create stream\nI0808 11:48:27.998574 2179 log.go:172] (0xc00070c2c0) (0xc00054ec80) Stream added, broadcasting: 3\nI0808 11:48:27.999725 2179 log.go:172] (0xc00070c2c0) Reply frame received for 3\nI0808 11:48:27.999802 2179 log.go:172] (0xc00070c2c0) (0xc000740640) Create stream\nI0808 11:48:27.999829 2179 log.go:172] (0xc00070c2c0) (0xc000740640) Stream added, broadcasting: 5\nI0808 11:48:28.000943 2179 log.go:172] (0xc00070c2c0) Reply frame received for 5\nI0808 11:48:28.097209 2179 log.go:172] (0xc00070c2c0) Data frame received for 3\nI0808 11:48:28.097261 2179 log.go:172] (0xc00054ec80) (3) Data frame handling\nI0808 11:48:28.097298 2179 log.go:172] (0xc00054ec80) (3) Data frame sent\nI0808 11:48:28.097635 2179 log.go:172] (0xc00070c2c0) Data frame received for 3\nI0808 11:48:28.097667 2179 log.go:172] (0xc00054ec80) (3) Data frame handling\nI0808 11:48:28.097855 2179 log.go:172] (0xc00070c2c0) Data frame received for 5\nI0808 11:48:28.097877 2179 log.go:172] (0xc000740640) (5) Data frame handling\nI0808 11:48:28.099436 2179 log.go:172] (0xc00070c2c0) Data frame received for 1\nI0808 11:48:28.099466 2179 log.go:172] (0xc0007405a0) (1) Data frame handling\nI0808 11:48:28.099483 2179 log.go:172] (0xc0007405a0) (1) Data frame sent\nI0808 11:48:28.099499 2179 log.go:172] (0xc00070c2c0) (0xc0007405a0) Stream removed, broadcasting: 1\nI0808 11:48:28.099517 2179 log.go:172] (0xc00070c2c0) Go away received\nI0808 11:48:28.099967 2179 log.go:172] (0xc00070c2c0) (0xc0007405a0) Stream removed, broadcasting: 1\nI0808 11:48:28.100000 2179 log.go:172] (0xc00070c2c0) (0xc00054ec80) Stream removed, broadcasting: 3\nI0808 11:48:28.100019 2179 log.go:172] (0xc00070c2c0) (0xc000740640) Stream removed, broadcasting: 5\n" Aug 8 11:48:28.105: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 8 11:48:28.105: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 8 11:48:28.105: INFO: Waiting for statefulset status.replicas updated to 0 Aug 8 11:48:28.109: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Aug 8 11:48:38.151: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 8 11:48:38.151: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 8 11:48:38.151: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 8 11:48:38.164: INFO: POD NODE PHASE GRACE CONDITIONS Aug 8 11:48:38.164: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:47:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:28 +0000 UTC ContainersNotReady 
containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:47:46 +0000 UTC }] Aug 8 11:48:38.164: INFO: ss-1 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:06 +0000 UTC }] Aug 8 11:48:38.164: INFO: ss-2 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:06 +0000 UTC }] Aug 8 11:48:38.164: INFO: Aug 8 11:48:38.164: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 8 11:48:39.189: INFO: POD NODE PHASE GRACE CONDITIONS Aug 8 11:48:39.189: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:47:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:47:46 +0000 UTC }] Aug 8 11:48:39.189: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:06 +0000 UTC }] Aug 8 11:48:39.189: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:06 +0000 UTC }] Aug 8 11:48:39.189: INFO: Aug 8 11:48:39.189: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 8 11:48:40.194: INFO: POD NODE PHASE GRACE CONDITIONS Aug 8 11:48:40.194: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:47:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:47:46 +0000 UTC }] Aug 8 11:48:40.194: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:06 +0000 UTC } {Ready 
False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:06 +0000 UTC }] Aug 8 11:48:40.194: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:06 +0000 UTC }] Aug 8 11:48:40.194: INFO: Aug 8 11:48:40.194: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 8 11:48:41.198: INFO: POD NODE PHASE GRACE CONDITIONS Aug 8 11:48:41.198: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:47:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:47:46 +0000 UTC }] Aug 8 11:48:41.198: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:06 +0000 UTC }] Aug 8 11:48:41.198: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:06 +0000 UTC }] Aug 8 11:48:41.198: INFO: Aug 8 11:48:41.198: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 8 11:48:42.203: INFO: POD NODE PHASE GRACE CONDITIONS Aug 8 11:48:42.203: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:06 +0000 UTC }] Aug 8 11:48:42.203: INFO: Aug 8 11:48:42.203: INFO: StatefulSet ss has not reached scale 0, at 1 Aug 8 11:48:43.208: INFO: POD NODE PHASE GRACE CONDITIONS Aug 8 11:48:43.208: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:28 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:06 +0000 UTC }] Aug 8 11:48:43.208: INFO: Aug 8 11:48:43.208: INFO: StatefulSet ss has not reached scale 0, at 1 Aug 8 11:48:44.212: INFO: POD NODE PHASE GRACE CONDITIONS Aug 8 11:48:44.213: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:06 +0000 UTC }] Aug 8 11:48:44.213: INFO: Aug 8 11:48:44.213: INFO: StatefulSet ss has not reached scale 0, at 1 Aug 8 11:48:45.216: INFO: POD NODE PHASE GRACE CONDITIONS Aug 8 11:48:45.216: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:06 +0000 UTC }] Aug 8 11:48:45.217: INFO: Aug 8 11:48:45.217: INFO: StatefulSet ss has not reached scale 0, at 1 Aug 8 11:48:46.222: INFO: POD NODE PHASE GRACE CONDITIONS Aug 8 11:48:46.222: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:06 +0000 UTC }] Aug 8 11:48:46.222: INFO: Aug 8 11:48:46.222: INFO: StatefulSet ss has not reached scale 0, at 1 Aug 8 11:48:47.226: INFO: POD NODE PHASE GRACE CONDITIONS Aug 8 11:48:47.226: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:48:06 +0000 UTC }] Aug 8 11:48:47.226: INFO: Aug 8 11:48:47.226: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-w8phv Aug 8 11:48:48.231: INFO: Scaling statefulset ss to 0 Aug 8 11:48:48.242: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Aug 8 11:48:48.245: INFO: Deleting all statefulset in ns e2e-tests-statefulset-w8phv Aug 8 11:48:48.247: INFO: Scaling statefulset ss to 0 Aug 8 11:48:48.256: INFO: Waiting for 
statefulset status.replicas updated to 0 Aug 8 11:48:48.258: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:48:48.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-w8phv" for this suite. Aug 8 11:48:54.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:48:54.365: INFO: namespace: e2e-tests-statefulset-w8phv, resource: bindings, ignored listing per whitelist Aug 8 11:48:54.393: INFO: namespace e2e-tests-statefulset-w8phv deletion completed in 6.113665144s • [SLOW TEST:68.655 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:48:54.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 8 11:48:54.541: INFO: Waiting up to 5m0s for pod "downwardapi-volume-22b64e25-d96d-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-projected-fcsq9" to be "success or failure" Aug 8 11:48:54.626: INFO: Pod "downwardapi-volume-22b64e25-d96d-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 85.648648ms Aug 8 11:48:56.630: INFO: Pod "downwardapi-volume-22b64e25-d96d-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089151824s Aug 8 11:48:58.634: INFO: Pod "downwardapi-volume-22b64e25-d96d-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093493895s Aug 8 11:49:00.639: INFO: Pod "downwardapi-volume-22b64e25-d96d-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.097986048s STEP: Saw pod success Aug 8 11:49:00.639: INFO: Pod "downwardapi-volume-22b64e25-d96d-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:49:00.642: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-22b64e25-d96d-11ea-aaa1-0242ac11000c container client-container: STEP: delete the pod Aug 8 11:49:00.675: INFO: Waiting for pod downwardapi-volume-22b64e25-d96d-11ea-aaa1-0242ac11000c to disappear Aug 8 11:49:00.691: INFO: Pod downwardapi-volume-22b64e25-d96d-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:49:00.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-fcsq9" for this suite. Aug 8 11:49:08.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:49:08.717: INFO: namespace: e2e-tests-projected-fcsq9, resource: bindings, ignored listing per whitelist Aug 8 11:49:08.779: INFO: namespace e2e-tests-projected-fcsq9 deletion completed in 8.084685676s • [SLOW TEST:14.385 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:49:08.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 8 11:49:08.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-4xxgp' Aug 8 11:49:11.481: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 8 11:49:11.481: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Aug 8 11:49:15.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-4xxgp' Aug 8 11:49:15.768: INFO: stderr: "" Aug 8 11:49:15.768: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:49:15.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-4xxgp" for this suite. Aug 8 11:49:21.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:49:21.836: INFO: namespace: e2e-tests-kubectl-4xxgp, resource: bindings, ignored listing per whitelist Aug 8 11:49:21.926: INFO: namespace e2e-tests-kubectl-4xxgp deletion completed in 6.154296923s • [SLOW TEST:13.147 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:49:21.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-2qt2g in namespace e2e-tests-proxy-qgg4r I0808 11:49:22.129179 6 runners.go:184] Created replication controller with name: proxy-service-2qt2g, namespace: e2e-tests-proxy-qgg4r, replica count: 1 I0808 11:49:23.179583 6 runners.go:184] proxy-service-2qt2g Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0808 11:49:24.179837 6 runners.go:184] proxy-service-2qt2g Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0808 11:49:25.180050 6 runners.go:184] proxy-service-2qt2g Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0808 11:49:26.180250 6 runners.go:184] proxy-service-2qt2g Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 
runningButNotReady I0808 11:49:27.180457 6 runners.go:184] proxy-service-2qt2g Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0808 11:49:28.180704 6 runners.go:184] proxy-service-2qt2g Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0808 11:49:29.181013 6 runners.go:184] proxy-service-2qt2g Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0808 11:49:30.181216 6 runners.go:184] proxy-service-2qt2g Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 8 11:49:30.184: INFO: setup took 8.144399125s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Aug 8 11:49:30.190: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-qgg4r/pods/http:proxy-service-2qt2g-724hv:162/proxy/: bar (200; 6.475875ms) Aug 8 11:49:30.190: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-qgg4r/pods/proxy-service-2qt2g-724hv/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-cdtbz Aug 8 11:49:44.274: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-cdtbz STEP: checking the pod's current state and verifying that restartCount is present Aug 8 11:49:44.277: INFO: Initial restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:53:45.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-cdtbz" for this suite. 
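The liveness spec that just finished watches a pod for roughly four minutes and asserts its restartCount never moves off 0. A minimal hand-written pod of the same shape is sketched below (image, command, and probe timings are illustrative, not copied from the test): the container creates /tmp/health and keeps running, so the exec probe keeps succeeding and the kubelet never restarts it.
# Exec liveness probe that always succeeds, so restartCount should stay 0.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: liveness
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# Check the restart count after a while; it should still read 0.
kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'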
Aug 8 11:53:54.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:53:54.775: INFO: namespace: e2e-tests-container-probe-cdtbz, resource: bindings, ignored listing per whitelist Aug 8 11:53:54.812: INFO: namespace e2e-tests-container-probe-cdtbz deletion completed in 9.17846892s • [SLOW TEST:254.663 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:53:54.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 8 11:53:55.229: INFO: Creating deployment "test-recreate-deployment" Aug 8 11:53:55.231: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Aug 8 11:53:55.262: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Aug 8 11:53:57.270: INFO: Waiting deployment "test-recreate-deployment" to complete Aug 8 11:53:57.274: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732484435, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732484435, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732484435, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732484435, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 8 11:53:59.278: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732484435, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732484435, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732484435, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732484435, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 8 11:54:01.277: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Aug 8 11:54:01.284: INFO: Updating deployment test-recreate-deployment Aug 8 11:54:01.284: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Aug 8 11:54:02.006: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-c5rq4,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-c5rq4/deployments/test-recreate-deployment,UID:d5f22f08-d96d-11ea-b2c9-0242ac120008,ResourceVersion:5167672,Generation:2,CreationTimestamp:2020-08-08 11:53:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-08-08 11:54:01 +0000 UTC 2020-08-08 11:54:01 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum 
availability.} {Progressing True 2020-08-08 11:54:01 +0000 UTC 2020-08-08 11:53:55 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Aug 8 11:54:02.010: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-c5rq4,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-c5rq4/replicasets/test-recreate-deployment-589c4bfd,UID:d9aa974a-d96d-11ea-b2c9-0242ac120008,ResourceVersion:5167669,Generation:1,CreationTimestamp:2020-08-08 11:54:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment d5f22f08-d96d-11ea-b2c9-0242ac120008 0xc00222825f 0xc002228270}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 8 11:54:02.010: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Aug 8 11:54:02.010: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-c5rq4,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-c5rq4/replicasets/test-recreate-deployment-5bf7f65dc,UID:d5f72d86-d96d-11ea-b2c9-0242ac120008,ResourceVersion:5167661,Generation:2,CreationTimestamp:2020-08-08 11:53:55 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment d5f22f08-d96d-11ea-b2c9-0242ac120008 0xc002228470 0xc002228471}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 8 11:54:02.164: INFO: Pod "test-recreate-deployment-589c4bfd-ld5zr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-ld5zr,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-c5rq4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-c5rq4/pods/test-recreate-deployment-589c4bfd-ld5zr,UID:d9ada470-d96d-11ea-b2c9-0242ac120008,ResourceVersion:5167668,Generation:0,CreationTimestamp:2020-08-08 11:54:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd d9aa974a-d96d-11ea-b2c9-0242ac120008 0xc0022eec5f 0xc0022eec70}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-spl4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-spl4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-spl4z true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022eed60} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022eed80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 11:54:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:54:02.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-c5rq4" for this suite. Aug 8 11:54:10.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:54:10.282: INFO: namespace: e2e-tests-deployment-c5rq4, resource: bindings, ignored listing per whitelist Aug 8 11:54:10.308: INFO: namespace e2e-tests-deployment-c5rq4 deletion completed in 8.138591909s • [SLOW TEST:15.496 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:54:10.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
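Looking back at the Recreate rollout recorded above: the "test-recreate-deployment" spec flips the Deployment strategy from the default RollingUpdate to Recreate, which is why the dump shows UpdatedReplicas:1 together with AvailableReplicas:0 while the old ReplicaSet is already scaled to zero. A minimal sketch of such a Deployment with the apps/v1 Go types (names and image taken from the log; everything else is illustrative, not the exact fixture the suite builds):

package example

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// RecreateDeployment switches the rollout strategy from the default
// RollingUpdate to Recreate: all old pods are deleted before new ones start.
func RecreateDeployment() *appsv1.Deployment {
	replicas := int32(1)
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod-3"}},
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod-3"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
}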
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Aug 8 11:54:18.493: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 8 11:54:18.509: INFO: Pod pod-with-poststart-exec-hook still exists Aug 8 11:54:20.509: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 8 11:54:20.513: INFO: Pod pod-with-poststart-exec-hook still exists Aug 8 11:54:22.509: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 8 11:54:22.514: INFO: Pod pod-with-poststart-exec-hook still exists Aug 8 11:54:24.509: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 8 11:54:24.513: INFO: Pod pod-with-poststart-exec-hook still exists Aug 8 11:54:26.509: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 8 11:54:26.512: INFO: Pod pod-with-poststart-exec-hook still exists Aug 8 11:54:28.509: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 8 11:54:28.513: INFO: Pod pod-with-poststart-exec-hook still exists Aug 8 11:54:30.509: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 8 11:54:30.513: INFO: Pod pod-with-poststart-exec-hook still exists Aug 8 11:54:32.509: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 8 11:54:32.513: INFO: Pod pod-with-poststart-exec-hook still exists Aug 8 11:54:34.509: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 8 11:54:34.535: INFO: Pod pod-with-poststart-exec-hook still exists Aug 8 11:54:36.509: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 8 11:54:36.625: INFO: Pod pod-with-poststart-exec-hook still exists Aug 8 11:54:38.509: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 8 11:54:38.513: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:54:38.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-fck94" for this suite. 
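The poststart exec hook spec just completed creates a pod whose container runs a command via an exec handler immediately after the container starts, then deletes the pod and polls until it disappears. A sketch of the relevant container stanza with the v1.13-era core/v1 Go types, where the hook handler type is named Handler (later releases renamed it LifecycleHandler); image and command are illustrative:

package example

import corev1 "k8s.io/api/core/v1"

// PodWithPostStartExecHook runs a shell command right after the container
// starts; if the hook fails, the container is restarted.
func PodWithPostStartExecHook() corev1.PodSpec {
	return corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:  "pod-with-poststart-exec-hook",
			Image: "docker.io/library/nginx:1.14-alpine",
			Lifecycle: &corev1.Lifecycle{
				PostStart: &corev1.Handler{
					Exec: &corev1.ExecAction{
						Command: []string{"sh", "-c", "echo poststart > /tmp/poststart"},
					},
				},
			},
		}},
	}
}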
Aug 8 11:55:02.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:55:02.583: INFO: namespace: e2e-tests-container-lifecycle-hook-fck94, resource: bindings, ignored listing per whitelist Aug 8 11:55:02.634: INFO: namespace e2e-tests-container-lifecycle-hook-fck94 deletion completed in 24.118128939s • [SLOW TEST:52.326 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:55:02.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 8 11:55:02.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Aug 8 11:55:02.815: INFO: stderr: "" Aug 8 11:55:02.815: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-07-10T10:25:27Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Aug 8 11:55:02.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-cp5m8' Aug 8 11:55:03.084: INFO: stderr: "" Aug 8 11:55:03.084: INFO: stdout: "replicationcontroller/redis-master created\n" Aug 8 11:55:03.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-cp5m8' Aug 8 11:55:03.410: INFO: stderr: "" Aug 8 11:55:03.410: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Aug 8 11:55:04.414: INFO: Selector matched 1 pods for map[app:redis] Aug 8 11:55:04.414: INFO: Found 0 / 1 Aug 8 11:55:05.530: INFO: Selector matched 1 pods for map[app:redis] Aug 8 11:55:05.530: INFO: Found 0 / 1 Aug 8 11:55:06.414: INFO: Selector matched 1 pods for map[app:redis] Aug 8 11:55:06.415: INFO: Found 0 / 1 Aug 8 11:55:07.415: INFO: Selector matched 1 pods for map[app:redis] Aug 8 11:55:07.415: INFO: Found 0 / 1 Aug 8 11:55:08.414: INFO: Selector matched 1 pods for map[app:redis] Aug 8 11:55:08.414: INFO: Found 1 / 1 Aug 8 11:55:08.414: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 Aug 8 11:55:08.417: INFO: Selector matched 1 pods for map[app:redis] Aug 8 11:55:08.417: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Aug 8 11:55:08.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-r69m4 --namespace=e2e-tests-kubectl-cp5m8' Aug 8 11:55:08.539: INFO: stderr: "" Aug 8 11:55:08.539: INFO: stdout: "Name: redis-master-r69m4\nNamespace: e2e-tests-kubectl-cp5m8\nPriority: 0\nPriorityClassName: \nNode: hunter-worker/172.18.0.4\nStart Time: Sat, 08 Aug 2020 11:55:03 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.116\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://50d7af1b23229c33fa516ba063f9180774035bf273a8524b56076b8987d1ccbe\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 08 Aug 2020 11:55:06 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-6nklf (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-6nklf:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-6nklf\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5s default-scheduler Successfully assigned e2e-tests-kubectl-cp5m8/redis-master-r69m4 to hunter-worker\n Normal Pulled 4s kubelet, hunter-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, hunter-worker Created container\n Normal Started 2s kubelet, hunter-worker Started container\n" Aug 8 11:55:08.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-cp5m8' Aug 8 11:55:08.679: INFO: stderr: "" Aug 8 11:55:08.679: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-cp5m8\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: redis-master-r69m4\n" Aug 8 11:55:08.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-cp5m8' Aug 8 11:55:08.796: INFO: stderr: "" Aug 8 11:55:08.796: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-cp5m8\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.110.82.233\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.116:6379\nSession Affinity: None\nEvents: \n" Aug 8 11:55:08.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node 
hunter-control-plane' Aug 8 11:55:08.924: INFO: stderr: "" Aug 8 11:55:08.925: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 10 Jul 2020 10:22:18 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sat, 08 Aug 2020 11:55:05 +0000 Fri, 10 Jul 2020 10:22:13 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 08 Aug 2020 11:55:05 +0000 Fri, 10 Jul 2020 10:22:13 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 08 Aug 2020 11:55:05 +0000 Fri, 10 Jul 2020 10:22:13 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 08 Aug 2020 11:55:05 +0000 Fri, 10 Jul 2020 10:23:08 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.8\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nSystem Info:\n Machine ID: 86b921187bcd42a69301f53c2d21b8f0\n System UUID: dbd65bbc-7a27-4b36-b69e-be53f27cba5c\n Boot ID: 11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version: 4.15.0-109-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-54ff9cd656-46fs4 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 29d\n kube-system coredns-54ff9cd656-gzt7d 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 29d\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 29d\n kube-system kindnet-r4bfs 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 29d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 29d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 29d\n kube-system kube-proxy-4jv56 0 (0%) 0 (0%) 0 (0%) 0 (0%) 29d\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 29d\n local-path-storage local-path-provisioner-674595c7-jw5rw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 29d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Aug 8 11:55:08.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-cp5m8' Aug 8 11:55:09.046: INFO: stderr: "" Aug 8 11:55:09.046: INFO: stdout: "Name: e2e-tests-kubectl-cp5m8\nLabels: e2e-framework=kubectl\n e2e-run=7e255c50-d964-11ea-aaa1-0242ac11000c\nAnnotations: \nStatus: Active\n\nNo resource 
quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:55:09.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-cp5m8" for this suite. Aug 8 11:55:31.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:55:31.106: INFO: namespace: e2e-tests-kubectl-cp5m8, resource: bindings, ignored listing per whitelist Aug 8 11:55:31.114: INFO: namespace e2e-tests-kubectl-cp5m8 deletion completed in 22.065181686s • [SLOW TEST:28.479 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:55:31.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Aug 8 11:55:31.280: INFO: Waiting up to 5m0s for pod "downward-api-0f2aa0b9-d96e-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-downward-api-w9j78" to be "success or failure" Aug 8 11:55:31.301: INFO: Pod "downward-api-0f2aa0b9-d96e-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 21.077527ms Aug 8 11:55:33.387: INFO: Pod "downward-api-0f2aa0b9-d96e-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107118265s Aug 8 11:55:35.403: INFO: Pod "downward-api-0f2aa0b9-d96e-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123355221s Aug 8 11:55:37.415: INFO: Pod "downward-api-0f2aa0b9-d96e-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.135136915s STEP: Saw pod success Aug 8 11:55:37.415: INFO: Pod "downward-api-0f2aa0b9-d96e-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:55:37.418: INFO: Trying to get logs from node hunter-worker2 pod downward-api-0f2aa0b9-d96e-11ea-aaa1-0242ac11000c container dapi-container: STEP: delete the pod Aug 8 11:55:37.455: INFO: Waiting for pod downward-api-0f2aa0b9-d96e-11ea-aaa1-0242ac11000c to disappear Aug 8 11:55:37.469: INFO: Pod downward-api-0f2aa0b9-d96e-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:55:37.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-w9j78" for this suite. Aug 8 11:55:43.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:55:43.543: INFO: namespace: e2e-tests-downward-api-w9j78, resource: bindings, ignored listing per whitelist Aug 8 11:55:43.575: INFO: namespace e2e-tests-downward-api-w9j78 deletion completed in 6.081234331s • [SLOW TEST:12.461 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:55:43.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-54lbw/secret-test-16caa822-d96e-11ea-aaa1-0242ac11000c STEP: Creating a pod to test consume secrets Aug 8 11:55:44.102: INFO: Waiting up to 5m0s for pod "pod-configmaps-16cfa54a-d96e-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-secrets-54lbw" to be "success or failure" Aug 8 11:55:44.285: INFO: Pod "pod-configmaps-16cfa54a-d96e-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 182.60984ms Aug 8 11:55:46.402: INFO: Pod "pod-configmaps-16cfa54a-d96e-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.299870463s Aug 8 11:55:48.406: INFO: Pod "pod-configmaps-16cfa54a-d96e-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.303428s Aug 8 11:55:50.470: INFO: Pod "pod-configmaps-16cfa54a-d96e-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.367637335s STEP: Saw pod success Aug 8 11:55:50.470: INFO: Pod "pod-configmaps-16cfa54a-d96e-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:55:50.473: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-16cfa54a-d96e-11ea-aaa1-0242ac11000c container env-test: STEP: delete the pod Aug 8 11:55:50.506: INFO: Waiting for pod pod-configmaps-16cfa54a-d96e-11ea-aaa1-0242ac11000c to disappear Aug 8 11:55:50.559: INFO: Pod pod-configmaps-16cfa54a-d96e-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:55:50.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-54lbw" for this suite. Aug 8 11:55:56.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:55:56.784: INFO: namespace: e2e-tests-secrets-54lbw, resource: bindings, ignored listing per whitelist Aug 8 11:55:56.853: INFO: namespace e2e-tests-secrets-54lbw deletion completed in 6.290943243s • [SLOW TEST:13.278 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:55:56.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Aug 8 11:55:56.937: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 8 11:55:56.972: INFO: Waiting for terminating namespaces to be deleted... 
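Before the scheduling spec continues, here is the wiring behind the Secrets spec that finished above ("should be consumable via the environment"): a Secret key is injected into a container environment variable and the pod's output is checked for it. A sketch with the core/v1 Go types; the secret name, key, and image are illustrative rather than the generated names in the log:

package example

import corev1 "k8s.io/api/core/v1"

// SecretEnvContainer exposes one Secret key as the SECRET_DATA environment
// variable inside the test container.
func SecretEnvContainer(secretName string) corev1.Container {
	return corev1.Container{
		Name:    "env-test",
		Image:   "docker.io/library/busybox:1.29",
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{{
			Name: "SECRET_DATA",
			ValueFrom: &corev1.EnvVarSource{
				SecretKeyRef: &corev1.SecretKeySelector{
					LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
					Key:                  "data-1",
				},
			},
		}},
	}
}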
Aug 8 11:55:56.975: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Aug 8 11:55:56.980: INFO: kube-proxy-8wnps from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container statuses recorded) Aug 8 11:55:56.980: INFO: Container kube-proxy ready: true, restart count 0 Aug 8 11:55:56.980: INFO: kindnet-2w5m4 from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container statuses recorded) Aug 8 11:55:56.980: INFO: Container kindnet-cni ready: true, restart count 0 Aug 8 11:55:56.980: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Aug 8 11:55:56.986: INFO: kube-proxy-b6f6s from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container statuses recorded) Aug 8 11:55:56.986: INFO: Container kube-proxy ready: true, restart count 0 Aug 8 11:55:56.986: INFO: kindnet-hpnvh from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container statuses recorded) Aug 8 11:55:56.986: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-20f6b228-d96e-11ea-aaa1-0242ac11000c 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-20f6b228-d96e-11ea-aaa1-0242ac11000c off the node hunter-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-20f6b228-d96e-11ea-aaa1-0242ac11000c [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:56:05.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-gw842" for this suite. 
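The SchedulerPredicates spec above labels one node with a random kubernetes.io/e2e-... key (value 42) and then relaunches the pod with a matching nodeSelector, so the scheduler can only place it on that node. A sketch of the relevant pod spec fragment; the label key and image below are illustrative, not the random key generated during the run:

package example

import corev1 "k8s.io/api/core/v1"

// NodeSelectorPodSpec only schedules onto nodes carrying the given label.
func NodeSelectorPodSpec(labelKey, labelValue string) corev1.PodSpec {
	return corev1.PodSpec{
		NodeSelector: map[string]string{labelKey: labelValue},
		Containers: []corev1.Container{{
			Name:  "with-labels",
			Image: "k8s.gcr.io/pause:3.1", // illustrative image
		}},
	}
}

// Example: NodeSelectorPodSpec("kubernetes.io/e2e-example", "42")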
Aug 8 11:56:15.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:56:15.484: INFO: namespace: e2e-tests-sched-pred-gw842, resource: bindings, ignored listing per whitelist Aug 8 11:56:15.571: INFO: namespace e2e-tests-sched-pred-gw842 deletion completed in 10.112870829s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:18.717 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:56:15.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-29a5aa57-d96e-11ea-aaa1-0242ac11000c STEP: Creating a pod to test consume configMaps Aug 8 11:56:15.741: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-29ae97b1-d96e-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-projected-wsbfv" to be "success or failure" Aug 8 11:56:15.744: INFO: Pod "pod-projected-configmaps-29ae97b1-d96e-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.657541ms Aug 8 11:56:17.757: INFO: Pod "pod-projected-configmaps-29ae97b1-d96e-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015863132s Aug 8 11:56:19.761: INFO: Pod "pod-projected-configmaps-29ae97b1-d96e-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019443358s STEP: Saw pod success Aug 8 11:56:19.761: INFO: Pod "pod-projected-configmaps-29ae97b1-d96e-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:56:19.764: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-29ae97b1-d96e-11ea-aaa1-0242ac11000c container projected-configmap-volume-test: STEP: delete the pod Aug 8 11:56:20.192: INFO: Waiting for pod pod-projected-configmaps-29ae97b1-d96e-11ea-aaa1-0242ac11000c to disappear Aug 8 11:56:20.205: INFO: Pod pod-projected-configmaps-29ae97b1-d96e-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:56:20.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wsbfv" for this suite. 
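The projected configMap spec above mounts a ConfigMap through a projected volume and runs the consuming container as a non-root user. A sketch of the volume and securityContext pieces with the core/v1 Go types; names, UID, image, and paths are illustrative:

package example

import corev1 "k8s.io/api/core/v1"

// ProjectedConfigMapPodSpec mounts a ConfigMap via a projected volume and
// forces the container to run as a non-root UID.
func ProjectedConfigMapPodSpec(configMapName string) corev1.PodSpec {
	nonRootUID := int64(1000)
	return corev1.PodSpec{
		Volumes: []corev1.Volume{{
			Name: "projected-configmap-volume",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						ConfigMap: &corev1.ConfigMapProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
						},
					}},
				},
			},
		}},
		Containers: []corev1.Container{{
			Name:            "projected-configmap-volume-test",
			Image:           "docker.io/library/busybox:1.29",
			Command:         []string{"sh", "-c", "cat /etc/projected-configmap-volume/data-1"},
			SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRootUID},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "projected-configmap-volume",
				MountPath: "/etc/projected-configmap-volume",
			}},
		}},
	}
}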
Aug 8 11:56:26.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:56:26.299: INFO: namespace: e2e-tests-projected-wsbfv, resource: bindings, ignored listing per whitelist Aug 8 11:56:26.303: INFO: namespace e2e-tests-projected-wsbfv deletion completed in 6.094563395s • [SLOW TEST:10.732 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:56:26.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 8 11:56:26.475: INFO: Waiting up to 5m0s for pod "downwardapi-volume-300ebd9a-d96e-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-downward-api-8ktc8" to be "success or failure" Aug 8 11:56:26.512: INFO: Pod "downwardapi-volume-300ebd9a-d96e-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 36.813563ms Aug 8 11:56:28.516: INFO: Pod "downwardapi-volume-300ebd9a-d96e-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040703296s Aug 8 11:56:30.520: INFO: Pod "downwardapi-volume-300ebd9a-d96e-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044625475s STEP: Saw pod success Aug 8 11:56:30.520: INFO: Pod "downwardapi-volume-300ebd9a-d96e-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:56:30.523: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-300ebd9a-d96e-11ea-aaa1-0242ac11000c container client-container: STEP: delete the pod Aug 8 11:56:30.592: INFO: Waiting for pod downwardapi-volume-300ebd9a-d96e-11ea-aaa1-0242ac11000c to disappear Aug 8 11:56:30.601: INFO: Pod downwardapi-volume-300ebd9a-d96e-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:56:30.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-8ktc8" for this suite. 
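The downward API volume spec above projects the container's own CPU limit into a file through a resourceFieldRef and then reads the file back. A sketch of that wiring; the limit value, image, and paths are illustrative:

package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// CPULimitDownwardAPIPodSpec writes the container's limits.cpu value to
// /etc/podinfo/cpu_limit via a downwardAPI volume.
func CPULimitDownwardAPIPodSpec() corev1.PodSpec {
	return corev1.PodSpec{
		Volumes: []corev1.Volume{{
			Name: "podinfo",
			VolumeSource: corev1.VolumeSource{
				DownwardAPI: &corev1.DownwardAPIVolumeSource{
					Items: []corev1.DownwardAPIVolumeFile{{
						Path: "cpu_limit",
						ResourceFieldRef: &corev1.ResourceFieldSelector{
							ContainerName: "client-container",
							Resource:      "limits.cpu",
						},
					}},
				},
			},
		}},
		Containers: []corev1.Container{{
			Name:    "client-container",
			Image:   "docker.io/library/busybox:1.29",
			Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
			Resources: corev1.ResourceRequirements{
				Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("1250m")},
			},
			VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
		}},
	}
}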
Aug 8 11:56:36.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:56:36.629: INFO: namespace: e2e-tests-downward-api-8ktc8, resource: bindings, ignored listing per whitelist Aug 8 11:56:36.745: INFO: namespace e2e-tests-downward-api-8ktc8 deletion completed in 6.140166895s • [SLOW TEST:10.441 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:56:36.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-3648c01d-d96e-11ea-aaa1-0242ac11000c STEP: Creating a pod to test consume configMaps Aug 8 11:56:36.895: INFO: Waiting up to 5m0s for pod "pod-configmaps-364d8b74-d96e-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-configmap-pkvr5" to be "success or failure" Aug 8 11:56:36.914: INFO: Pod "pod-configmaps-364d8b74-d96e-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 18.95429ms Aug 8 11:56:38.918: INFO: Pod "pod-configmaps-364d8b74-d96e-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022433036s Aug 8 11:56:40.921: INFO: Pod "pod-configmaps-364d8b74-d96e-11ea-aaa1-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 4.025259889s Aug 8 11:56:42.925: INFO: Pod "pod-configmaps-364d8b74-d96e-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029920208s STEP: Saw pod success Aug 8 11:56:42.925: INFO: Pod "pod-configmaps-364d8b74-d96e-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:56:42.928: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-364d8b74-d96e-11ea-aaa1-0242ac11000c container configmap-volume-test: STEP: delete the pod Aug 8 11:56:42.944: INFO: Waiting for pod pod-configmaps-364d8b74-d96e-11ea-aaa1-0242ac11000c to disappear Aug 8 11:56:42.949: INFO: Pod pod-configmaps-364d8b74-d96e-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:56:42.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-pkvr5" for this suite. 
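The ConfigMap volume spec above uses items to remap one key to a chosen relative path inside the mount, instead of exposing every key under its own name. A sketch of that mapping; the key and path are illustrative:

package example

import corev1 "k8s.io/api/core/v1"

// MappedConfigMapVolume mounts only the "data-2" key of the ConfigMap and
// places it at path/to/data-2 under the mount point.
func MappedConfigMapVolume(configMapName string) corev1.Volume {
	return corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
				Items: []corev1.KeyToPath{{
					Key:  "data-2",
					Path: "path/to/data-2",
				}},
			},
		},
	}
}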
Aug 8 11:56:49.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:56:49.062: INFO: namespace: e2e-tests-configmap-pkvr5, resource: bindings, ignored listing per whitelist Aug 8 11:56:49.074: INFO: namespace e2e-tests-configmap-pkvr5 deletion completed in 6.123231946s • [SLOW TEST:12.329 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:56:49.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 8 11:56:49.187: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3d9d1c59-d96e-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-projected-d6n42" to be "success or failure" Aug 8 11:56:49.207: INFO: Pod "downwardapi-volume-3d9d1c59-d96e-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 19.868266ms Aug 8 11:56:51.211: INFO: Pod "downwardapi-volume-3d9d1c59-d96e-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024209467s Aug 8 11:56:53.215: INFO: Pod "downwardapi-volume-3d9d1c59-d96e-11ea-aaa1-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 4.028069547s Aug 8 11:56:55.219: INFO: Pod "downwardapi-volume-3d9d1c59-d96e-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.032304837s STEP: Saw pod success Aug 8 11:56:55.219: INFO: Pod "downwardapi-volume-3d9d1c59-d96e-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:56:55.223: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-3d9d1c59-d96e-11ea-aaa1-0242ac11000c container client-container: STEP: delete the pod Aug 8 11:56:55.252: INFO: Waiting for pod downwardapi-volume-3d9d1c59-d96e-11ea-aaa1-0242ac11000c to disappear Aug 8 11:56:55.267: INFO: Pod downwardapi-volume-3d9d1c59-d96e-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:56:55.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-d6n42" for this suite. 
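The projected downwardAPI spec above sets defaultMode on the projected volume, so every projected file is created with those permission bits unless an individual item overrides them. A sketch with mode 0400; the field path is illustrative:

package example

import corev1 "k8s.io/api/core/v1"

// ReadOnlyDownwardAPIVolume projects the pod name into a file whose
// permissions come from the volume-level DefaultMode (0400).
func ReadOnlyDownwardAPIVolume() corev1.Volume {
	defaultMode := int32(0400)
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &defaultMode,
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				}},
			},
		},
	}
}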
Aug 8 11:57:01.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:57:01.331: INFO: namespace: e2e-tests-projected-d6n42, resource: bindings, ignored listing per whitelist Aug 8 11:57:01.351: INFO: namespace e2e-tests-projected-d6n42 deletion completed in 6.081171755s • [SLOW TEST:12.276 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:57:01.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-44f41047-d96e-11ea-aaa1-0242ac11000c STEP: Creating a pod to test consume secrets Aug 8 11:57:01.490: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-44f61126-d96e-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-projected-wv59g" to be "success or failure" Aug 8 11:57:01.495: INFO: Pod "pod-projected-secrets-44f61126-d96e-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.004397ms Aug 8 11:57:03.500: INFO: Pod "pod-projected-secrets-44f61126-d96e-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009304801s Aug 8 11:57:05.504: INFO: Pod "pod-projected-secrets-44f61126-d96e-11ea-aaa1-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 4.013703258s Aug 8 11:57:07.508: INFO: Pod "pod-projected-secrets-44f61126-d96e-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017880489s STEP: Saw pod success Aug 8 11:57:07.508: INFO: Pod "pod-projected-secrets-44f61126-d96e-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:57:07.511: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-44f61126-d96e-11ea-aaa1-0242ac11000c container projected-secret-volume-test: STEP: delete the pod Aug 8 11:57:07.547: INFO: Waiting for pod pod-projected-secrets-44f61126-d96e-11ea-aaa1-0242ac11000c to disappear Aug 8 11:57:07.560: INFO: Pod pod-projected-secrets-44f61126-d96e-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:57:07.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wv59g" for this suite. 
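The projected secret spec above is the Secret counterpart of the ConfigMap mapping shown earlier: a projected volume whose items entry remaps one Secret key to a new relative path. A sketch; the secret name, key, and path are illustrative:

package example

import corev1 "k8s.io/api/core/v1"

// MappedProjectedSecretVolume mounts one key of the Secret under a
// renamed path inside the projected volume.
func MappedProjectedSecretVolume(secretName string) corev1.Volume {
	return corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "new-path-data-1",
						}},
					},
				}},
			},
		},
	}
}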
Aug 8 11:57:13.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:57:13.605: INFO: namespace: e2e-tests-projected-wv59g, resource: bindings, ignored listing per whitelist Aug 8 11:57:13.654: INFO: namespace e2e-tests-projected-wv59g deletion completed in 6.089558096s • [SLOW TEST:12.303 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:57:13.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace Aug 8 11:57:17.866: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:57:43.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-jx7kl" for this suite. Aug 8 11:57:49.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:57:49.355: INFO: namespace: e2e-tests-namespaces-jx7kl, resource: bindings, ignored listing per whitelist Aug 8 11:57:49.376: INFO: namespace e2e-tests-namespaces-jx7kl deletion completed in 6.106287262s STEP: Destroying namespace "e2e-tests-nsdeletetest-5d42t" for this suite. Aug 8 11:57:49.378: INFO: Namespace e2e-tests-nsdeletetest-5d42t was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-5bfc4" for this suite. 
Aug 8 11:57:55.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:57:55.477: INFO: namespace: e2e-tests-nsdeletetest-5bfc4, resource: bindings, ignored listing per whitelist Aug 8 11:57:55.498: INFO: namespace e2e-tests-nsdeletetest-5bfc4 deletion completed in 6.120390581s • [SLOW TEST:41.845 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:57:55.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Aug 8 11:58:05.723: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 8 11:58:05.729: INFO: Pod pod-with-poststart-http-hook still exists Aug 8 11:58:07.729: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 8 11:58:07.733: INFO: Pod pod-with-poststart-http-hook still exists Aug 8 11:58:09.729: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 8 11:58:09.734: INFO: Pod pod-with-poststart-http-hook still exists Aug 8 11:58:11.729: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 8 11:58:11.733: INFO: Pod pod-with-poststart-http-hook still exists Aug 8 11:58:13.729: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 8 11:58:13.733: INFO: Pod pod-with-poststart-http-hook still exists Aug 8 11:58:15.729: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 8 11:58:15.734: INFO: Pod pod-with-poststart-http-hook still exists Aug 8 11:58:17.729: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 8 11:58:17.734: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:58:17.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-nk74x" for this suite. 
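The poststart http hook spec above differs from the earlier exec variant only in the handler: the kubelet issues an HTTP GET against the helper pod created in the BeforeEach instead of executing a command inside the container. A sketch of the httpGet handler, again with the v1.13-era Handler type; host, path, and port are illustrative:

package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// PostStartHTTPHook makes the kubelet issue an HTTP GET right after the
// container starts, targeting the hook-handler pod's IP and port.
func PostStartHTTPHook(handlerPodIP string) *corev1.Lifecycle {
	return &corev1.Lifecycle{
		PostStart: &corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{
				Host: handlerPodIP,
				Path: "/echo?msg=poststart",
				Port: intstr.FromInt(8080),
			},
		},
	}
}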
Aug 8 11:58:39.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:58:39.875: INFO: namespace: e2e-tests-container-lifecycle-hook-nk74x, resource: bindings, ignored listing per whitelist Aug 8 11:58:39.877: INFO: namespace e2e-tests-container-lifecycle-hook-nk74x deletion completed in 22.138180186s • [SLOW TEST:44.378 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:58:39.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-zcrwl STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zcrwl to expose endpoints map[] Aug 8 11:58:40.886: INFO: Get endpoints failed (347.241525ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Aug 8 11:58:41.891: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zcrwl exposes endpoints map[] (1.35183029s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-zcrwl STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zcrwl to expose endpoints map[pod1:[100]] Aug 8 11:58:47.468: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (5.571476646s elapsed, will retry) Aug 8 11:58:48.745: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zcrwl exposes endpoints map[pod1:[100]] (6.848518674s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-zcrwl STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zcrwl to expose endpoints map[pod1:[100] pod2:[101]] Aug 8 11:58:53.212: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zcrwl exposes endpoints map[pod1:[100] pod2:[101]] (4.432559857s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-zcrwl STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zcrwl to expose endpoints map[pod2:[101]] Aug 8 11:58:54.249: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zcrwl exposes endpoints map[pod2:[101]] (1.033542693s elapsed) STEP: Deleting pod pod2 in namespace 
e2e-tests-services-zcrwl STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zcrwl to expose endpoints map[] Aug 8 11:58:55.269: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zcrwl exposes endpoints map[] (1.015713067s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:58:55.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-zcrwl" for this suite. Aug 8 11:59:17.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:59:17.387: INFO: namespace: e2e-tests-services-zcrwl, resource: bindings, ignored listing per whitelist Aug 8 11:59:17.389: INFO: namespace e2e-tests-services-zcrwl deletion completed in 22.089187815s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:37.512 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:59:17.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 8 11:59:17.751: INFO: Waiting up to 5m0s for pod "downwardapi-volume-962e1417-d96e-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-projected-vxl56" to be "success or failure" Aug 8 11:59:17.778: INFO: Pod "downwardapi-volume-962e1417-d96e-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 27.038041ms Aug 8 11:59:19.782: INFO: Pod "downwardapi-volume-962e1417-d96e-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030746282s Aug 8 11:59:21.786: INFO: Pod "downwardapi-volume-962e1417-d96e-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.034511136s STEP: Saw pod success Aug 8 11:59:21.786: INFO: Pod "downwardapi-volume-962e1417-d96e-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 11:59:21.789: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-962e1417-d96e-11ea-aaa1-0242ac11000c container client-container: STEP: delete the pod Aug 8 11:59:21.811: INFO: Waiting for pod downwardapi-volume-962e1417-d96e-11ea-aaa1-0242ac11000c to disappear Aug 8 11:59:21.828: INFO: Pod downwardapi-volume-962e1417-d96e-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 11:59:21.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vxl56" for this suite. Aug 8 11:59:27.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 11:59:27.865: INFO: namespace: e2e-tests-projected-vxl56, resource: bindings, ignored listing per whitelist Aug 8 11:59:27.917: INFO: namespace e2e-tests-projected-vxl56 deletion completed in 6.085904391s • [SLOW TEST:10.528 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 11:59:27.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-jtvk6 Aug 8 11:59:32.065: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-jtvk6 STEP: checking the pod's current state and verifying that restartCount is present Aug 8 11:59:32.067: INFO: Initial restart count of pod liveness-http is 0 Aug 8 11:59:46.097: INFO: Restart count of pod e2e-tests-container-probe-jtvk6/liveness-http is now 1 (14.029932137s elapsed) Aug 8 12:00:04.135: INFO: Restart count of pod e2e-tests-container-probe-jtvk6/liveness-http is now 2 (32.067436094s elapsed) Aug 8 12:00:24.177: INFO: Restart count of pod e2e-tests-container-probe-jtvk6/liveness-http is now 3 (52.109195462s elapsed) Aug 8 12:00:46.222: INFO: Restart count of pod e2e-tests-container-probe-jtvk6/liveness-http is now 4 (1m14.154545681s elapsed) Aug 8 12:01:48.450: INFO: Restart count of pod e2e-tests-container-probe-jtvk6/liveness-http is now 5 (2m16.382249378s elapsed) STEP: deleting the pod 
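The restart counts above (1 through 5 inside roughly two minutes) are the signature of an HTTP liveness probe whose endpoint starts failing: each failed probe makes the kubelet kill and restart the container, so the count can only grow, and the widening intervals (about 14s to the first restart, then gaps of roughly 18s, 20s, 22s and 62s) are consistent with the kubelet's restart back-off. A minimal pod of this shape would show the same pattern; the image, args, port, path and probe timings below are illustrative assumptions, not values read from the test source:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness      # illustrative: any server whose /healthz starts returning errors shortly after startup
    args: ["/server"]               # illustrative
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
      failureThreshold: 1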
[AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 12:01:48.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-jtvk6" for this suite. Aug 8 12:01:54.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 12:01:54.561: INFO: namespace: e2e-tests-container-probe-jtvk6, resource: bindings, ignored listing per whitelist Aug 8 12:01:54.612: INFO: namespace e2e-tests-container-probe-jtvk6 deletion completed in 6.08495878s • [SLOW TEST:146.694 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 12:01:54.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 8 12:01:54.789: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Aug 8 12:01:54.795: INFO: Number of nodes with available pods: 0 Aug 8 12:01:54.795: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
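This daemon-set test hinges on spec.template.spec.nodeSelector: while no node carries the selected label the controller schedules no daemon pods, labelling one node brings exactly one daemon pod up there, and relabelling that node to a different value gets the pod evicted again; the test then switches the DaemonSet's own selector (and its update strategy to RollingUpdate) to chase the new label. A sketch of the kind of DaemonSet being exercised, with an illustrative label key/value and the nginx image this suite uses elsewhere:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue                               # illustrative: the node label the daemon pods require
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine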
Aug 8 12:01:54.880: INFO: Number of nodes with available pods: 0 Aug 8 12:01:54.880: INFO: Node hunter-worker is running more than one daemon pod Aug 8 12:01:55.884: INFO: Number of nodes with available pods: 0 Aug 8 12:01:55.884: INFO: Node hunter-worker is running more than one daemon pod Aug 8 12:01:56.884: INFO: Number of nodes with available pods: 0 Aug 8 12:01:56.884: INFO: Node hunter-worker is running more than one daemon pod Aug 8 12:01:57.906: INFO: Number of nodes with available pods: 0 Aug 8 12:01:57.906: INFO: Node hunter-worker is running more than one daemon pod Aug 8 12:01:58.885: INFO: Number of nodes with available pods: 1 Aug 8 12:01:58.885: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Aug 8 12:01:58.915: INFO: Number of nodes with available pods: 1 Aug 8 12:01:58.915: INFO: Number of running nodes: 0, number of available pods: 1 Aug 8 12:01:59.930: INFO: Number of nodes with available pods: 0 Aug 8 12:01:59.930: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Aug 8 12:01:59.944: INFO: Number of nodes with available pods: 0 Aug 8 12:01:59.944: INFO: Node hunter-worker is running more than one daemon pod Aug 8 12:02:00.960: INFO: Number of nodes with available pods: 0 Aug 8 12:02:00.960: INFO: Node hunter-worker is running more than one daemon pod Aug 8 12:02:01.948: INFO: Number of nodes with available pods: 0 Aug 8 12:02:01.948: INFO: Node hunter-worker is running more than one daemon pod Aug 8 12:02:02.948: INFO: Number of nodes with available pods: 0 Aug 8 12:02:02.948: INFO: Node hunter-worker is running more than one daemon pod Aug 8 12:02:03.948: INFO: Number of nodes with available pods: 0 Aug 8 12:02:03.948: INFO: Node hunter-worker is running more than one daemon pod Aug 8 12:02:04.948: INFO: Number of nodes with available pods: 0 Aug 8 12:02:04.949: INFO: Node hunter-worker is running more than one daemon pod Aug 8 12:02:05.948: INFO: Number of nodes with available pods: 0 Aug 8 12:02:05.948: INFO: Node hunter-worker is running more than one daemon pod Aug 8 12:02:06.949: INFO: Number of nodes with available pods: 0 Aug 8 12:02:06.949: INFO: Node hunter-worker is running more than one daemon pod Aug 8 12:02:08.440: INFO: Number of nodes with available pods: 0 Aug 8 12:02:08.440: INFO: Node hunter-worker is running more than one daemon pod Aug 8 12:02:08.949: INFO: Number of nodes with available pods: 0 Aug 8 12:02:08.949: INFO: Node hunter-worker is running more than one daemon pod Aug 8 12:02:09.996: INFO: Number of nodes with available pods: 0 Aug 8 12:02:09.997: INFO: Node hunter-worker is running more than one daemon pod Aug 8 12:02:11.091: INFO: Number of nodes with available pods: 0 Aug 8 12:02:11.091: INFO: Node hunter-worker is running more than one daemon pod Aug 8 12:02:11.948: INFO: Number of nodes with available pods: 0 Aug 8 12:02:11.948: INFO: Node hunter-worker is running more than one daemon pod Aug 8 12:02:12.948: INFO: Number of nodes with available pods: 1 Aug 8 12:02:12.948: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-c6h2j, will wait for the garbage collector to delete the 
pods Aug 8 12:02:13.014: INFO: Deleting DaemonSet.extensions daemon-set took: 6.333196ms Aug 8 12:02:13.114: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.196138ms Aug 8 12:02:27.617: INFO: Number of nodes with available pods: 0 Aug 8 12:02:27.617: INFO: Number of running nodes: 0, number of available pods: 0 Aug 8 12:02:27.620: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-c6h2j/daemonsets","resourceVersion":"5169238"},"items":null} Aug 8 12:02:27.622: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-c6h2j/pods","resourceVersion":"5169238"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 12:02:27.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-c6h2j" for this suite. Aug 8 12:02:33.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 12:02:33.709: INFO: namespace: e2e-tests-daemonsets-c6h2j, resource: bindings, ignored listing per whitelist Aug 8 12:02:33.762: INFO: namespace e2e-tests-daemonsets-c6h2j deletion completed in 6.102528807s • [SLOW TEST:39.150 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 12:02:33.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-0b131490-d96f-11ea-aaa1-0242ac11000c STEP: Creating a pod to test consume secrets Aug 8 12:02:33.898: INFO: Waiting up to 5m0s for pod "pod-secrets-0b16310e-d96f-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-secrets-vr6g9" to be "success or failure" Aug 8 12:02:33.908: INFO: Pod "pod-secrets-0b16310e-d96f-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.659898ms Aug 8 12:02:35.913: INFO: Pod "pod-secrets-0b16310e-d96f-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014279775s Aug 8 12:02:37.917: INFO: Pod "pod-secrets-0b16310e-d96f-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018703912s STEP: Saw pod success Aug 8 12:02:37.917: INFO: Pod "pod-secrets-0b16310e-d96f-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 12:02:37.920: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-0b16310e-d96f-11ea-aaa1-0242ac11000c container secret-volume-test: STEP: delete the pod Aug 8 12:02:37.968: INFO: Waiting for pod pod-secrets-0b16310e-d96f-11ea-aaa1-0242ac11000c to disappear Aug 8 12:02:37.980: INFO: Pod pod-secrets-0b16310e-d96f-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 12:02:37.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-vr6g9" for this suite. Aug 8 12:02:43.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 12:02:44.043: INFO: namespace: e2e-tests-secrets-vr6g9, resource: bindings, ignored listing per whitelist Aug 8 12:02:44.073: INFO: namespace e2e-tests-secrets-vr6g9 deletion completed in 6.08809389s • [SLOW TEST:10.310 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 12:02:44.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Aug 8 12:02:44.196: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 12:02:51.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-8hlwd" for this suite. 
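The init-container test above only logs "PodSpec: initContainers in spec.initContainers", so the pod shape it creates is easiest to see as a manifest: init containers run one at a time, in declaration order, and each must exit successfully before the next one (and finally the regular containers) may start; with restartPolicy: Never a failing init container fails the pod instead of being retried indefinitely. A sketch of such a pod, with illustrative names, image and commands:

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox                 # illustrative
    command: ["sh", "-c", "exit 0"]
  - name: init-2
    image: busybox
    command: ["sh", "-c", "exit 0"]
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo main started"]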
Aug 8 12:02:57.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 12:02:57.438: INFO: namespace: e2e-tests-init-container-8hlwd, resource: bindings, ignored listing per whitelist Aug 8 12:02:57.453: INFO: namespace e2e-tests-init-container-8hlwd deletion completed in 6.107953527s • [SLOW TEST:13.381 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 12:02:57.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-sbtv STEP: Creating a pod to test atomic-volume-subpath Aug 8 12:02:57.583: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-sbtv" in namespace "e2e-tests-subpath-dklpb" to be "success or failure" Aug 8 12:02:57.614: INFO: Pod "pod-subpath-test-secret-sbtv": Phase="Pending", Reason="", readiness=false. Elapsed: 30.839339ms Aug 8 12:02:59.617: INFO: Pod "pod-subpath-test-secret-sbtv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034085873s Aug 8 12:03:01.739: INFO: Pod "pod-subpath-test-secret-sbtv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155868705s Aug 8 12:03:03.743: INFO: Pod "pod-subpath-test-secret-sbtv": Phase="Running", Reason="", readiness=true. Elapsed: 6.159998239s Aug 8 12:03:05.747: INFO: Pod "pod-subpath-test-secret-sbtv": Phase="Running", Reason="", readiness=false. Elapsed: 8.163855103s Aug 8 12:03:07.752: INFO: Pod "pod-subpath-test-secret-sbtv": Phase="Running", Reason="", readiness=false. Elapsed: 10.168258788s Aug 8 12:03:09.756: INFO: Pod "pod-subpath-test-secret-sbtv": Phase="Running", Reason="", readiness=false. Elapsed: 12.172191073s Aug 8 12:03:11.760: INFO: Pod "pod-subpath-test-secret-sbtv": Phase="Running", Reason="", readiness=false. Elapsed: 14.176427734s Aug 8 12:03:13.763: INFO: Pod "pod-subpath-test-secret-sbtv": Phase="Running", Reason="", readiness=false. Elapsed: 16.17933612s Aug 8 12:03:15.767: INFO: Pod "pod-subpath-test-secret-sbtv": Phase="Running", Reason="", readiness=false. Elapsed: 18.183184923s Aug 8 12:03:17.782: INFO: Pod "pod-subpath-test-secret-sbtv": Phase="Running", Reason="", readiness=false. Elapsed: 20.198154182s Aug 8 12:03:19.785: INFO: Pod "pod-subpath-test-secret-sbtv": Phase="Running", Reason="", readiness=false. Elapsed: 22.201831791s Aug 8 12:03:21.789: INFO: Pod "pod-subpath-test-secret-sbtv": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.205434285s Aug 8 12:03:23.793: INFO: Pod "pod-subpath-test-secret-sbtv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.209929884s STEP: Saw pod success Aug 8 12:03:23.793: INFO: Pod "pod-subpath-test-secret-sbtv" satisfied condition "success or failure" Aug 8 12:03:23.796: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-secret-sbtv container test-container-subpath-secret-sbtv: STEP: delete the pod Aug 8 12:03:23.830: INFO: Waiting for pod pod-subpath-test-secret-sbtv to disappear Aug 8 12:03:23.837: INFO: Pod pod-subpath-test-secret-sbtv no longer exists STEP: Deleting pod pod-subpath-test-secret-sbtv Aug 8 12:03:23.837: INFO: Deleting pod "pod-subpath-test-secret-sbtv" in namespace "e2e-tests-subpath-dklpb" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 12:03:23.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-dklpb" for this suite. Aug 8 12:03:29.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 12:03:29.886: INFO: namespace: e2e-tests-subpath-dklpb, resource: bindings, ignored listing per whitelist Aug 8 12:03:29.926: INFO: namespace e2e-tests-subpath-dklpb deletion completed in 6.083936651s • [SLOW TEST:32.472 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 12:03:29.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-2c8b4c32-d96f-11ea-aaa1-0242ac11000c STEP: Creating secret with name secret-projected-all-test-volume-2c8b4bf0-d96f-11ea-aaa1-0242ac11000c STEP: Creating a pod to test Check all projections for projected volume plugin Aug 8 12:03:30.085: INFO: Waiting up to 5m0s for pod "projected-volume-2c8b4b66-d96f-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-projected-f2hrm" to be "success or failure" Aug 8 12:03:30.107: INFO: Pod "projected-volume-2c8b4b66-d96f-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.011727ms Aug 8 12:03:32.114: INFO: Pod "projected-volume-2c8b4b66-d96f-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.028712688s Aug 8 12:03:34.119: INFO: Pod "projected-volume-2c8b4b66-d96f-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033879092s STEP: Saw pod success Aug 8 12:03:34.119: INFO: Pod "projected-volume-2c8b4b66-d96f-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 12:03:34.123: INFO: Trying to get logs from node hunter-worker2 pod projected-volume-2c8b4b66-d96f-11ea-aaa1-0242ac11000c container projected-all-volume-test: STEP: delete the pod Aug 8 12:03:34.181: INFO: Waiting for pod projected-volume-2c8b4b66-d96f-11ea-aaa1-0242ac11000c to disappear Aug 8 12:03:34.197: INFO: Pod projected-volume-2c8b4b66-d96f-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 12:03:34.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-f2hrm" for this suite. Aug 8 12:03:40.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 12:03:40.317: INFO: namespace: e2e-tests-projected-f2hrm, resource: bindings, ignored listing per whitelist Aug 8 12:03:40.339: INFO: namespace e2e-tests-projected-f2hrm deletion completed in 6.138780697s • [SLOW TEST:10.414 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 12:03:40.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-32c60102-d96f-11ea-aaa1-0242ac11000c STEP: Creating a pod to test consume secrets Aug 8 12:03:40.490: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-32c6ac53-d96f-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-projected-hkmdt" to be "success or failure" Aug 8 12:03:40.500: INFO: Pod "pod-projected-secrets-32c6ac53-d96f-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.066585ms Aug 8 12:03:42.551: INFO: Pod "pod-projected-secrets-32c6ac53-d96f-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061660972s Aug 8 12:03:44.556: INFO: Pod "pod-projected-secrets-32c6ac53-d96f-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.066132519s STEP: Saw pod success Aug 8 12:03:44.556: INFO: Pod "pod-projected-secrets-32c6ac53-d96f-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 12:03:44.559: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-32c6ac53-d96f-11ea-aaa1-0242ac11000c container projected-secret-volume-test: STEP: delete the pod Aug 8 12:03:44.689: INFO: Waiting for pod pod-projected-secrets-32c6ac53-d96f-11ea-aaa1-0242ac11000c to disappear Aug 8 12:03:44.709: INFO: Pod pod-projected-secrets-32c6ac53-d96f-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 12:03:44.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hkmdt" for this suite. Aug 8 12:03:50.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 12:03:50.738: INFO: namespace: e2e-tests-projected-hkmdt, resource: bindings, ignored listing per whitelist Aug 8 12:03:50.795: INFO: namespace e2e-tests-projected-hkmdt deletion completed in 6.082542018s • [SLOW TEST:10.455 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 12:03:50.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 8 12:03:50.917: INFO: Waiting up to 5m0s for pod "downwardapi-volume-38ffe5ed-d96f-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-downward-api-w9lbc" to be "success or failure" Aug 8 12:03:50.933: INFO: Pod "downwardapi-volume-38ffe5ed-d96f-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.12329ms Aug 8 12:03:52.937: INFO: Pod "downwardapi-volume-38ffe5ed-d96f-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019988001s Aug 8 12:03:54.942: INFO: Pod "downwardapi-volume-38ffe5ed-d96f-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024501837s STEP: Saw pod success Aug 8 12:03:54.942: INFO: Pod "downwardapi-volume-38ffe5ed-d96f-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 12:03:54.945: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-38ffe5ed-d96f-11ea-aaa1-0242ac11000c container client-container: STEP: delete the pod Aug 8 12:03:54.971: INFO: Waiting for pod downwardapi-volume-38ffe5ed-d96f-11ea-aaa1-0242ac11000c to disappear Aug 8 12:03:55.021: INFO: Pod downwardapi-volume-38ffe5ed-d96f-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 12:03:55.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-w9lbc" for this suite. Aug 8 12:04:01.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 12:04:01.109: INFO: namespace: e2e-tests-downward-api-w9lbc, resource: bindings, ignored listing per whitelist Aug 8 12:04:01.118: INFO: namespace e2e-tests-downward-api-w9lbc deletion completed in 6.093292044s • [SLOW TEST:10.323 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 12:04:01.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-3f277c10-d96f-11ea-aaa1-0242ac11000c STEP: Creating a pod to test consume configMaps Aug 8 12:04:01.257: INFO: Waiting up to 5m0s for pod "pod-configmaps-3f29509e-d96f-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-configmap-r7s7k" to be "success or failure" Aug 8 12:04:01.272: INFO: Pod "pod-configmaps-3f29509e-d96f-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.153407ms Aug 8 12:04:03.275: INFO: Pod "pod-configmaps-3f29509e-d96f-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018734717s Aug 8 12:04:05.297: INFO: Pod "pod-configmaps-3f29509e-d96f-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.040747105s STEP: Saw pod success Aug 8 12:04:05.297: INFO: Pod "pod-configmaps-3f29509e-d96f-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 12:04:05.300: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-3f29509e-d96f-11ea-aaa1-0242ac11000c container configmap-volume-test: STEP: delete the pod Aug 8 12:04:05.321: INFO: Waiting for pod pod-configmaps-3f29509e-d96f-11ea-aaa1-0242ac11000c to disappear Aug 8 12:04:05.344: INFO: Pod pod-configmaps-3f29509e-d96f-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 12:04:05.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-r7s7k" for this suite. Aug 8 12:04:11.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 12:04:11.473: INFO: namespace: e2e-tests-configmap-r7s7k, resource: bindings, ignored listing per whitelist Aug 8 12:04:11.508: INFO: namespace e2e-tests-configmap-r7s7k deletion completed in 6.160233686s • [SLOW TEST:10.390 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 12:04:11.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all Aug 8 12:04:11.647: INFO: Waiting up to 5m0s for pod "client-containers-4553d60c-d96f-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-containers-8jgcl" to be "success or failure" Aug 8 12:04:11.681: INFO: Pod "client-containers-4553d60c-d96f-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 33.592323ms Aug 8 12:04:13.684: INFO: Pod "client-containers-4553d60c-d96f-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036775621s Aug 8 12:04:15.688: INFO: Pod "client-containers-4553d60c-d96f-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.040826155s STEP: Saw pod success Aug 8 12:04:15.688: INFO: Pod "client-containers-4553d60c-d96f-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 12:04:15.691: INFO: Trying to get logs from node hunter-worker pod client-containers-4553d60c-d96f-11ea-aaa1-0242ac11000c container test-container: STEP: delete the pod Aug 8 12:04:15.826: INFO: Waiting for pod client-containers-4553d60c-d96f-11ea-aaa1-0242ac11000c to disappear Aug 8 12:04:15.844: INFO: Pod client-containers-4553d60c-d96f-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 12:04:15.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-8jgcl" for this suite. Aug 8 12:04:21.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 12:04:21.881: INFO: namespace: e2e-tests-containers-8jgcl, resource: bindings, ignored listing per whitelist Aug 8 12:04:21.939: INFO: namespace e2e-tests-containers-8jgcl deletion completed in 6.092501226s • [SLOW TEST:10.431 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 12:04:21.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Aug 8 12:04:22.647: INFO: created pod pod-service-account-defaultsa Aug 8 12:04:22.647: INFO: pod pod-service-account-defaultsa service account token volume mount: true Aug 8 12:04:22.654: INFO: created pod pod-service-account-mountsa Aug 8 12:04:22.654: INFO: pod pod-service-account-mountsa service account token volume mount: true Aug 8 12:04:22.699: INFO: created pod pod-service-account-nomountsa Aug 8 12:04:22.699: INFO: pod pod-service-account-nomountsa service account token volume mount: false Aug 8 12:04:22.703: INFO: created pod pod-service-account-defaultsa-mountspec Aug 8 12:04:22.703: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Aug 8 12:04:22.721: INFO: created pod pod-service-account-mountsa-mountspec Aug 8 12:04:22.721: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Aug 8 12:04:22.758: INFO: created pod pod-service-account-nomountsa-mountspec Aug 8 12:04:22.758: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Aug 8 12:04:22.776: INFO: created pod pod-service-account-defaultsa-nomountspec Aug 8 12:04:22.776: INFO: pod 
pod-service-account-defaultsa-nomountspec service account token volume mount: false Aug 8 12:04:22.837: INFO: created pod pod-service-account-mountsa-nomountspec Aug 8 12:04:22.837: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Aug 8 12:04:22.856: INFO: created pod pod-service-account-nomountsa-nomountspec Aug 8 12:04:22.856: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 12:04:22.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-sfk9l" for this suite. Aug 8 12:04:50.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 12:04:51.042: INFO: namespace: e2e-tests-svcaccounts-sfk9l, resource: bindings, ignored listing per whitelist Aug 8 12:04:51.062: INFO: namespace e2e-tests-svcaccounts-sfk9l deletion completed in 28.175012935s • [SLOW TEST:29.122 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 12:04:51.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-nfwf STEP: Creating a pod to test atomic-volume-subpath Aug 8 12:04:51.235: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-nfwf" in namespace "e2e-tests-subpath-lb9ld" to be "success or failure" Aug 8 12:04:51.238: INFO: Pod "pod-subpath-test-configmap-nfwf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.809056ms Aug 8 12:04:53.242: INFO: Pod "pod-subpath-test-configmap-nfwf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007115087s Aug 8 12:04:55.246: INFO: Pod "pod-subpath-test-configmap-nfwf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011095138s Aug 8 12:04:57.250: INFO: Pod "pod-subpath-test-configmap-nfwf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015058054s Aug 8 12:04:59.254: INFO: Pod "pod-subpath-test-configmap-nfwf": Phase="Running", Reason="", readiness=false. Elapsed: 8.018975745s Aug 8 12:05:01.258: INFO: Pod "pod-subpath-test-configmap-nfwf": Phase="Running", Reason="", readiness=false. Elapsed: 10.022982896s Aug 8 12:05:03.262: INFO: Pod "pod-subpath-test-configmap-nfwf": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.027315419s Aug 8 12:05:05.267: INFO: Pod "pod-subpath-test-configmap-nfwf": Phase="Running", Reason="", readiness=false. Elapsed: 14.031589018s Aug 8 12:05:07.271: INFO: Pod "pod-subpath-test-configmap-nfwf": Phase="Running", Reason="", readiness=false. Elapsed: 16.03603238s Aug 8 12:05:09.276: INFO: Pod "pod-subpath-test-configmap-nfwf": Phase="Running", Reason="", readiness=false. Elapsed: 18.040543536s Aug 8 12:05:11.279: INFO: Pod "pod-subpath-test-configmap-nfwf": Phase="Running", Reason="", readiness=false. Elapsed: 20.044438482s Aug 8 12:05:13.283: INFO: Pod "pod-subpath-test-configmap-nfwf": Phase="Running", Reason="", readiness=false. Elapsed: 22.048052057s Aug 8 12:05:15.287: INFO: Pod "pod-subpath-test-configmap-nfwf": Phase="Running", Reason="", readiness=false. Elapsed: 24.052054671s Aug 8 12:05:17.291: INFO: Pod "pod-subpath-test-configmap-nfwf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.055985238s STEP: Saw pod success Aug 8 12:05:17.291: INFO: Pod "pod-subpath-test-configmap-nfwf" satisfied condition "success or failure" Aug 8 12:05:17.293: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-nfwf container test-container-subpath-configmap-nfwf: STEP: delete the pod Aug 8 12:05:17.344: INFO: Waiting for pod pod-subpath-test-configmap-nfwf to disappear Aug 8 12:05:17.356: INFO: Pod pod-subpath-test-configmap-nfwf no longer exists STEP: Deleting pod pod-subpath-test-configmap-nfwf Aug 8 12:05:17.357: INFO: Deleting pod "pod-subpath-test-configmap-nfwf" in namespace "e2e-tests-subpath-lb9ld" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 12:05:17.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-lb9ld" for this suite. 
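Like the secret subpath pod earlier, the configmap subpath pod above exercises volumeMounts[].subPath against an atomic-writer volume: subPath mounts a single entry of the volume at mountPath instead of the whole directory, and the test drives the pod through Pending, Running and Succeeded while the container reads through that mount. A minimal pod of that shape, assuming an illustrative ConfigMap named my-config with a key named key (none of these names come from the test source):

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-configmap-demo
spec:
  restartPolicy: Never
  volumes:
  - name: config
    configMap:
      name: my-config              # illustrative ConfigMap; must contain the key mounted below
  containers:
  - name: test
    image: busybox                 # illustrative
    command: ["cat", "/etc/config/key"]
    volumeMounts:
    - name: config
      mountPath: /etc/config/key
      subPath: key                 # mount only this key, not the whole ConfigMap volume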
Aug 8 12:05:23.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 12:05:23.451: INFO: namespace: e2e-tests-subpath-lb9ld, resource: bindings, ignored listing per whitelist Aug 8 12:05:23.491: INFO: namespace e2e-tests-subpath-lb9ld deletion completed in 6.077700688s • [SLOW TEST:32.429 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 12:05:23.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 8 12:05:23.593: INFO: Creating deployment "nginx-deployment" Aug 8 12:05:23.604: INFO: Waiting for observed generation 1 Aug 8 12:05:25.614: INFO: Waiting for all required pods to come up Aug 8 12:05:25.618: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Aug 8 12:05:35.627: INFO: Waiting for deployment "nginx-deployment" to complete Aug 8 12:05:35.632: INFO: Updating deployment "nginx-deployment" with a non-existent image Aug 8 12:05:35.638: INFO: Updating deployment nginx-deployment Aug 8 12:05:35.638: INFO: Waiting for observed generation 2 Aug 8 12:05:37.679: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Aug 8 12:05:37.682: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Aug 8 12:05:37.685: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Aug 8 12:05:37.693: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Aug 8 12:05:37.693: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Aug 8 12:05:37.695: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Aug 8 12:05:37.700: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Aug 8 12:05:37.700: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Aug 8 12:05:37.705: INFO: Updating deployment nginx-deployment Aug 8 12:05:37.705: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Aug 8 12:05:38.353: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Aug 8 12:05:38.394: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Aug 8 12:05:40.964: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bcmqt/deployments/nginx-deployment,UID:703e3c2a-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170230,Generation:3,CreationTimestamp:2020-08-08 12:05:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-08-08 12:05:38 +0000 UTC 2020-08-08 12:05:38 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-08-08 12:05:38 +0000 UTC 2020-08-08 12:05:23 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} Aug 8 12:05:40.968: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bcmqt/replicasets/nginx-deployment-5c98f8fb5,UID:776c28af-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170225,Generation:3,CreationTimestamp:2020-08-08 12:05:35 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 703e3c2a-d96f-11ea-b2c9-0242ac120008 0xc0028ff767 0xc0028ff768}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 8 12:05:40.968: INFO: All old ReplicaSets of Deployment "nginx-deployment": Aug 8 12:05:40.968: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bcmqt/replicasets/nginx-deployment-85ddf47c5d,UID:7040e813-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170219,Generation:3,CreationTimestamp:2020-08-08 12:05:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 703e3c2a-d96f-11ea-b2c9-0242ac120008 0xc0028ff837 0xc0028ff838}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Aug 8 12:05:40.976: INFO: Pod "nginx-deployment-5c98f8fb5-28lw2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-28lw2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-5c98f8fb5-28lw2,UID:77896284-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170154,Generation:0,CreationTimestamp:2020-08-08 12:05:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 776c28af-d96f-11ea-b2c9-0242ac120008 0xc001ff0ba7 0xc001ff0ba8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ff0c20} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ff0c40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:35 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-08-08 12:05:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.976: INFO: Pod "nginx-deployment-5c98f8fb5-2cvgq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-2cvgq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-5c98f8fb5-2cvgq,UID:79317484-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170209,Generation:0,CreationTimestamp:2020-08-08 12:05:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 776c28af-d96f-11ea-b2c9-0242ac120008 0xc001ff0d00 0xc001ff0d01}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ff0eb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ff0ed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.977: INFO: Pod "nginx-deployment-5c98f8fb5-8dc9h" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8dc9h,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-5c98f8fb5-8dc9h,UID:778e52da-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170158,Generation:0,CreationTimestamp:2020-08-08 12:05:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 776c28af-d96f-11ea-b2c9-0242ac120008 0xc001ff0f47 0xc001ff0f48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ff0fc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ff0fe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-08-08 12:05:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:35 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-08 12:05:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.977: INFO: Pod "nginx-deployment-5c98f8fb5-9zg6x" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9zg6x,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-5c98f8fb5-9zg6x,UID:7771d8bc-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170126,Generation:0,CreationTimestamp:2020-08-08 12:05:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 776c28af-d96f-11ea-b2c9-0242ac120008 0xc001ff1220 0xc001ff1221}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ff12a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ff12c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:35 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-08-08 12:05:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 
nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.977: INFO: Pod "nginx-deployment-5c98f8fb5-drp92" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-drp92,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-5c98f8fb5-drp92,UID:79313e41-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170208,Generation:0,CreationTimestamp:2020-08-08 12:05:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 776c28af-d96f-11ea-b2c9-0242ac120008 0xc001ff1380 0xc001ff1381}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ff1400} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ff1420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.977: INFO: Pod "nginx-deployment-5c98f8fb5-gsbf2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gsbf2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-5c98f8fb5-gsbf2,UID:7935c6aa-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170222,Generation:0,CreationTimestamp:2020-08-08 12:05:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 776c28af-d96f-11ea-b2c9-0242ac120008 0xc001ff1497 0xc001ff1498}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ff1510} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ff1530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.977: INFO: Pod "nginx-deployment-5c98f8fb5-lpgq7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-lpgq7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-5c98f8fb5-lpgq7,UID:7931766d-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170206,Generation:0,CreationTimestamp:2020-08-08 12:05:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 776c28af-d96f-11ea-b2c9-0242ac120008 0xc001ff15a7 0xc001ff15a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ff1620} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ff1640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.978: INFO: Pod "nginx-deployment-5c98f8fb5-mkgxt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mkgxt,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-5c98f8fb5-mkgxt,UID:79318090-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170207,Generation:0,CreationTimestamp:2020-08-08 12:05:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 776c28af-d96f-11ea-b2c9-0242ac120008 0xc001ff16b7 0xc001ff16b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ff1730} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ff1750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.978: INFO: Pod "nginx-deployment-5c98f8fb5-qb494" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qb494,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-5c98f8fb5-qb494,UID:777258bf-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170152,Generation:0,CreationTimestamp:2020-08-08 12:05:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 776c28af-d96f-11ea-b2c9-0242ac120008 0xc001ff17c7 0xc001ff17c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ff1840} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ff1860}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:35 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-08 12:05:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.978: INFO: Pod "nginx-deployment-5c98f8fb5-snx8g" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-snx8g,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-5c98f8fb5-snx8g,UID:77724bb1-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170133,Generation:0,CreationTimestamp:2020-08-08 12:05:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 776c28af-d96f-11ea-b2c9-0242ac120008 0xc001ff1920 0xc001ff1921}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ff19a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ff19c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:35 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-08 12:05:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.978: INFO: Pod "nginx-deployment-5c98f8fb5-t42ns" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-t42ns,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-5c98f8fb5-t42ns,UID:790a67fb-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170226,Generation:0,CreationTimestamp:2020-08-08 12:05:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 776c28af-d96f-11ea-b2c9-0242ac120008 0xc001ff1a80 0xc001ff1a81}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ff1b00} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ff1b20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-08-08 12:05:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.978: INFO: Pod "nginx-deployment-5c98f8fb5-x92wl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-x92wl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-5c98f8fb5-x92wl,UID:7910da2e-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170243,Generation:0,CreationTimestamp:2020-08-08 12:05:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 776c28af-d96f-11ea-b2c9-0242ac120008 0xc001ff1be0 0xc001ff1be1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] 
map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ff1c60} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ff1c80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-08 12:05:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.978: INFO: Pod "nginx-deployment-5c98f8fb5-xkfn5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xkfn5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-5c98f8fb5-xkfn5,UID:79110e24-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170235,Generation:0,CreationTimestamp:2020-08-08 12:05:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 776c28af-d96f-11ea-b2c9-0242ac120008 0xc001ff1d40 0xc001ff1d41}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ff1dc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ff1de0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-08-08 12:05:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.979: INFO: Pod "nginx-deployment-85ddf47c5d-5n4ql" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5n4ql,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-85ddf47c5d-5n4ql,UID:78dacf21-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170199,Generation:0,CreationTimestamp:2020-08-08 12:05:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7040e813-d96f-11ea-b2c9-0242ac120008 0xc001ff1ea0 0xc001ff1ea1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ff1f10} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ff1f30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-08-08 12:05:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.979: INFO: Pod "nginx-deployment-85ddf47c5d-5zwc5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5zwc5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-85ddf47c5d-5zwc5,UID:79318345-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170214,Generation:0,CreationTimestamp:2020-08-08 12:05:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7040e813-d96f-11ea-b2c9-0242ac120008 0xc001ff1fe7 0xc001ff1fe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00202e070} {node.kubernetes.io/unreachable Exists NoExecute 0xc00202e090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.979: INFO: Pod "nginx-deployment-85ddf47c5d-726cw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-726cw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-85ddf47c5d-726cw,UID:7910fa98-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170276,Generation:0,CreationTimestamp:2020-08-08 12:05:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7040e813-d96f-11ea-b2c9-0242ac120008 0xc00202e117 0xc00202e118}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00202e190} {node.kubernetes.io/unreachable Exists NoExecute 0xc00202e1b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-08-08 12:05:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.979: INFO: Pod "nginx-deployment-85ddf47c5d-8mqhw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8mqhw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-85ddf47c5d-8mqhw,UID:79319629-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170211,Generation:0,CreationTimestamp:2020-08-08 12:05:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7040e813-d96f-11ea-b2c9-0242ac120008 0xc00202e267 0xc00202e268}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00202e2f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00202e310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.979: INFO: Pod "nginx-deployment-85ddf47c5d-9krbq" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9krbq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-85ddf47c5d-9krbq,UID:790a84f0-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170233,Generation:0,CreationTimestamp:2020-08-08 12:05:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7040e813-d96f-11ea-b2c9-0242ac120008 0xc00202e387 0xc00202e388}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00202e400} {node.kubernetes.io/unreachable Exists NoExecute 0xc00202e420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-08 12:05:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.979: INFO: Pod "nginx-deployment-85ddf47c5d-dlwlj" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dlwlj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-85ddf47c5d-dlwlj,UID:704fadac-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170095,Generation:0,CreationTimestamp:2020-08-08 12:05:23 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7040e813-d96f-11ea-b2c9-0242ac120008 0xc00202e4d7 0xc00202e4d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00202e550} {node.kubernetes.io/unreachable Exists NoExecute 0xc00202e570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:23 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.56,StartTime:2020-08-08 12:05:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-08 12:05:33 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://52f533188b2c88df1dff4bd9416c8f271787472289a5cce1b3246e76f1250e64}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.979: INFO: Pod "nginx-deployment-85ddf47c5d-g4cdk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-g4cdk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-85ddf47c5d-g4cdk,UID:79319291-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170213,Generation:0,CreationTimestamp:2020-08-08 12:05:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7040e813-d96f-11ea-b2c9-0242ac120008 0xc00202e637 0xc00202e638}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00202e6c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00202e6f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.980: INFO: Pod "nginx-deployment-85ddf47c5d-gcpzd" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gcpzd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-85ddf47c5d-gcpzd,UID:704fc9bb-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170053,Generation:0,CreationTimestamp:2020-08-08 12:05:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7040e813-d96f-11ea-b2c9-0242ac120008 0xc00202e767 0xc00202e768}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00202e7e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00202e800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:23 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.55,StartTime:2020-08-08 12:05:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-08 12:05:30 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://00105f4c31ece60ada818c567da5cb33d36dd4388e5438e6f2d9e23e1db62dd5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.980: INFO: Pod "nginx-deployment-85ddf47c5d-gdgcz" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gdgcz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-85ddf47c5d-gdgcz,UID:70517aab-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170087,Generation:0,CreationTimestamp:2020-08-08 12:05:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7040e813-d96f-11ea-b2c9-0242ac120008 0xc00202e8e7 0xc00202e8e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00202e960} {node.kubernetes.io/unreachable Exists NoExecute 0xc00202e980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:23 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.135,StartTime:2020-08-08 12:05:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-08 12:05:33 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://376bff80f947cfec481b44ab3bdac91c6b04bd09f290364226adb0ac63d8691c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.980: INFO: Pod "nginx-deployment-85ddf47c5d-gxtw8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gxtw8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-85ddf47c5d-gxtw8,UID:79111b6f-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170189,Generation:0,CreationTimestamp:2020-08-08 12:05:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7040e813-d96f-11ea-b2c9-0242ac120008 0xc00202ea57 0xc00202ea58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00202ead0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00202eaf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.980: INFO: Pod "nginx-deployment-85ddf47c5d-jx4t2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jx4t2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-85ddf47c5d-jx4t2,UID:791118da-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170200,Generation:0,CreationTimestamp:2020-08-08 12:05:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7040e813-d96f-11ea-b2c9-0242ac120008 0xc00202eb67 0xc00202eb68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00202ebe0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00202ec00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.980: INFO: Pod "nginx-deployment-85ddf47c5d-lkcvf" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lkcvf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-85ddf47c5d-lkcvf,UID:704a23a0-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170040,Generation:0,CreationTimestamp:2020-08-08 12:05:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7040e813-d96f-11ea-b2c9-0242ac120008 0xc00202ec77 0xc00202ec78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00202ecf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00202ed10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:23 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.54,StartTime:2020-08-08 12:05:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-08 12:05:27 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://194aedcaf822f14e78bb36f6464a1a568f424bf0ac92eaa60ae9fb0727566339}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.980: INFO: Pod "nginx-deployment-85ddf47c5d-ln9mn" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ln9mn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-85ddf47c5d-ln9mn,UID:7911139c-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170277,Generation:0,CreationTimestamp:2020-08-08 12:05:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7040e813-d96f-11ea-b2c9-0242ac120008 0xc00202edd7 0xc00202edd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00202ee50} {node.kubernetes.io/unreachable Exists NoExecute 0xc00202ee70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-08 12:05:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.980: INFO: Pod "nginx-deployment-85ddf47c5d-ph8rr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ph8rr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-85ddf47c5d-ph8rr,UID:7931912c-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170212,Generation:0,CreationTimestamp:2020-08-08 12:05:38 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7040e813-d96f-11ea-b2c9-0242ac120008 0xc00202ef27 0xc00202ef28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00202efa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00202efc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.980: INFO: Pod "nginx-deployment-85ddf47c5d-ppk7h" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ppk7h,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-85ddf47c5d-ppk7h,UID:704bbe15-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170092,Generation:0,CreationTimestamp:2020-08-08 12:05:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7040e813-d96f-11ea-b2c9-0242ac120008 0xc00202f037 0xc00202f038}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00202f0b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00202f0d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:23 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.132,StartTime:2020-08-08 12:05:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-08 12:05:33 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://fce96ff26be0cdeeac8dd78233877079c3cec048716a4c1076a1858e64e24cb4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.981: INFO: Pod "nginx-deployment-85ddf47c5d-qxd9h" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qxd9h,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-85ddf47c5d-qxd9h,UID:704bc598-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170061,Generation:0,CreationTimestamp:2020-08-08 12:05:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7040e813-d96f-11ea-b2c9-0242ac120008 0xc00202f197 0xc00202f198}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00202f210} {node.kubernetes.io/unreachable Exists NoExecute 0xc00202f230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:23 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.131,StartTime:2020-08-08 12:05:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-08 12:05:30 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://48394f89abd69f96c15cc0b5548bef3faf268901fbd42f0002cefc8cf693cd48}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.981: INFO: Pod "nginx-deployment-85ddf47c5d-shgwv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-shgwv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-85ddf47c5d-shgwv,UID:79318782-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170210,Generation:0,CreationTimestamp:2020-08-08 12:05:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7040e813-d96f-11ea-b2c9-0242ac120008 0xc00202f2f7 0xc00202f2f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00202f370} {node.kubernetes.io/unreachable Exists NoExecute 0xc00202f390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.981: INFO: Pod "nginx-deployment-85ddf47c5d-v95r8" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-v95r8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-85ddf47c5d-v95r8,UID:705187b4-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170101,Generation:0,CreationTimestamp:2020-08-08 12:05:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7040e813-d96f-11ea-b2c9-0242ac120008 0xc00202f407 0xc00202f408}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00202f480} {node.kubernetes.io/unreachable Exists NoExecute 0xc00202f4a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:23 +0000 UTC } {Ready True 
0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:23 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.58,StartTime:2020-08-08 12:05:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-08 12:05:33 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://473ae896a18279a2d486bcea4f6ad2f4711ef2c2525cb71a50ec595842f7d715}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.981: INFO: Pod "nginx-deployment-85ddf47c5d-wdxfg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wdxfg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-85ddf47c5d-wdxfg,UID:704fdb2e-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170083,Generation:0,CreationTimestamp:2020-08-08 12:05:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7040e813-d96f-11ea-b2c9-0242ac120008 0xc00202f567 0xc00202f568}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00202f5e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00202f600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:23 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.133,StartTime:2020-08-08 12:05:23 +0000 UTC,ContainerStatuses:[{nginx {nil 
ContainerStateRunning{StartedAt:2020-08-08 12:05:33 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://271b568601e2393e66cfaf4d0869750019a8d6f8f07a9cfa21365afda83651cb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 8 12:05:40.981: INFO: Pod "nginx-deployment-85ddf47c5d-z7rfm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-z7rfm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bcmqt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bcmqt/pods/nginx-deployment-85ddf47c5d-z7rfm,UID:790a9ba3-d96f-11ea-b2c9-0242ac120008,ResourceVersion:5170220,Generation:0,CreationTimestamp:2020-08-08 12:05:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7040e813-d96f-11ea-b2c9-0242ac120008 0xc00202f6c7 0xc00202f6c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-545z7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-545z7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-545z7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00202f740} {node.kubernetes.io/unreachable Exists NoExecute 0xc00202f760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-08 12:05:38 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-08 12:05:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 12:05:40.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-bcmqt" for this suite. Aug 8 12:05:59.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 12:05:59.651: INFO: namespace: e2e-tests-deployment-bcmqt, resource: bindings, ignored listing per whitelist Aug 8 12:05:59.693: INFO: namespace e2e-tests-deployment-bcmqt deletion completed in 18.66846349s • [SLOW TEST:36.202 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 12:05:59.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 8 12:06:00.063: INFO: Waiting up to 5m0s for pod "downwardapi-volume-85fa2846-d96f-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-projected-rh6h2" to be "success or failure" Aug 8 12:06:00.174: INFO: Pod "downwardapi-volume-85fa2846-d96f-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 110.671104ms Aug 8 12:06:02.178: INFO: Pod "downwardapi-volume-85fa2846-d96f-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114807974s Aug 8 12:06:04.182: INFO: Pod "downwardapi-volume-85fa2846-d96f-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.119174156s STEP: Saw pod success Aug 8 12:06:04.182: INFO: Pod "downwardapi-volume-85fa2846-d96f-11ea-aaa1-0242ac11000c" satisfied condition "success or failure" Aug 8 12:06:04.185: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-85fa2846-d96f-11ea-aaa1-0242ac11000c container client-container: STEP: delete the pod Aug 8 12:06:04.213: INFO: Waiting for pod downwardapi-volume-85fa2846-d96f-11ea-aaa1-0242ac11000c to disappear Aug 8 12:06:04.263: INFO: Pod downwardapi-volume-85fa2846-d96f-11ea-aaa1-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 12:06:04.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-rh6h2" for this suite. 
Aug 8 12:06:10.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 12:06:10.300: INFO: namespace: e2e-tests-projected-rh6h2, resource: bindings, ignored listing per whitelist Aug 8 12:06:10.356: INFO: namespace e2e-tests-projected-rh6h2 deletion completed in 6.089549485s • [SLOW TEST:10.663 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 12:06:10.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 8 12:06:10.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-pbxmd" for this suite. 
Aug 8 12:06:32.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 8 12:06:32.592: INFO: namespace: e2e-tests-pods-pbxmd, resource: bindings, ignored listing per whitelist Aug 8 12:06:32.618: INFO: namespace e2e-tests-pods-pbxmd deletion completed in 22.109822572s • [SLOW TEST:22.261 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 8 12:06:32.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 8 12:06:32.709: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
alternatives.log
containers/

(the same kubelet /logs/ directory listing is repeated for each proxied request; the remainder of this proxy spec's output is truncated here)
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0808 12:07:19.121466       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug  8 12:07:19.121: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:07:19.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-xnvr2" for this suite.
Aug  8 12:07:27.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:07:27.376: INFO: namespace: e2e-tests-gc-xnvr2, resource: bindings, ignored listing per whitelist
Aug  8 12:07:27.384: INFO: namespace e2e-tests-gc-xnvr2 deletion completed in 8.258495684s

• [SLOW TEST:48.499 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
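The orphan behaviour exercised in the spec above comes down to deleting the owning ReplicationController with DeleteOptions whose PropagationPolicy is Orphan, so the garbage collector leaves the dependent pods alone. A minimal client-go sketch of that call, assuming a current client-go release (the cluster in this run is v1.13, whose Delete method took a *DeleteOptions instead), the kubeconfig path shown in the log, and placeholder namespace and ReplicationController names:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig the e2e framework uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Delete the RC but orphan its pods, mirroring the "delete options say so" step.
	// Namespace and RC name are placeholders, not the generated e2e names.
	orphan := metav1.DeletePropagationOrphan
	err = cs.CoreV1().ReplicationControllers("default").Delete(
		context.TODO(), "simpletest-rc", metav1.DeleteOptions{PropagationPolicy: &orphan})
	if err != nil {
		panic(err)
	}

	// The pods created by the RC should survive the delete; list them to confirm.
	pods, err := cs.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d pods still present after the orphaning delete\n", len(pods.Items))
}

The 30-second wait in the spec is the window in which those orphaned pods must remain listable for the test to pass.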
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:07:27.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Aug  8 12:07:27.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Aug  8 12:07:28.320: INFO: stderr: ""
Aug  8 12:07:28.320: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:07:28.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vf9ff" for this suite.
Aug  8 12:07:34.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:07:34.445: INFO: namespace: e2e-tests-kubectl-vf9ff, resource: bindings, ignored listing per whitelist
Aug  8 12:07:34.484: INFO: namespace e2e-tests-kubectl-vf9ff deletion completed in 6.152357355s

• [SLOW TEST:7.099 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
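The api-versions check above shells out to kubectl; the same group/version list is served by the discovery endpoint that kubectl reads. A rough equivalent using client-go's discovery client, again assuming a recent client-go and the kubeconfig path from the log:

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// ServerGroups returns the group/version list that `kubectl api-versions` prints.
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			// For the core group this prints just "v1"; otherwise group/version such as "apps/v1".
			fmt.Println(v.GroupVersion)
		}
	}
}

The spec only asserts that plain "v1", the core group's GroupVersion, appears in that list.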
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:07:34.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0808 12:07:46.558228       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug  8 12:07:46.558: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:07:46.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-lzj5z" for this suite.
Aug  8 12:07:54.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:07:54.955: INFO: namespace: e2e-tests-gc-lzj5z, resource: bindings, ignored listing per whitelist
Aug  8 12:07:54.967: INFO: namespace e2e-tests-gc-lzj5z deletion completed in 8.405047887s

• [SLOW TEST:20.483 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
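The setup step above ("set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well") relies on pods carrying a second OwnerReference. A hedged sketch of adding such a reference with client-go; the pod name, namespace, and the plain Update call are illustrative (the e2e test patches the pods it created itself), and only the owner RC name comes from the log:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns := "default" // placeholder namespace
	pods := cs.CoreV1().Pods(ns)

	// Fetch one of the pods created by the RC that is going to be deleted (name is a placeholder).
	pod, err := pods.Get(context.TODO(), "simpletest-pod", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Look up the RC that should keep the pod alive, so its UID can go into the reference.
	keeper, err := cs.CoreV1().ReplicationControllers(ns).Get(
		context.TODO(), "simpletest-rc-to-stay", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Append a second owner reference; with one valid owner remaining, the garbage
	// collector must not delete the pod when the other owner is removed.
	pod.OwnerReferences = append(pod.OwnerReferences, metav1.OwnerReference{
		APIVersion: "v1",
		Kind:       "ReplicationController",
		Name:       keeper.Name,
		UID:        keeper.UID,
	})
	if _, err := pods.Update(context.TODO(), pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}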
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:07:54.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-caadaad8-d96f-11ea-aaa1-0242ac11000c
STEP: Creating a pod to test consume secrets
Aug  8 12:07:55.331: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-caae3027-d96f-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-projected-72n4m" to be "success or failure"
Aug  8 12:07:55.661: INFO: Pod "pod-projected-secrets-caae3027-d96f-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 329.896387ms
Aug  8 12:07:57.665: INFO: Pod "pod-projected-secrets-caae3027-d96f-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.334026759s
Aug  8 12:07:59.669: INFO: Pod "pod-projected-secrets-caae3027-d96f-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.338321321s
Aug  8 12:08:01.674: INFO: Pod "pod-projected-secrets-caae3027-d96f-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.342992039s
STEP: Saw pod success
Aug  8 12:08:01.674: INFO: Pod "pod-projected-secrets-caae3027-d96f-11ea-aaa1-0242ac11000c" satisfied condition "success or failure"
Aug  8 12:08:01.677: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-caae3027-d96f-11ea-aaa1-0242ac11000c container projected-secret-volume-test: 
STEP: delete the pod
Aug  8 12:08:01.750: INFO: Waiting for pod pod-projected-secrets-caae3027-d96f-11ea-aaa1-0242ac11000c to disappear
Aug  8 12:08:01.753: INFO: Pod pod-projected-secrets-caae3027-d96f-11ea-aaa1-0242ac11000c no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:08:01.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-72n4m" for this suite.
Aug  8 12:08:07.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:08:07.834: INFO: namespace: e2e-tests-projected-72n4m, resource: bindings, ignored listing per whitelist
Aug  8 12:08:07.855: INFO: namespace e2e-tests-projected-72n4m deletion completed in 6.098262745s

• [SLOW TEST:12.888 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:08:07.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug  8 12:08:07.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-pwlfh'
Aug  8 12:08:10.369: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug  8 12:08:10.369: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Aug  8 12:08:10.378: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Aug  8 12:08:10.433: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Aug  8 12:08:10.465: INFO: scanned /root for discovery docs: 
Aug  8 12:08:10.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-pwlfh'
Aug  8 12:08:26.321: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug  8 12:08:26.321: INFO: stdout: "Created e2e-test-nginx-rc-ff225c7d06f11750ae149f003b45f876\nScaling up e2e-test-nginx-rc-ff225c7d06f11750ae149f003b45f876 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-ff225c7d06f11750ae149f003b45f876 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-ff225c7d06f11750ae149f003b45f876 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Aug  8 12:08:26.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-pwlfh'
Aug  8 12:08:26.438: INFO: stderr: ""
Aug  8 12:08:26.438: INFO: stdout: "e2e-test-nginx-rc-7tgkh e2e-test-nginx-rc-ff225c7d06f11750ae149f003b45f876-zm8vn "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Aug  8 12:08:31.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-pwlfh'
Aug  8 12:08:31.545: INFO: stderr: ""
Aug  8 12:08:31.545: INFO: stdout: "e2e-test-nginx-rc-ff225c7d06f11750ae149f003b45f876-zm8vn "
Aug  8 12:08:31.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-ff225c7d06f11750ae149f003b45f876-zm8vn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pwlfh'
Aug  8 12:08:31.639: INFO: stderr: ""
Aug  8 12:08:31.639: INFO: stdout: "true"
Aug  8 12:08:31.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-ff225c7d06f11750ae149f003b45f876-zm8vn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pwlfh'
Aug  8 12:08:31.743: INFO: stderr: ""
Aug  8 12:08:31.743: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Aug  8 12:08:31.743: INFO: e2e-test-nginx-rc-ff225c7d06f11750ae149f003b45f876-zm8vn is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Aug  8 12:08:31.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-pwlfh'
Aug  8 12:08:31.851: INFO: stderr: ""
Aug  8 12:08:31.851: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:08:31.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pwlfh" for this suite.
Aug  8 12:08:53.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:08:53.953: INFO: namespace: e2e-tests-kubectl-pwlfh, resource: bindings, ignored listing per whitelist
Aug  8 12:08:53.976: INFO: namespace e2e-tests-kubectl-pwlfh deletion completed in 22.117914629s

• [SLOW TEST:46.121 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:08:53.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug  8 12:08:54.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:08:58.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-5kk29" for this suite.
Aug  8 12:09:36.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:09:36.328: INFO: namespace: e2e-tests-pods-5kk29, resource: bindings, ignored listing per whitelist
Aug  8 12:09:36.364: INFO: namespace e2e-tests-pods-5kk29 deletion completed in 38.108301875s

• [SLOW TEST:42.387 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:09:36.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug  8 12:09:36.453: INFO: Waiting up to 5m0s for pod "pod-06f4a6b2-d970-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-emptydir-tnlcr" to be "success or failure"
Aug  8 12:09:36.469: INFO: Pod "pod-06f4a6b2-d970-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.702401ms
Aug  8 12:09:38.473: INFO: Pod "pod-06f4a6b2-d970-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019904369s
Aug  8 12:09:40.488: INFO: Pod "pod-06f4a6b2-d970-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034868416s
STEP: Saw pod success
Aug  8 12:09:40.488: INFO: Pod "pod-06f4a6b2-d970-11ea-aaa1-0242ac11000c" satisfied condition "success or failure"
Aug  8 12:09:40.491: INFO: Trying to get logs from node hunter-worker pod pod-06f4a6b2-d970-11ea-aaa1-0242ac11000c container test-container: 
STEP: delete the pod
Aug  8 12:09:40.527: INFO: Waiting for pod pod-06f4a6b2-d970-11ea-aaa1-0242ac11000c to disappear
Aug  8 12:09:40.542: INFO: Pod pod-06f4a6b2-d970-11ea-aaa1-0242ac11000c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:09:40.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-tnlcr" for this suite.
Aug  8 12:09:46.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:09:46.583: INFO: namespace: e2e-tests-emptydir-tnlcr, resource: bindings, ignored listing per whitelist
Aug  8 12:09:46.675: INFO: namespace e2e-tests-emptydir-tnlcr deletion completed in 6.129940563s

• [SLOW TEST:10.311 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:09:46.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Aug  8 12:09:46.802: INFO: Waiting up to 5m0s for pod "var-expansion-0d1b94db-d970-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-var-expansion-xm4ld" to be "success or failure"
Aug  8 12:09:46.812: INFO: Pod "var-expansion-0d1b94db-d970-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.573101ms
Aug  8 12:09:48.860: INFO: Pod "var-expansion-0d1b94db-d970-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057166415s
Aug  8 12:09:50.864: INFO: Pod "var-expansion-0d1b94db-d970-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061734762s
STEP: Saw pod success
Aug  8 12:09:50.864: INFO: Pod "var-expansion-0d1b94db-d970-11ea-aaa1-0242ac11000c" satisfied condition "success or failure"
Aug  8 12:09:50.867: INFO: Trying to get logs from node hunter-worker pod var-expansion-0d1b94db-d970-11ea-aaa1-0242ac11000c container dapi-container: 
STEP: delete the pod
Aug  8 12:09:50.891: INFO: Waiting for pod var-expansion-0d1b94db-d970-11ea-aaa1-0242ac11000c to disappear
Aug  8 12:09:50.895: INFO: Pod var-expansion-0d1b94db-d970-11ea-aaa1-0242ac11000c no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:09:50.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-xm4ld" for this suite.
Aug  8 12:09:56.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:09:56.969: INFO: namespace: e2e-tests-var-expansion-xm4ld, resource: bindings, ignored listing per whitelist
Aug  8 12:09:56.984: INFO: namespace e2e-tests-var-expansion-xm4ld deletion completed in 6.086109517s

• [SLOW TEST:10.309 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:09:56.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Aug  8 12:09:57.167: INFO: PodSpec: initContainers in spec.initContainers
Aug  8 12:10:48.220: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-134e2569-d970-11ea-aaa1-0242ac11000c", GenerateName:"", Namespace:"e2e-tests-init-container-pw5fg", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-pw5fg/pods/pod-init-134e2569-d970-11ea-aaa1-0242ac11000c", UID:"134ed983-d970-11ea-b2c9-0242ac120008", ResourceVersion:"5171718", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63732485397, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"167455265"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-7tfsp", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0016dc400), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7tfsp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7tfsp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7tfsp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000ed9868), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00125f2c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000ed9910)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000ed99a0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000ed99a8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000ed99ac)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732485397, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732485397, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732485397, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732485397, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.4", PodIP:"10.244.2.165", StartTime:(*v1.Time)(0xc000ba1b40), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000ba1b80), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000268770)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://2ae0a800b4b9a525d83c4303553d74c472941c1bdf894c932246c2379b728d2a"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000ba1ba0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000ba1b60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:10:48.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-pw5fg" for this suite.
Aug  8 12:11:10.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:11:10.287: INFO: namespace: e2e-tests-init-container-pw5fg, resource: bindings, ignored listing per whitelist
Aug  8 12:11:10.324: INFO: namespace e2e-tests-init-container-pw5fg deletion completed in 22.097239552s

• [SLOW TEST:73.340 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:11:10.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Aug  8 12:11:10.427: INFO: Waiting up to 5m0s for pod "downward-api-3ef6b0ec-d970-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-downward-api-qz4h5" to be "success or failure"
Aug  8 12:11:10.431: INFO: Pod "downward-api-3ef6b0ec-d970-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.599469ms
Aug  8 12:11:12.435: INFO: Pod "downward-api-3ef6b0ec-d970-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007909579s
Aug  8 12:11:14.439: INFO: Pod "downward-api-3ef6b0ec-d970-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012169835s
STEP: Saw pod success
Aug  8 12:11:14.439: INFO: Pod "downward-api-3ef6b0ec-d970-11ea-aaa1-0242ac11000c" satisfied condition "success or failure"
Aug  8 12:11:14.443: INFO: Trying to get logs from node hunter-worker2 pod downward-api-3ef6b0ec-d970-11ea-aaa1-0242ac11000c container dapi-container: 
STEP: delete the pod
Aug  8 12:11:14.486: INFO: Waiting for pod downward-api-3ef6b0ec-d970-11ea-aaa1-0242ac11000c to disappear
Aug  8 12:11:14.496: INFO: Pod downward-api-3ef6b0ec-d970-11ea-aaa1-0242ac11000c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:11:14.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qz4h5" for this suite.
Aug  8 12:11:20.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:11:20.560: INFO: namespace: e2e-tests-downward-api-qz4h5, resource: bindings, ignored listing per whitelist
Aug  8 12:11:20.654: INFO: namespace e2e-tests-downward-api-qz4h5 deletion completed in 6.150756643s

• [SLOW TEST:10.329 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:11:20.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-gwhwz/configmap-test-452adea1-d970-11ea-aaa1-0242ac11000c
STEP: Creating a pod to test consume configMaps
Aug  8 12:11:20.852: INFO: Waiting up to 5m0s for pod "pod-configmaps-452cfae9-d970-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-configmap-gwhwz" to be "success or failure"
Aug  8 12:11:20.886: INFO: Pod "pod-configmaps-452cfae9-d970-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 34.485737ms
Aug  8 12:11:23.053: INFO: Pod "pod-configmaps-452cfae9-d970-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201257337s
Aug  8 12:11:25.057: INFO: Pod "pod-configmaps-452cfae9-d970-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.205484014s
STEP: Saw pod success
Aug  8 12:11:25.057: INFO: Pod "pod-configmaps-452cfae9-d970-11ea-aaa1-0242ac11000c" satisfied condition "success or failure"
Aug  8 12:11:25.061: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-452cfae9-d970-11ea-aaa1-0242ac11000c container env-test: 
STEP: delete the pod
Aug  8 12:11:25.097: INFO: Waiting for pod pod-configmaps-452cfae9-d970-11ea-aaa1-0242ac11000c to disappear
Aug  8 12:11:25.102: INFO: Pod pod-configmaps-452cfae9-d970-11ea-aaa1-0242ac11000c no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:11:25.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-gwhwz" for this suite.
Aug  8 12:11:31.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:11:31.178: INFO: namespace: e2e-tests-configmap-gwhwz, resource: bindings, ignored listing per whitelist
Aug  8 12:11:31.208: INFO: namespace e2e-tests-configmap-gwhwz deletion completed in 6.099978759s

• [SLOW TEST:10.554 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:11:31.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug  8 12:11:31.307: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b69d707-d970-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-projected-xm6tv" to be "success or failure"
Aug  8 12:11:31.322: INFO: Pod "downwardapi-volume-4b69d707-d970-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.179107ms
Aug  8 12:11:33.515: INFO: Pod "downwardapi-volume-4b69d707-d970-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207170632s
Aug  8 12:11:35.519: INFO: Pod "downwardapi-volume-4b69d707-d970-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.211533764s
STEP: Saw pod success
Aug  8 12:11:35.519: INFO: Pod "downwardapi-volume-4b69d707-d970-11ea-aaa1-0242ac11000c" satisfied condition "success or failure"
Aug  8 12:11:35.522: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-4b69d707-d970-11ea-aaa1-0242ac11000c container client-container: 
STEP: delete the pod
Aug  8 12:11:35.574: INFO: Waiting for pod downwardapi-volume-4b69d707-d970-11ea-aaa1-0242ac11000c to disappear
Aug  8 12:11:35.587: INFO: Pod downwardapi-volume-4b69d707-d970-11ea-aaa1-0242ac11000c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:11:35.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xm6tv" for this suite.
Aug  8 12:11:41.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:11:41.663: INFO: namespace: e2e-tests-projected-xm6tv, resource: bindings, ignored listing per whitelist
Aug  8 12:11:41.709: INFO: namespace e2e-tests-projected-xm6tv deletion completed in 6.118828687s

• [SLOW TEST:10.501 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:11:41.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug  8 12:11:41.823: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:11:42.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-k4n5r" for this suite.
Aug  8 12:11:48.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:11:49.000: INFO: namespace: e2e-tests-custom-resource-definition-k4n5r, resource: bindings, ignored listing per whitelist
Aug  8 12:11:49.018: INFO: namespace e2e-tests-custom-resource-definition-k4n5r deletion completed in 6.084718481s

• [SLOW TEST:7.308 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:11:49.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:12:49.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-7xshb" for this suite.
Aug  8 12:13:11.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:13:11.278: INFO: namespace: e2e-tests-container-probe-7xshb, resource: bindings, ignored listing per whitelist
Aug  8 12:13:11.310: INFO: namespace e2e-tests-container-probe-7xshb deletion completed in 22.111235948s

• [SLOW TEST:82.292 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:13:11.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Aug  8 12:13:11.392: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:13:11.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9lkqq" for this suite.
Aug  8 12:13:17.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:13:17.568: INFO: namespace: e2e-tests-kubectl-9lkqq, resource: bindings, ignored listing per whitelist
Aug  8 12:13:17.617: INFO: namespace e2e-tests-kubectl-9lkqq deletion completed in 6.134042515s

• [SLOW TEST:6.307 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:13:17.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug  8 12:13:17.712: INFO: Waiting up to 5m0s for pod "pod-8ad4a4b2-d970-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-emptydir-scwxm" to be "success or failure"
Aug  8 12:13:17.716: INFO: Pod "pod-8ad4a4b2-d970-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.753505ms
Aug  8 12:13:19.815: INFO: Pod "pod-8ad4a4b2-d970-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103165317s
Aug  8 12:13:21.819: INFO: Pod "pod-8ad4a4b2-d970-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.106839188s
STEP: Saw pod success
Aug  8 12:13:21.819: INFO: Pod "pod-8ad4a4b2-d970-11ea-aaa1-0242ac11000c" satisfied condition "success or failure"
Aug  8 12:13:21.821: INFO: Trying to get logs from node hunter-worker2 pod pod-8ad4a4b2-d970-11ea-aaa1-0242ac11000c container test-container: 
STEP: delete the pod
Aug  8 12:13:21.901: INFO: Waiting for pod pod-8ad4a4b2-d970-11ea-aaa1-0242ac11000c to disappear
Aug  8 12:13:21.925: INFO: Pod pod-8ad4a4b2-d970-11ea-aaa1-0242ac11000c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:13:21.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-scwxm" for this suite.
Aug  8 12:13:27.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:13:28.007: INFO: namespace: e2e-tests-emptydir-scwxm, resource: bindings, ignored listing per whitelist
Aug  8 12:13:28.031: INFO: namespace e2e-tests-emptydir-scwxm deletion completed in 6.101903575s

• [SLOW TEST:10.414 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:13:28.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Aug  8 12:13:28.646: INFO: Waiting up to 5m0s for pod "pod-service-account-915a4b70-d970-11ea-aaa1-0242ac11000c-9gnfx" in namespace "e2e-tests-svcaccounts-d94sx" to be "success or failure"
Aug  8 12:13:28.655: INFO: Pod "pod-service-account-915a4b70-d970-11ea-aaa1-0242ac11000c-9gnfx": Phase="Pending", Reason="", readiness=false. Elapsed: 9.095049ms
Aug  8 12:13:30.659: INFO: Pod "pod-service-account-915a4b70-d970-11ea-aaa1-0242ac11000c-9gnfx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013339527s
Aug  8 12:13:32.663: INFO: Pod "pod-service-account-915a4b70-d970-11ea-aaa1-0242ac11000c-9gnfx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017144253s
Aug  8 12:13:34.668: INFO: Pod "pod-service-account-915a4b70-d970-11ea-aaa1-0242ac11000c-9gnfx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021539959s
STEP: Saw pod success
Aug  8 12:13:34.668: INFO: Pod "pod-service-account-915a4b70-d970-11ea-aaa1-0242ac11000c-9gnfx" satisfied condition "success or failure"
Aug  8 12:13:34.671: INFO: Trying to get logs from node hunter-worker pod pod-service-account-915a4b70-d970-11ea-aaa1-0242ac11000c-9gnfx container token-test: 
STEP: delete the pod
Aug  8 12:13:34.883: INFO: Waiting for pod pod-service-account-915a4b70-d970-11ea-aaa1-0242ac11000c-9gnfx to disappear
Aug  8 12:13:34.894: INFO: Pod pod-service-account-915a4b70-d970-11ea-aaa1-0242ac11000c-9gnfx no longer exists
STEP: Creating a pod to test consume service account root CA
Aug  8 12:13:34.898: INFO: Waiting up to 5m0s for pod "pod-service-account-915a4b70-d970-11ea-aaa1-0242ac11000c-w7pzm" in namespace "e2e-tests-svcaccounts-d94sx" to be "success or failure"
Aug  8 12:13:34.901: INFO: Pod "pod-service-account-915a4b70-d970-11ea-aaa1-0242ac11000c-w7pzm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.642936ms
Aug  8 12:13:36.977: INFO: Pod "pod-service-account-915a4b70-d970-11ea-aaa1-0242ac11000c-w7pzm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079236717s
Aug  8 12:13:38.981: INFO: Pod "pod-service-account-915a4b70-d970-11ea-aaa1-0242ac11000c-w7pzm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08320422s
Aug  8 12:13:40.985: INFO: Pod "pod-service-account-915a4b70-d970-11ea-aaa1-0242ac11000c-w7pzm": Phase="Running", Reason="", readiness=false. Elapsed: 6.087434194s
Aug  8 12:13:42.990: INFO: Pod "pod-service-account-915a4b70-d970-11ea-aaa1-0242ac11000c-w7pzm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.091613418s
STEP: Saw pod success
Aug  8 12:13:42.990: INFO: Pod "pod-service-account-915a4b70-d970-11ea-aaa1-0242ac11000c-w7pzm" satisfied condition "success or failure"
Aug  8 12:13:42.993: INFO: Trying to get logs from node hunter-worker pod pod-service-account-915a4b70-d970-11ea-aaa1-0242ac11000c-w7pzm container root-ca-test: 
STEP: delete the pod
Aug  8 12:13:43.042: INFO: Waiting for pod pod-service-account-915a4b70-d970-11ea-aaa1-0242ac11000c-w7pzm to disappear
Aug  8 12:13:43.046: INFO: Pod pod-service-account-915a4b70-d970-11ea-aaa1-0242ac11000c-w7pzm no longer exists
STEP: Creating a pod to test consume service account namespace
Aug  8 12:13:43.091: INFO: Waiting up to 5m0s for pod "pod-service-account-915a4b70-d970-11ea-aaa1-0242ac11000c-nwbtk" in namespace "e2e-tests-svcaccounts-d94sx" to be "success or failure"
Aug  8 12:13:43.124: INFO: Pod "pod-service-account-915a4b70-d970-11ea-aaa1-0242ac11000c-nwbtk": Phase="Pending", Reason="", readiness=false. Elapsed: 33.122362ms
Aug  8 12:13:45.128: INFO: Pod "pod-service-account-915a4b70-d970-11ea-aaa1-0242ac11000c-nwbtk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036989563s
Aug  8 12:13:47.228: INFO: Pod "pod-service-account-915a4b70-d970-11ea-aaa1-0242ac11000c-nwbtk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137070253s
Aug  8 12:13:49.235: INFO: Pod "pod-service-account-915a4b70-d970-11ea-aaa1-0242ac11000c-nwbtk": Phase="Running", Reason="", readiness=false. Elapsed: 6.144339737s
Aug  8 12:13:51.239: INFO: Pod "pod-service-account-915a4b70-d970-11ea-aaa1-0242ac11000c-nwbtk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.14818096s
STEP: Saw pod success
Aug  8 12:13:51.239: INFO: Pod "pod-service-account-915a4b70-d970-11ea-aaa1-0242ac11000c-nwbtk" satisfied condition "success or failure"
Aug  8 12:13:51.242: INFO: Trying to get logs from node hunter-worker pod pod-service-account-915a4b70-d970-11ea-aaa1-0242ac11000c-nwbtk container namespace-test: 
STEP: delete the pod
Aug  8 12:13:51.267: INFO: Waiting for pod pod-service-account-915a4b70-d970-11ea-aaa1-0242ac11000c-nwbtk to disappear
Aug  8 12:13:51.284: INFO: Pod pod-service-account-915a4b70-d970-11ea-aaa1-0242ac11000c-nwbtk no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:13:51.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-d94sx" for this suite.
Aug  8 12:13:57.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:13:57.389: INFO: namespace: e2e-tests-svcaccounts-d94sx, resource: bindings, ignored listing per whitelist
Aug  8 12:13:57.421: INFO: namespace e2e-tests-svcaccounts-d94sx deletion completed in 6.134256487s

• [SLOW TEST:29.390 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:13:57.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-xnk4
STEP: Creating a pod to test atomic-volume-subpath
Aug  8 12:13:57.604: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-xnk4" in namespace "e2e-tests-subpath-c9qzt" to be "success or failure"
Aug  8 12:13:57.608: INFO: Pod "pod-subpath-test-downwardapi-xnk4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.477523ms
Aug  8 12:13:59.630: INFO: Pod "pod-subpath-test-downwardapi-xnk4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025715127s
Aug  8 12:14:01.786: INFO: Pod "pod-subpath-test-downwardapi-xnk4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.181933253s
Aug  8 12:14:03.791: INFO: Pod "pod-subpath-test-downwardapi-xnk4": Phase="Running", Reason="", readiness=true. Elapsed: 6.186232697s
Aug  8 12:14:05.795: INFO: Pod "pod-subpath-test-downwardapi-xnk4": Phase="Running", Reason="", readiness=false. Elapsed: 8.190179873s
Aug  8 12:14:07.799: INFO: Pod "pod-subpath-test-downwardapi-xnk4": Phase="Running", Reason="", readiness=false. Elapsed: 10.194597135s
Aug  8 12:14:09.803: INFO: Pod "pod-subpath-test-downwardapi-xnk4": Phase="Running", Reason="", readiness=false. Elapsed: 12.199131348s
Aug  8 12:14:11.807: INFO: Pod "pod-subpath-test-downwardapi-xnk4": Phase="Running", Reason="", readiness=false. Elapsed: 14.203048407s
Aug  8 12:14:13.812: INFO: Pod "pod-subpath-test-downwardapi-xnk4": Phase="Running", Reason="", readiness=false. Elapsed: 16.207624624s
Aug  8 12:14:15.816: INFO: Pod "pod-subpath-test-downwardapi-xnk4": Phase="Running", Reason="", readiness=false. Elapsed: 18.211859569s
Aug  8 12:14:17.821: INFO: Pod "pod-subpath-test-downwardapi-xnk4": Phase="Running", Reason="", readiness=false. Elapsed: 20.216852824s
Aug  8 12:14:19.825: INFO: Pod "pod-subpath-test-downwardapi-xnk4": Phase="Running", Reason="", readiness=false. Elapsed: 22.221012552s
Aug  8 12:14:21.830: INFO: Pod "pod-subpath-test-downwardapi-xnk4": Phase="Running", Reason="", readiness=false. Elapsed: 24.225465892s
Aug  8 12:14:23.834: INFO: Pod "pod-subpath-test-downwardapi-xnk4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.229212839s
STEP: Saw pod success
Aug  8 12:14:23.834: INFO: Pod "pod-subpath-test-downwardapi-xnk4" satisfied condition "success or failure"
Aug  8 12:14:23.836: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-xnk4 container test-container-subpath-downwardapi-xnk4: 
STEP: delete the pod
Aug  8 12:14:23.895: INFO: Waiting for pod pod-subpath-test-downwardapi-xnk4 to disappear
Aug  8 12:14:23.902: INFO: Pod pod-subpath-test-downwardapi-xnk4 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-xnk4
Aug  8 12:14:23.902: INFO: Deleting pod "pod-subpath-test-downwardapi-xnk4" in namespace "e2e-tests-subpath-c9qzt"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:14:23.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-c9qzt" for this suite.
Aug  8 12:14:29.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:14:29.986: INFO: namespace: e2e-tests-subpath-c9qzt, resource: bindings, ignored listing per whitelist
Aug  8 12:14:29.993: INFO: namespace e2e-tests-subpath-c9qzt deletion completed in 6.085902778s

• [SLOW TEST:32.571 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
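
The subpath test above mounts a downward API volume through a subPath and reads the file it exposes. A hand-rolled approximation is sketched below; it is not taken from the test sources, and the pod, file, and image names are illustrative only.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-downward-demo        # illustrative name, not from the run above
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /target"]
    volumeMounts:
    - name: podinfo
      mountPath: /target
      subPath: podname               # bind-mounts the single downward API file
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs subpath-downward-demo   # should print the pod's own name
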
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:14:29.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:14:36.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-fsld4" for this suite.
Aug  8 12:14:42.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:14:42.485: INFO: namespace: e2e-tests-namespaces-fsld4, resource: bindings, ignored listing per whitelist
Aug  8 12:14:42.509: INFO: namespace e2e-tests-namespaces-fsld4 deletion completed in 6.091021796s
STEP: Destroying namespace "e2e-tests-nsdeletetest-nznw5" for this suite.
Aug  8 12:14:42.511: INFO: Namespace e2e-tests-nsdeletetest-nznw5 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-d2jgf" for this suite.
Aug  8 12:14:48.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:14:48.551: INFO: namespace: e2e-tests-nsdeletetest-d2jgf, resource: bindings, ignored listing per whitelist
Aug  8 12:14:48.603: INFO: namespace e2e-tests-nsdeletetest-d2jgf deletion completed in 6.091727449s

• [SLOW TEST:18.610 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
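
The namespace test above verifies that deleting a namespace also removes its services, and that a freshly recreated namespace of the same name starts empty. The same check can be made by hand with kubectl; the namespace and service names below are placeholders, not taken from the run.

kubectl create namespace nsdelete-demo
kubectl create service clusterip demo-svc --tcp=80:80 -n nsdelete-demo
kubectl delete namespace nsdelete-demo
# wait until the namespace has fully disappeared, then recreate it
kubectl create namespace nsdelete-demo
kubectl get services -n nsdelete-demo   # expected: no resources found
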
------------------------------
S
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:14:48.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug  8 12:14:48.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:14:52.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-nlz74" for this suite.
Aug  8 12:15:42.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:15:42.818: INFO: namespace: e2e-tests-pods-nlz74, resource: bindings, ignored listing per whitelist
Aug  8 12:15:42.867: INFO: namespace e2e-tests-pods-nlz74 deletion completed in 50.082483698s

• [SLOW TEST:54.264 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
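
The pod test above streams container logs from the API server over a websocket. The same "log" subresource can also be fetched as a plain HTTP GET through kubectl proxy; the sketch below substitutes that simpler transport and uses made-up pod names.

kubectl run logs-demo --image=busybox --restart=Never --command -- sh -c 'echo hello-logs && sleep 60'
kubectl proxy --port=8001 &
# same subresource the websocket client reads from, fetched over plain HTTP
curl "http://127.0.0.1:8001/api/v1/namespaces/default/pods/logs-demo/log"
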
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:15:42.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug  8 12:15:42.984: INFO: Waiting up to 5m0s for pod "pod-e16b360d-d970-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-emptydir-bmgrg" to be "success or failure"
Aug  8 12:15:42.987: INFO: Pod "pod-e16b360d-d970-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.310692ms
Aug  8 12:15:44.992: INFO: Pod "pod-e16b360d-d970-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007531748s
Aug  8 12:15:46.996: INFO: Pod "pod-e16b360d-d970-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011686734s
STEP: Saw pod success
Aug  8 12:15:46.996: INFO: Pod "pod-e16b360d-d970-11ea-aaa1-0242ac11000c" satisfied condition "success or failure"
Aug  8 12:15:46.999: INFO: Trying to get logs from node hunter-worker2 pod pod-e16b360d-d970-11ea-aaa1-0242ac11000c container test-container: 
STEP: delete the pod
Aug  8 12:15:47.019: INFO: Waiting for pod pod-e16b360d-d970-11ea-aaa1-0242ac11000c to disappear
Aug  8 12:15:47.038: INFO: Pod pod-e16b360d-d970-11ea-aaa1-0242ac11000c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:15:47.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-bmgrg" for this suite.
Aug  8 12:15:53.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:15:53.067: INFO: namespace: e2e-tests-emptydir-bmgrg, resource: bindings, ignored listing per whitelist
Aug  8 12:15:53.124: INFO: namespace e2e-tests-emptydir-bmgrg deletion completed in 6.08212015s

• [SLOW TEST:10.257 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
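
The emptyDir variants in this suite all follow the same pattern: a root or non-root container writes a file with the given mode onto an emptyDir backed either by tmpfs or by the node's default medium, then verifies the mode and filesystem type. A rough stand-alone equivalent, with an arbitrary non-root UID and illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                    # non-root, as in the (non-root,...) variants
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /mnt/f && chmod 0666 /mnt/f && ls -l /mnt/f && mount | grep ' /mnt '"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                   # tmpfs; omit for the "default" medium variants
EOF
kubectl logs emptydir-tmpfs-demo       # expect mode -rw-rw-rw- and fs type tmpfs
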
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:15:53.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug  8 12:15:53.219: INFO: Waiting up to 5m0s for pod "pod-e7855dd3-d970-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-emptydir-fd4dl" to be "success or failure"
Aug  8 12:15:53.255: INFO: Pod "pod-e7855dd3-d970-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 35.637382ms
Aug  8 12:15:55.258: INFO: Pod "pod-e7855dd3-d970-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039138637s
Aug  8 12:15:57.261: INFO: Pod "pod-e7855dd3-d970-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042071259s
STEP: Saw pod success
Aug  8 12:15:57.261: INFO: Pod "pod-e7855dd3-d970-11ea-aaa1-0242ac11000c" satisfied condition "success or failure"
Aug  8 12:15:57.264: INFO: Trying to get logs from node hunter-worker pod pod-e7855dd3-d970-11ea-aaa1-0242ac11000c container test-container: 
STEP: delete the pod
Aug  8 12:15:57.328: INFO: Waiting for pod pod-e7855dd3-d970-11ea-aaa1-0242ac11000c to disappear
Aug  8 12:15:57.344: INFO: Pod pod-e7855dd3-d970-11ea-aaa1-0242ac11000c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:15:57.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-fd4dl" for this suite.
Aug  8 12:16:03.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:16:03.414: INFO: namespace: e2e-tests-emptydir-fd4dl, resource: bindings, ignored listing per whitelist
Aug  8 12:16:03.461: INFO: namespace e2e-tests-emptydir-fd4dl deletion completed in 6.112064462s

• [SLOW TEST:10.337 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:16:03.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Aug  8 12:16:03.597: INFO: Waiting up to 5m0s for pod "var-expansion-edb16e1a-d970-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-var-expansion-fcmvf" to be "success or failure"
Aug  8 12:16:03.626: INFO: Pod "var-expansion-edb16e1a-d970-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 28.656275ms
Aug  8 12:16:05.630: INFO: Pod "var-expansion-edb16e1a-d970-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032953153s
Aug  8 12:16:07.635: INFO: Pod "var-expansion-edb16e1a-d970-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037811531s
STEP: Saw pod success
Aug  8 12:16:07.635: INFO: Pod "var-expansion-edb16e1a-d970-11ea-aaa1-0242ac11000c" satisfied condition "success or failure"
Aug  8 12:16:07.637: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-edb16e1a-d970-11ea-aaa1-0242ac11000c container dapi-container: 
STEP: delete the pod
Aug  8 12:16:07.684: INFO: Waiting for pod var-expansion-edb16e1a-d970-11ea-aaa1-0242ac11000c to disappear
Aug  8 12:16:07.688: INFO: Pod var-expansion-edb16e1a-d970-11ea-aaa1-0242ac11000c no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:16:07.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-fcmvf" for this suite.
Aug  8 12:16:13.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:16:13.781: INFO: namespace: e2e-tests-var-expansion-fcmvf, resource: bindings, ignored listing per whitelist
Aug  8 12:16:13.793: INFO: namespace e2e-tests-var-expansion-fcmvf deletion completed in 6.102107897s

• [SLOW TEST:10.332 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
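
The variable-expansion test above relies on Kubernetes expanding $(VAR) references inside env values, so one variable can be composed from previously declared ones. A minimal sketch, with all names and values invented:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo FULL=$FULL"]
    env:
    - name: GREETING
      value: "hello"
    - name: TARGET
      value: "world"
    - name: FULL
      value: "$(GREETING)-$(TARGET)"   # composed from the two variables above
EOF
kubectl logs var-expansion-demo        # expected: FULL=hello-world
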
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:16:13.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug  8 12:16:13.888: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f3d682b1-d970-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-downward-api-nj8vv" to be "success or failure"
Aug  8 12:16:13.899: INFO: Pod "downwardapi-volume-f3d682b1-d970-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.250459ms
Aug  8 12:16:15.903: INFO: Pod "downwardapi-volume-f3d682b1-d970-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014490346s
Aug  8 12:16:17.908: INFO: Pod "downwardapi-volume-f3d682b1-d970-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01910255s
STEP: Saw pod success
Aug  8 12:16:17.908: INFO: Pod "downwardapi-volume-f3d682b1-d970-11ea-aaa1-0242ac11000c" satisfied condition "success or failure"
Aug  8 12:16:17.910: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-f3d682b1-d970-11ea-aaa1-0242ac11000c container client-container: 
STEP: delete the pod
Aug  8 12:16:17.984: INFO: Waiting for pod downwardapi-volume-f3d682b1-d970-11ea-aaa1-0242ac11000c to disappear
Aug  8 12:16:17.991: INFO: Pod downwardapi-volume-f3d682b1-d970-11ea-aaa1-0242ac11000c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:16:17.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-nj8vv" for this suite.
Aug  8 12:16:24.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:16:24.038: INFO: namespace: e2e-tests-downward-api-nj8vv, resource: bindings, ignored listing per whitelist
Aug  8 12:16:24.081: INFO: namespace e2e-tests-downward-api-nj8vv deletion completed in 6.087057608s

• [SLOW TEST:10.287 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
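
The downward API volume test above publishes the container's memory request as a file via resourceFieldRef and reads it back. The wiring looks roughly like this; the names and the 32Mi request are illustrative, and with the default divisor the value is reported in bytes.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mem-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
EOF
kubectl logs downwardapi-mem-demo      # expected: 33554432 (32Mi in bytes)
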
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:16:24.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug  8 12:16:24.311: INFO: Waiting up to 5m0s for pod "pod-fa05839b-d970-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-emptydir-6zdjf" to be "success or failure"
Aug  8 12:16:24.320: INFO: Pod "pod-fa05839b-d970-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.981491ms
Aug  8 12:16:26.324: INFO: Pod "pod-fa05839b-d970-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012379564s
Aug  8 12:16:28.328: INFO: Pod "pod-fa05839b-d970-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016395222s
STEP: Saw pod success
Aug  8 12:16:28.328: INFO: Pod "pod-fa05839b-d970-11ea-aaa1-0242ac11000c" satisfied condition "success or failure"
Aug  8 12:16:28.331: INFO: Trying to get logs from node hunter-worker2 pod pod-fa05839b-d970-11ea-aaa1-0242ac11000c container test-container: 
STEP: delete the pod
Aug  8 12:16:28.374: INFO: Waiting for pod pod-fa05839b-d970-11ea-aaa1-0242ac11000c to disappear
Aug  8 12:16:28.380: INFO: Pod pod-fa05839b-d970-11ea-aaa1-0242ac11000c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:16:28.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6zdjf" for this suite.
Aug  8 12:16:34.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:16:34.407: INFO: namespace: e2e-tests-emptydir-6zdjf, resource: bindings, ignored listing per whitelist
Aug  8 12:16:34.476: INFO: namespace e2e-tests-emptydir-6zdjf deletion completed in 6.092189232s

• [SLOW TEST:10.395 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:16:34.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-876vt A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-876vt;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-876vt A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-876vt;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-876vt.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-876vt.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-876vt.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-876vt.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-876vt.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-876vt.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-876vt.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-876vt.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-876vt.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-876vt.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-876vt.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-876vt.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-876vt.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 140.246.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.246.140_udp@PTR;check="$$(dig +tcp +noall +answer +search 140.246.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.246.140_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-876vt A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-876vt;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-876vt A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-876vt;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-876vt.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-876vt.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-876vt.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-876vt.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-876vt.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-876vt.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-876vt.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-876vt.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-876vt.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-876vt.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-876vt.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-876vt.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-876vt.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 140.246.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.246.140_udp@PTR;check="$$(dig +tcp +noall +answer +search 140.246.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.246.140_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug  8 12:16:42.666: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:42.710: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:42.713: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:42.717: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-876vt from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:42.720: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-876vt from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:42.732: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-876vt.svc from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:42.735: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-876vt.svc from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:42.738: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-876vt.svc from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:42.741: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-876vt.svc from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:42.756: INFO: Lookups using e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c failed for: [wheezy_udp@dns-test-service jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-876vt jessie_tcp@dns-test-service.e2e-tests-dns-876vt jessie_udp@dns-test-service.e2e-tests-dns-876vt.svc jessie_tcp@dns-test-service.e2e-tests-dns-876vt.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-876vt.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-876vt.svc]

Aug  8 12:16:47.762: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:47.803: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:47.806: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:47.809: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-876vt from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:47.812: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-876vt from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:47.815: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-876vt.svc from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:47.818: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-876vt.svc from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:47.821: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-876vt.svc from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:47.824: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-876vt.svc from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:47.841: INFO: Lookups using e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c failed for: [wheezy_udp@dns-test-service jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-876vt jessie_tcp@dns-test-service.e2e-tests-dns-876vt jessie_udp@dns-test-service.e2e-tests-dns-876vt.svc jessie_tcp@dns-test-service.e2e-tests-dns-876vt.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-876vt.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-876vt.svc]

Aug  8 12:16:52.769: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:52.809: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:52.813: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:52.816: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-876vt from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:52.819: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-876vt from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:52.822: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-876vt.svc from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:52.825: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-876vt.svc from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:52.828: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-876vt.svc from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:52.831: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-876vt.svc from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:52.850: INFO: Lookups using e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c failed for: [wheezy_udp@dns-test-service jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-876vt jessie_tcp@dns-test-service.e2e-tests-dns-876vt jessie_udp@dns-test-service.e2e-tests-dns-876vt.svc jessie_tcp@dns-test-service.e2e-tests-dns-876vt.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-876vt.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-876vt.svc]

Aug  8 12:16:57.760: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:57.793: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:57.795: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:57.798: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-876vt from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:57.800: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-876vt from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:57.804: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-876vt.svc from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:57.806: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-876vt.svc from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:57.809: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-876vt.svc from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:57.811: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-876vt.svc from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:16:57.822: INFO: Lookups using e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c failed for: [wheezy_udp@dns-test-service jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-876vt jessie_tcp@dns-test-service.e2e-tests-dns-876vt jessie_udp@dns-test-service.e2e-tests-dns-876vt.svc jessie_tcp@dns-test-service.e2e-tests-dns-876vt.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-876vt.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-876vt.svc]

Aug  8 12:17:02.761: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:17:02.803: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:17:02.806: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:17:02.809: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-876vt from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:17:02.813: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-876vt from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:17:02.818: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-876vt.svc from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:17:02.821: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-876vt.svc from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:17:02.823: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-876vt.svc from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:17:02.825: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-876vt.svc from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:17:02.838: INFO: Lookups using e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c failed for: [wheezy_udp@dns-test-service jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-876vt jessie_tcp@dns-test-service.e2e-tests-dns-876vt jessie_udp@dns-test-service.e2e-tests-dns-876vt.svc jessie_tcp@dns-test-service.e2e-tests-dns-876vt.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-876vt.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-876vt.svc]

Aug  8 12:17:07.760: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:17:07.799: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:17:07.801: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:17:07.803: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-876vt from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:17:07.806: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-876vt from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:17:07.808: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-876vt.svc from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:17:07.810: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-876vt.svc from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:17:07.812: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-876vt.svc from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:17:07.814: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-876vt.svc from pod e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c: the server could not find the requested resource (get pods dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c)
Aug  8 12:17:07.829: INFO: Lookups using e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c failed for: [wheezy_udp@dns-test-service jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-876vt jessie_tcp@dns-test-service.e2e-tests-dns-876vt jessie_udp@dns-test-service.e2e-tests-dns-876vt.svc jessie_tcp@dns-test-service.e2e-tests-dns-876vt.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-876vt.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-876vt.svc]

Aug  8 12:17:12.859: INFO: DNS probes using e2e-tests-dns-876vt/dns-test-0035c7e6-d971-11ea-aaa1-0242ac11000c succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:17:13.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-876vt" for this suite.
Aug  8 12:17:19.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:17:19.727: INFO: namespace: e2e-tests-dns-876vt, resource: bindings, ignored listing per whitelist
Aug  8 12:17:19.781: INFO: namespace e2e-tests-dns-876vt deletion completed in 6.092098502s

• [SLOW TEST:45.305 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
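
The long wheezy/jessie probe loops above are repeated dig lookups of the test service's A, SRV, and PTR records, written to result files until they succeed. Equivalent one-off lookups can be run from any pod that has DNS tools; <namespace> below is a placeholder for the test namespace.

# from a shell inside a pod with dig available
dig +short dns-test-service.<namespace>.svc.cluster.local A
dig +short _http._tcp.dns-test-service.<namespace>.svc.cluster.local SRV
# busybox-only fallback
nslookup dns-test-service.<namespace>.svc.cluster.local
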
------------------------------
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:17:19.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-86z8d
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug  8 12:17:19.955: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug  8 12:17:42.085: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.175 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-86z8d PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  8 12:17:42.085: INFO: >>> kubeConfig: /root/.kube/config
I0808 12:17:42.118881       6 log.go:172] (0xc00093b8c0) (0xc0004f9cc0) Create stream
I0808 12:17:42.118918       6 log.go:172] (0xc00093b8c0) (0xc0004f9cc0) Stream added, broadcasting: 1
I0808 12:17:42.121149       6 log.go:172] (0xc00093b8c0) Reply frame received for 1
I0808 12:17:42.121202       6 log.go:172] (0xc00093b8c0) (0xc000323360) Create stream
I0808 12:17:42.121215       6 log.go:172] (0xc00093b8c0) (0xc000323360) Stream added, broadcasting: 3
I0808 12:17:42.122190       6 log.go:172] (0xc00093b8c0) Reply frame received for 3
I0808 12:17:42.122227       6 log.go:172] (0xc00093b8c0) (0xc00045a500) Create stream
I0808 12:17:42.122247       6 log.go:172] (0xc00093b8c0) (0xc00045a500) Stream added, broadcasting: 5
I0808 12:17:42.123362       6 log.go:172] (0xc00093b8c0) Reply frame received for 5
I0808 12:17:43.212405       6 log.go:172] (0xc00093b8c0) Data frame received for 3
I0808 12:17:43.212445       6 log.go:172] (0xc000323360) (3) Data frame handling
I0808 12:17:43.212478       6 log.go:172] (0xc000323360) (3) Data frame sent
I0808 12:17:43.212501       6 log.go:172] (0xc00093b8c0) Data frame received for 3
I0808 12:17:43.212520       6 log.go:172] (0xc000323360) (3) Data frame handling
I0808 12:17:43.212703       6 log.go:172] (0xc00093b8c0) Data frame received for 5
I0808 12:17:43.212930       6 log.go:172] (0xc00045a500) (5) Data frame handling
I0808 12:17:43.215266       6 log.go:172] (0xc00093b8c0) Data frame received for 1
I0808 12:17:43.215299       6 log.go:172] (0xc0004f9cc0) (1) Data frame handling
I0808 12:17:43.215344       6 log.go:172] (0xc0004f9cc0) (1) Data frame sent
I0808 12:17:43.215391       6 log.go:172] (0xc00093b8c0) (0xc0004f9cc0) Stream removed, broadcasting: 1
I0808 12:17:43.215427       6 log.go:172] (0xc00093b8c0) Go away received
I0808 12:17:43.215545       6 log.go:172] (0xc00093b8c0) (0xc0004f9cc0) Stream removed, broadcasting: 1
I0808 12:17:43.215578       6 log.go:172] (0xc00093b8c0) (0xc000323360) Stream removed, broadcasting: 3
I0808 12:17:43.215599       6 log.go:172] (0xc00093b8c0) (0xc00045a500) Stream removed, broadcasting: 5
Aug  8 12:17:43.215: INFO: Found all expected endpoints: [netserver-0]
Aug  8 12:17:43.219: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.91 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-86z8d PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  8 12:17:43.219: INFO: >>> kubeConfig: /root/.kube/config
I0808 12:17:43.252829       6 log.go:172] (0xc000015970) (0xc00188b360) Create stream
I0808 12:17:43.252875       6 log.go:172] (0xc000015970) (0xc00188b360) Stream added, broadcasting: 1
I0808 12:17:43.254893       6 log.go:172] (0xc000015970) Reply frame received for 1
I0808 12:17:43.254937       6 log.go:172] (0xc000015970) (0xc000a42460) Create stream
I0808 12:17:43.254964       6 log.go:172] (0xc000015970) (0xc000a42460) Stream added, broadcasting: 3
I0808 12:17:43.255982       6 log.go:172] (0xc000015970) Reply frame received for 3
I0808 12:17:43.256028       6 log.go:172] (0xc000015970) (0xc001e292c0) Create stream
I0808 12:17:43.256042       6 log.go:172] (0xc000015970) (0xc001e292c0) Stream added, broadcasting: 5
I0808 12:17:43.257268       6 log.go:172] (0xc000015970) Reply frame received for 5
I0808 12:17:44.347296       6 log.go:172] (0xc000015970) Data frame received for 3
I0808 12:17:44.347343       6 log.go:172] (0xc000a42460) (3) Data frame handling
I0808 12:17:44.347393       6 log.go:172] (0xc000a42460) (3) Data frame sent
I0808 12:17:44.347789       6 log.go:172] (0xc000015970) Data frame received for 5
I0808 12:17:44.347822       6 log.go:172] (0xc001e292c0) (5) Data frame handling
I0808 12:17:44.347860       6 log.go:172] (0xc000015970) Data frame received for 3
I0808 12:17:44.347909       6 log.go:172] (0xc000a42460) (3) Data frame handling
I0808 12:17:44.350110       6 log.go:172] (0xc000015970) Data frame received for 1
I0808 12:17:44.350155       6 log.go:172] (0xc00188b360) (1) Data frame handling
I0808 12:17:44.350205       6 log.go:172] (0xc00188b360) (1) Data frame sent
I0808 12:17:44.350227       6 log.go:172] (0xc000015970) (0xc00188b360) Stream removed, broadcasting: 1
I0808 12:17:44.350254       6 log.go:172] (0xc000015970) Go away received
I0808 12:17:44.350366       6 log.go:172] (0xc000015970) (0xc00188b360) Stream removed, broadcasting: 1
I0808 12:17:44.350388       6 log.go:172] (0xc000015970) (0xc000a42460) Stream removed, broadcasting: 3
I0808 12:17:44.350402       6 log.go:172] (0xc000015970) (0xc001e292c0) Stream removed, broadcasting: 5
Aug  8 12:17:44.350: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:17:44.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-86z8d" for this suite.
Aug  8 12:18:08.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:18:08.406: INFO: namespace: e2e-tests-pod-network-test-86z8d, resource: bindings, ignored listing per whitelist
Aug  8 12:18:08.459: INFO: namespace e2e-tests-pod-network-test-86z8d deletion completed in 24.100479957s

• [SLOW TEST:48.678 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
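
The node-pod UDP check above sends the literal string "hostName" with nc to port 8081 of each netserver pod and expects the pod's hostname back on stdout. Run by hand from the hostNetwork "hostexec" pod it looks like the line below, with 10.244.x.y standing in for a netserver pod IP.

# inside the hostexec pod; expected output is the target netserver pod's name
echo 'hostName' | nc -w 1 -u 10.244.x.y 8081 | grep -v '^\s*$'
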
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:18:08.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-q4wc
STEP: Creating a pod to test atomic-volume-subpath
Aug  8 12:18:08.571: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-q4wc" in namespace "e2e-tests-subpath-hl56h" to be "success or failure"
Aug  8 12:18:08.628: INFO: Pod "pod-subpath-test-projected-q4wc": Phase="Pending", Reason="", readiness=false. Elapsed: 56.787801ms
Aug  8 12:18:10.632: INFO: Pod "pod-subpath-test-projected-q4wc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060723562s
Aug  8 12:18:12.636: INFO: Pod "pod-subpath-test-projected-q4wc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064462554s
Aug  8 12:18:14.640: INFO: Pod "pod-subpath-test-projected-q4wc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068683371s
Aug  8 12:18:16.644: INFO: Pod "pod-subpath-test-projected-q4wc": Phase="Running", Reason="", readiness=false. Elapsed: 8.072990263s
Aug  8 12:18:18.649: INFO: Pod "pod-subpath-test-projected-q4wc": Phase="Running", Reason="", readiness=false. Elapsed: 10.077651134s
Aug  8 12:18:20.653: INFO: Pod "pod-subpath-test-projected-q4wc": Phase="Running", Reason="", readiness=false. Elapsed: 12.082046432s
Aug  8 12:18:22.658: INFO: Pod "pod-subpath-test-projected-q4wc": Phase="Running", Reason="", readiness=false. Elapsed: 14.086309499s
Aug  8 12:18:24.662: INFO: Pod "pod-subpath-test-projected-q4wc": Phase="Running", Reason="", readiness=false. Elapsed: 16.090462718s
Aug  8 12:18:26.666: INFO: Pod "pod-subpath-test-projected-q4wc": Phase="Running", Reason="", readiness=false. Elapsed: 18.094457224s
Aug  8 12:18:28.670: INFO: Pod "pod-subpath-test-projected-q4wc": Phase="Running", Reason="", readiness=false. Elapsed: 20.098872987s
Aug  8 12:18:30.675: INFO: Pod "pod-subpath-test-projected-q4wc": Phase="Running", Reason="", readiness=false. Elapsed: 22.103320684s
Aug  8 12:18:32.679: INFO: Pod "pod-subpath-test-projected-q4wc": Phase="Running", Reason="", readiness=false. Elapsed: 24.10742277s
Aug  8 12:18:34.683: INFO: Pod "pod-subpath-test-projected-q4wc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.111124404s
STEP: Saw pod success
Aug  8 12:18:34.683: INFO: Pod "pod-subpath-test-projected-q4wc" satisfied condition "success or failure"
Aug  8 12:18:34.685: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-projected-q4wc container test-container-subpath-projected-q4wc: 
STEP: delete the pod
Aug  8 12:18:34.721: INFO: Waiting for pod pod-subpath-test-projected-q4wc to disappear
Aug  8 12:18:34.734: INFO: Pod pod-subpath-test-projected-q4wc no longer exists
STEP: Deleting pod pod-subpath-test-projected-q4wc
Aug  8 12:18:34.734: INFO: Deleting pod "pod-subpath-test-projected-q4wc" in namespace "e2e-tests-subpath-hl56h"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:18:34.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-hl56h" for this suite.
Aug  8 12:18:40.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:18:40.776: INFO: namespace: e2e-tests-subpath-hl56h, resource: bindings, ignored listing per whitelist
Aug  8 12:18:40.829: INFO: namespace e2e-tests-subpath-hl56h deletion completed in 6.088933027s

• [SLOW TEST:32.370 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
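For reference, a minimal sketch of the kind of pod this subpath spec exercises: a projected volume whose "sub" directory is mounted into the container via subPath. The pod name, busybox image and file layout below are illustrative assumptions, not the suite's actual mounttest setup:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-projected-demo            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox                        # assumed image
    command: ["cat", "/mnt/sub/podname"]
    volumeMounts:
    - name: proj
      mountPath: /mnt/sub
      subPath: sub                        # mount only the volume's "sub" directory
  volumes:
  - name: proj
    projected:
      sources:
      - downwardAPI:
          items:
          - path: sub/podname             # projection writes sub/podname into the volume
            fieldRef:
              fieldPath: metadata.name
EOF

Reading /mnt/sub/podname from inside the container confirms that only the subPath directory, not the whole projected volume, is visible at the mount point.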
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:18:40.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug  8 12:18:40.944: INFO: Waiting up to 5m0s for pod "pod-4b7d6aef-d971-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-emptydir-2sxqr" to be "success or failure"
Aug  8 12:18:40.947: INFO: Pod "pod-4b7d6aef-d971-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.637599ms
Aug  8 12:18:43.078: INFO: Pod "pod-4b7d6aef-d971-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134659085s
Aug  8 12:18:45.082: INFO: Pod "pod-4b7d6aef-d971-11ea-aaa1-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 4.138677983s
Aug  8 12:18:47.086: INFO: Pod "pod-4b7d6aef-d971-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.142522147s
STEP: Saw pod success
Aug  8 12:18:47.086: INFO: Pod "pod-4b7d6aef-d971-11ea-aaa1-0242ac11000c" satisfied condition "success or failure"
Aug  8 12:18:47.089: INFO: Trying to get logs from node hunter-worker pod pod-4b7d6aef-d971-11ea-aaa1-0242ac11000c container test-container: 
STEP: delete the pod
Aug  8 12:18:47.111: INFO: Waiting for pod pod-4b7d6aef-d971-11ea-aaa1-0242ac11000c to disappear
Aug  8 12:18:47.180: INFO: Pod pod-4b7d6aef-d971-11ea-aaa1-0242ac11000c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:18:47.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2sxqr" for this suite.
Aug  8 12:18:53.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:18:53.222: INFO: namespace: e2e-tests-emptydir-2sxqr, resource: bindings, ignored listing per whitelist
Aug  8 12:18:53.320: INFO: namespace e2e-tests-emptydir-2sxqr deletion completed in 6.13535823s

• [SLOW TEST:12.490 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
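For reference, a hand-rolled equivalent of the (root,0666,default) check: an emptyDir volume on the default medium, a file created with mode 0666, and the permissions echoed back through the container log. Names and the busybox image are assumptions; the suite uses its own mounttest image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox                        # assumed image
    command: ["sh", "-c", "touch /data/f && chmod 0666 /data/f && ls -l /data/f"]
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    emptyDir: {}                          # default medium: backed by node storage
EOF
kubectl logs emptydir-0666-demo           # expect -rw-rw-rw- on /data/f once the pod succeeds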
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:18:53.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug  8 12:18:53.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-ftz7m'
Aug  8 12:18:56.236: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug  8 12:18:56.236: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Aug  8 12:18:58.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-ftz7m'
Aug  8 12:18:58.482: INFO: stderr: ""
Aug  8 12:18:58.482: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:18:58.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ftz7m" for this suite.
Aug  8 12:19:20.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:19:20.780: INFO: namespace: e2e-tests-kubectl-ftz7m, resource: bindings, ignored listing per whitelist
Aug  8 12:19:20.834: INFO: namespace e2e-tests-kubectl-ftz7m deletion completed in 22.343563466s

• [SLOW TEST:27.514 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
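The stderr above comes from the deprecated deployment/apps.v1 generator that kubectl v1.13 still selects for a bare kubectl run. A sketch of the same action and its non-deprecated replacement (namespace flag omitted for brevity):

kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
# on kubectl v1.13 this creates a Deployment through the deprecated generator
# (hence the warning logged above); newer kubectl creates a bare Pod instead,
# so the forward-compatible form is:
kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
kubectl delete deployment e2e-test-nginx-deployment    # clean up, as the spec does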
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:19:20.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Aug  8 12:19:20.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-pnb5z'
Aug  8 12:19:21.235: INFO: stderr: ""
Aug  8 12:19:21.235: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Aug  8 12:19:22.240: INFO: Selector matched 1 pods for map[app:redis]
Aug  8 12:19:22.240: INFO: Found 0 / 1
Aug  8 12:19:23.240: INFO: Selector matched 1 pods for map[app:redis]
Aug  8 12:19:23.240: INFO: Found 0 / 1
Aug  8 12:19:24.241: INFO: Selector matched 1 pods for map[app:redis]
Aug  8 12:19:24.241: INFO: Found 0 / 1
Aug  8 12:19:25.241: INFO: Selector matched 1 pods for map[app:redis]
Aug  8 12:19:25.241: INFO: Found 0 / 1
Aug  8 12:19:26.239: INFO: Selector matched 1 pods for map[app:redis]
Aug  8 12:19:26.240: INFO: Found 1 / 1
Aug  8 12:19:26.240: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug  8 12:19:26.243: INFO: Selector matched 1 pods for map[app:redis]
Aug  8 12:19:26.243: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Aug  8 12:19:26.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-flmss redis-master --namespace=e2e-tests-kubectl-pnb5z'
Aug  8 12:19:26.365: INFO: stderr: ""
Aug  8 12:19:26.365: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 08 Aug 12:19:24.476 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 08 Aug 12:19:24.476 # Server started, Redis version 3.2.12\n1:M 08 Aug 12:19:24.476 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 08 Aug 12:19:24.476 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Aug  8 12:19:26.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-flmss redis-master --namespace=e2e-tests-kubectl-pnb5z --tail=1'
Aug  8 12:19:26.470: INFO: stderr: ""
Aug  8 12:19:26.470: INFO: stdout: "1:M 08 Aug 12:19:24.476 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Aug  8 12:19:26.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-flmss redis-master --namespace=e2e-tests-kubectl-pnb5z --limit-bytes=1'
Aug  8 12:19:26.581: INFO: stderr: ""
Aug  8 12:19:26.581: INFO: stdout: " "
STEP: exposing timestamps
Aug  8 12:19:26.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-flmss redis-master --namespace=e2e-tests-kubectl-pnb5z --tail=1 --timestamps'
Aug  8 12:19:26.689: INFO: stderr: ""
Aug  8 12:19:26.689: INFO: stdout: "2020-08-08T12:19:24.477250275Z 1:M 08 Aug 12:19:24.476 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Aug  8 12:19:29.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-flmss redis-master --namespace=e2e-tests-kubectl-pnb5z --since=1s'
Aug  8 12:19:29.292: INFO: stderr: ""
Aug  8 12:19:29.292: INFO: stdout: ""
Aug  8 12:19:29.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-flmss redis-master --namespace=e2e-tests-kubectl-pnb5z --since=24h'
Aug  8 12:19:29.409: INFO: stderr: ""
Aug  8 12:19:29.409: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 08 Aug 12:19:24.476 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 08 Aug 12:19:24.476 # Server started, Redis version 3.2.12\n1:M 08 Aug 12:19:24.476 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 08 Aug 12:19:24.476 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Aug  8 12:19:29.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-pnb5z'
Aug  8 12:19:29.510: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug  8 12:19:29.510: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Aug  8 12:19:29.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-pnb5z'
Aug  8 12:19:29.632: INFO: stderr: "No resources found.\n"
Aug  8 12:19:29.632: INFO: stdout: ""
Aug  8 12:19:29.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-pnb5z -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug  8 12:19:29.740: INFO: stderr: ""
Aug  8 12:19:29.740: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:19:29.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pnb5z" for this suite.
Aug  8 12:19:35.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:19:35.802: INFO: namespace: e2e-tests-kubectl-pnb5z, resource: bindings, ignored listing per whitelist
Aug  8 12:19:35.840: INFO: namespace e2e-tests-kubectl-pnb5z deletion completed in 6.097107208s

• [SLOW TEST:15.006 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
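The log-filtering steps above map one-to-one onto kubectl flags; a condensed recap against the same pod (pod and container names are taken from this run and only resolve while its namespace still exists):

kubectl logs redis-master-flmss -c redis-master                      # full container log
kubectl logs redis-master-flmss -c redis-master --tail=1             # last line only
kubectl logs redis-master-flmss -c redis-master --limit-bytes=1      # first byte only
kubectl logs redis-master-flmss -c redis-master --tail=1 --timestamps
kubectl logs redis-master-flmss -c redis-master --since=1s           # empty if nothing was logged in the last second
kubectl logs redis-master-flmss -c redis-master --since=24h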
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:19:35.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug  8 12:19:36.027: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  8 12:19:36.029: INFO: Number of nodes with available pods: 0
Aug  8 12:19:36.029: INFO: Node hunter-worker is running more than one daemon pod
Aug  8 12:19:37.034: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  8 12:19:37.037: INFO: Number of nodes with available pods: 0
Aug  8 12:19:37.037: INFO: Node hunter-worker is running more than one daemon pod
Aug  8 12:19:38.039: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  8 12:19:38.043: INFO: Number of nodes with available pods: 0
Aug  8 12:19:38.043: INFO: Node hunter-worker is running more than one daemon pod
Aug  8 12:19:39.067: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  8 12:19:39.071: INFO: Number of nodes with available pods: 0
Aug  8 12:19:39.071: INFO: Node hunter-worker is running more than one daemon pod
Aug  8 12:19:40.034: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  8 12:19:40.038: INFO: Number of nodes with available pods: 1
Aug  8 12:19:40.038: INFO: Node hunter-worker is running more than one daemon pod
Aug  8 12:19:41.033: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  8 12:19:41.036: INFO: Number of nodes with available pods: 2
Aug  8 12:19:41.036: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Aug  8 12:19:41.058: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  8 12:19:41.061: INFO: Number of nodes with available pods: 1
Aug  8 12:19:41.061: INFO: Node hunter-worker2 is running more than one daemon pod
Aug  8 12:19:42.066: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  8 12:19:42.070: INFO: Number of nodes with available pods: 1
Aug  8 12:19:42.070: INFO: Node hunter-worker2 is running more than one daemon pod
Aug  8 12:19:43.066: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  8 12:19:43.069: INFO: Number of nodes with available pods: 1
Aug  8 12:19:43.069: INFO: Node hunter-worker2 is running more than one daemon pod
Aug  8 12:19:44.065: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  8 12:19:44.069: INFO: Number of nodes with available pods: 1
Aug  8 12:19:44.069: INFO: Node hunter-worker2 is running more than one daemon pod
Aug  8 12:19:45.066: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  8 12:19:45.069: INFO: Number of nodes with available pods: 1
Aug  8 12:19:45.069: INFO: Node hunter-worker2 is running more than one daemon pod
Aug  8 12:19:46.066: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  8 12:19:46.070: INFO: Number of nodes with available pods: 1
Aug  8 12:19:46.070: INFO: Node hunter-worker2 is running more than one daemon pod
Aug  8 12:19:47.066: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  8 12:19:47.069: INFO: Number of nodes with available pods: 1
Aug  8 12:19:47.069: INFO: Node hunter-worker2 is running more than one daemon pod
Aug  8 12:19:48.065: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  8 12:19:48.068: INFO: Number of nodes with available pods: 1
Aug  8 12:19:48.068: INFO: Node hunter-worker2 is running more than one daemon pod
Aug  8 12:19:49.074: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  8 12:19:49.077: INFO: Number of nodes with available pods: 1
Aug  8 12:19:49.077: INFO: Node hunter-worker2 is running more than one daemon pod
Aug  8 12:19:50.066: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  8 12:19:50.069: INFO: Number of nodes with available pods: 1
Aug  8 12:19:50.069: INFO: Node hunter-worker2 is running more than one daemon pod
Aug  8 12:19:51.065: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  8 12:19:51.068: INFO: Number of nodes with available pods: 2
Aug  8 12:19:51.068: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-v8bw6, will wait for the garbage collector to delete the pods
Aug  8 12:19:51.130: INFO: Deleting DaemonSet.extensions daemon-set took: 6.061906ms
Aug  8 12:19:51.230: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.228129ms
Aug  8 12:19:57.634: INFO: Number of nodes with available pods: 0
Aug  8 12:19:57.634: INFO: Number of running nodes: 0, number of available pods: 0
Aug  8 12:19:57.637: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-v8bw6/daemonsets","resourceVersion":"5173559"},"items":null}

Aug  8 12:19:57.639: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-v8bw6/pods","resourceVersion":"5173559"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:19:57.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-v8bw6" for this suite.
Aug  8 12:20:03.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:20:03.744: INFO: namespace: e2e-tests-daemonsets-v8bw6, resource: bindings, ignored listing per whitelist
Aug  8 12:20:03.793: INFO: namespace e2e-tests-daemonsets-v8bw6 deletion completed in 6.139968954s

• [SLOW TEST:27.952 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
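For reference, a minimal DaemonSet of the shape this spec creates: one pod lands on each schedulable node, and the tainted control-plane node is skipped exactly as the messages above describe unless a matching toleration is added. The image below is an assumption:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1       # assumed image; the suite runs its own test image
      # tolerations:                      # uncomment to also cover tainted control-plane nodes
      # - key: node-role.kubernetes.io/master
      #   effect: NoSchedule
EOF
kubectl get pods -l app=daemon-set -o wide    # one pod per schedulable worker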
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:20:03.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-7ceeb8f4-d971-11ea-aaa1-0242ac11000c
STEP: Creating a pod to test consume secrets
Aug  8 12:20:03.898: INFO: Waiting up to 5m0s for pod "pod-secrets-7cf0bd1f-d971-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-secrets-mzt2x" to be "success or failure"
Aug  8 12:20:03.902: INFO: Pod "pod-secrets-7cf0bd1f-d971-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.684851ms
Aug  8 12:20:05.906: INFO: Pod "pod-secrets-7cf0bd1f-d971-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007933932s
Aug  8 12:20:07.910: INFO: Pod "pod-secrets-7cf0bd1f-d971-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012245679s
STEP: Saw pod success
Aug  8 12:20:07.910: INFO: Pod "pod-secrets-7cf0bd1f-d971-11ea-aaa1-0242ac11000c" satisfied condition "success or failure"
Aug  8 12:20:07.913: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-7cf0bd1f-d971-11ea-aaa1-0242ac11000c container secret-volume-test: 
STEP: delete the pod
Aug  8 12:20:07.967: INFO: Waiting for pod pod-secrets-7cf0bd1f-d971-11ea-aaa1-0242ac11000c to disappear
Aug  8 12:20:07.980: INFO: Pod pod-secrets-7cf0bd1f-d971-11ea-aaa1-0242ac11000c no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:20:07.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-mzt2x" for this suite.
Aug  8 12:20:13.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:20:14.023: INFO: namespace: e2e-tests-secrets-mzt2x, resource: bindings, ignored listing per whitelist
Aug  8 12:20:14.075: INFO: namespace e2e-tests-secrets-mzt2x deletion completed in 6.091598285s

• [SLOW TEST:10.282 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
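A sketch of the "mappings and item mode" shape this spec verifies: a secret key remapped to a new filename and given a per-item file mode. Secret, pod and key names here are illustrative:

kubectl create secret generic secret-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-item-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox                         # assumed image
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-demo
      items:
      - key: data-1
        path: new-path-data-1              # mapping: key exposed under a different filename
        mode: 0400                         # per-item mode checked by the spec
EOF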
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:20:14.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Aug  8 12:20:14.200: INFO: Waiting up to 5m0s for pod "pod-83151be6-d971-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-emptydir-l6lzl" to be "success or failure"
Aug  8 12:20:14.220: INFO: Pod "pod-83151be6-d971-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.176957ms
Aug  8 12:20:16.228: INFO: Pod "pod-83151be6-d971-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028469725s
Aug  8 12:20:18.232: INFO: Pod "pod-83151be6-d971-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032169216s
STEP: Saw pod success
Aug  8 12:20:18.232: INFO: Pod "pod-83151be6-d971-11ea-aaa1-0242ac11000c" satisfied condition "success or failure"
Aug  8 12:20:18.265: INFO: Trying to get logs from node hunter-worker pod pod-83151be6-d971-11ea-aaa1-0242ac11000c container test-container: 
STEP: delete the pod
Aug  8 12:20:18.356: INFO: Waiting for pod pod-83151be6-d971-11ea-aaa1-0242ac11000c to disappear
Aug  8 12:20:18.359: INFO: Pod pod-83151be6-d971-11ea-aaa1-0242ac11000c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:20:18.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-l6lzl" for this suite.
Aug  8 12:20:24.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:20:24.475: INFO: namespace: e2e-tests-emptydir-l6lzl, resource: bindings, ignored listing per whitelist
Aug  8 12:20:24.498: INFO: namespace e2e-tests-emptydir-l6lzl deletion completed in 6.135742563s

• [SLOW TEST:10.423 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
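The "default medium" in the test name refers to emptyDir's backing store; the only knob the volume offers is the medium field, as in this illustrative fragment (not the suite's exact pod):

volumes:
- name: scratch
  emptyDir: {}              # default: backed by the node's regular storage
# - name: scratch
#   emptyDir:
#     medium: Memory        # alternative: tmpfs-backed, counted against container memory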
SSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:20:24.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Aug  8 12:20:24.643: INFO: Waiting up to 5m0s for pod "client-containers-894d934b-d971-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-containers-94r28" to be "success or failure"
Aug  8 12:20:24.647: INFO: Pod "client-containers-894d934b-d971-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.788316ms
Aug  8 12:20:26.651: INFO: Pod "client-containers-894d934b-d971-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007840122s
Aug  8 12:20:28.654: INFO: Pod "client-containers-894d934b-d971-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011347401s
STEP: Saw pod success
Aug  8 12:20:28.654: INFO: Pod "client-containers-894d934b-d971-11ea-aaa1-0242ac11000c" satisfied condition "success or failure"
Aug  8 12:20:28.657: INFO: Trying to get logs from node hunter-worker2 pod client-containers-894d934b-d971-11ea-aaa1-0242ac11000c container test-container: 
STEP: delete the pod
Aug  8 12:20:28.759: INFO: Waiting for pod client-containers-894d934b-d971-11ea-aaa1-0242ac11000c to disappear
Aug  8 12:20:28.795: INFO: Pod client-containers-894d934b-d971-11ea-aaa1-0242ac11000c no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:20:28.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-94r28" for this suite.
Aug  8 12:20:34.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:20:35.048: INFO: namespace: e2e-tests-containers-94r28, resource: bindings, ignored listing per whitelist
Aug  8 12:20:35.054: INFO: namespace e2e-tests-containers-94r28 deletion completed in 6.255611352s

• [SLOW TEST:10.556 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
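The point of the "image defaults" check is simply that the container spec sets neither command nor args, so the image's own ENTRYPOINT/CMD run unchanged; a minimal illustration (the image choice is arbitrary):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: image-defaults-demo
spec:
  containers:
  - name: test
    image: docker.io/library/nginx:1.14-alpine   # no command:/args: given, so the image's
                                                 # ENTRYPOINT/CMD decide what runs
EOF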
SS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:20:35.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Aug  8 12:20:35.211: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-896mt,SelfLink:/api/v1/namespaces/e2e-tests-watch-896mt/configmaps/e2e-watch-test-label-changed,UID:8f92ca10-d971-11ea-b2c9-0242ac120008,ResourceVersion:5173732,Generation:0,CreationTimestamp:2020-08-08 12:20:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug  8 12:20:35.211: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-896mt,SelfLink:/api/v1/namespaces/e2e-tests-watch-896mt/configmaps/e2e-watch-test-label-changed,UID:8f92ca10-d971-11ea-b2c9-0242ac120008,ResourceVersion:5173733,Generation:0,CreationTimestamp:2020-08-08 12:20:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug  8 12:20:35.211: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-896mt,SelfLink:/api/v1/namespaces/e2e-tests-watch-896mt/configmaps/e2e-watch-test-label-changed,UID:8f92ca10-d971-11ea-b2c9-0242ac120008,ResourceVersion:5173734,Generation:0,CreationTimestamp:2020-08-08 12:20:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Aug  8 12:20:45.253: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-896mt,SelfLink:/api/v1/namespaces/e2e-tests-watch-896mt/configmaps/e2e-watch-test-label-changed,UID:8f92ca10-d971-11ea-b2c9-0242ac120008,ResourceVersion:5173755,Generation:0,CreationTimestamp:2020-08-08 12:20:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug  8 12:20:45.253: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-896mt,SelfLink:/api/v1/namespaces/e2e-tests-watch-896mt/configmaps/e2e-watch-test-label-changed,UID:8f92ca10-d971-11ea-b2c9-0242ac120008,ResourceVersion:5173756,Generation:0,CreationTimestamp:2020-08-08 12:20:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Aug  8 12:20:45.253: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-896mt,SelfLink:/api/v1/namespaces/e2e-tests-watch-896mt/configmaps/e2e-watch-test-label-changed,UID:8f92ca10-d971-11ea-b2c9-0242ac120008,ResourceVersion:5173757,Generation:0,CreationTimestamp:2020-08-08 12:20:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:20:45.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-896mt" for this suite.
Aug  8 12:20:51.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:20:51.376: INFO: namespace: e2e-tests-watch-896mt, resource: bindings, ignored listing per whitelist
Aug  8 12:20:51.389: INFO: namespace e2e-tests-watch-896mt deletion completed in 6.087204825s

• [SLOW TEST:16.335 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
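The events above can be reproduced with a label-filtered watch: when the label changes, the object stops matching the selector and the watcher sees it removed; when the label is restored it comes back as ADDED. A sketch with illustrative names:

kubectl create configmap e2e-watch-test-label-changed --from-literal=mutation=0
kubectl label configmap e2e-watch-test-label-changed watch-this-configmap=label-changed-and-restored
# watch only objects matching the label; newer kubectl can tag each event with
# its type via --output-watch-events
kubectl get configmaps -l watch-this-configmap=label-changed-and-restored --watch &
kubectl label configmap e2e-watch-test-label-changed watch-this-configmap=off --overwrite                          # leaves the selector: DELETED for the watcher
kubectl label configmap e2e-watch-test-label-changed watch-this-configmap=label-changed-and-restored --overwrite   # and returns: ADDED again
kubectl delete configmap e2e-watch-test-label-changed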
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:20:51.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-szrcn
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug  8 12:20:51.514: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug  8 12:21:19.685: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.99:8080/dial?request=hostName&protocol=http&host=10.244.1.98&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-szrcn PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  8 12:21:19.685: INFO: >>> kubeConfig: /root/.kube/config
I0808 12:21:19.719391       6 log.go:172] (0xc00093b8c0) (0xc0009d1040) Create stream
I0808 12:21:19.719425       6 log.go:172] (0xc00093b8c0) (0xc0009d1040) Stream added, broadcasting: 1
I0808 12:21:19.721138       6 log.go:172] (0xc00093b8c0) Reply frame received for 1
I0808 12:21:19.721178       6 log.go:172] (0xc00093b8c0) (0xc0009d1180) Create stream
I0808 12:21:19.721194       6 log.go:172] (0xc00093b8c0) (0xc0009d1180) Stream added, broadcasting: 3
I0808 12:21:19.722250       6 log.go:172] (0xc00093b8c0) Reply frame received for 3
I0808 12:21:19.722295       6 log.go:172] (0xc00093b8c0) (0xc0019c7cc0) Create stream
I0808 12:21:19.722311       6 log.go:172] (0xc00093b8c0) (0xc0019c7cc0) Stream added, broadcasting: 5
I0808 12:21:19.723139       6 log.go:172] (0xc00093b8c0) Reply frame received for 5
I0808 12:21:19.881764       6 log.go:172] (0xc00093b8c0) Data frame received for 3
I0808 12:21:19.881802       6 log.go:172] (0xc0009d1180) (3) Data frame handling
I0808 12:21:19.881822       6 log.go:172] (0xc0009d1180) (3) Data frame sent
I0808 12:21:19.882631       6 log.go:172] (0xc00093b8c0) Data frame received for 3
I0808 12:21:19.882674       6 log.go:172] (0xc0009d1180) (3) Data frame handling
I0808 12:21:19.882948       6 log.go:172] (0xc00093b8c0) Data frame received for 5
I0808 12:21:19.882979       6 log.go:172] (0xc0019c7cc0) (5) Data frame handling
I0808 12:21:19.885348       6 log.go:172] (0xc00093b8c0) Data frame received for 1
I0808 12:21:19.885424       6 log.go:172] (0xc0009d1040) (1) Data frame handling
I0808 12:21:19.885507       6 log.go:172] (0xc0009d1040) (1) Data frame sent
I0808 12:21:19.885541       6 log.go:172] (0xc00093b8c0) (0xc0009d1040) Stream removed, broadcasting: 1
I0808 12:21:19.885625       6 log.go:172] (0xc00093b8c0) Go away received
I0808 12:21:19.885731       6 log.go:172] (0xc00093b8c0) (0xc0009d1040) Stream removed, broadcasting: 1
I0808 12:21:19.885760       6 log.go:172] (0xc00093b8c0) (0xc0009d1180) Stream removed, broadcasting: 3
I0808 12:21:19.885772       6 log.go:172] (0xc00093b8c0) (0xc0019c7cc0) Stream removed, broadcasting: 5
Aug  8 12:21:19.885: INFO: Waiting for endpoints: map[]
Aug  8 12:21:19.889: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.99:8080/dial?request=hostName&protocol=http&host=10.244.2.181&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-szrcn PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  8 12:21:19.889: INFO: >>> kubeConfig: /root/.kube/config
I0808 12:21:19.921795       6 log.go:172] (0xc000e3e580) (0xc0022e5540) Create stream
I0808 12:21:19.921828       6 log.go:172] (0xc000e3e580) (0xc0022e5540) Stream added, broadcasting: 1
I0808 12:21:19.924378       6 log.go:172] (0xc000e3e580) Reply frame received for 1
I0808 12:21:19.924414       6 log.go:172] (0xc000e3e580) (0xc00188b860) Create stream
I0808 12:21:19.924428       6 log.go:172] (0xc000e3e580) (0xc00188b860) Stream added, broadcasting: 3
I0808 12:21:19.925581       6 log.go:172] (0xc000e3e580) Reply frame received for 3
I0808 12:21:19.925643       6 log.go:172] (0xc000e3e580) (0xc00188b900) Create stream
I0808 12:21:19.925660       6 log.go:172] (0xc000e3e580) (0xc00188b900) Stream added, broadcasting: 5
I0808 12:21:19.926598       6 log.go:172] (0xc000e3e580) Reply frame received for 5
I0808 12:21:20.002507       6 log.go:172] (0xc000e3e580) Data frame received for 3
I0808 12:21:20.002552       6 log.go:172] (0xc00188b860) (3) Data frame handling
I0808 12:21:20.002580       6 log.go:172] (0xc00188b860) (3) Data frame sent
I0808 12:21:20.003133       6 log.go:172] (0xc000e3e580) Data frame received for 5
I0808 12:21:20.003153       6 log.go:172] (0xc00188b900) (5) Data frame handling
I0808 12:21:20.003235       6 log.go:172] (0xc000e3e580) Data frame received for 3
I0808 12:21:20.003277       6 log.go:172] (0xc00188b860) (3) Data frame handling
I0808 12:21:20.004420       6 log.go:172] (0xc000e3e580) Data frame received for 1
I0808 12:21:20.004445       6 log.go:172] (0xc0022e5540) (1) Data frame handling
I0808 12:21:20.004465       6 log.go:172] (0xc0022e5540) (1) Data frame sent
I0808 12:21:20.004476       6 log.go:172] (0xc000e3e580) (0xc0022e5540) Stream removed, broadcasting: 1
I0808 12:21:20.004499       6 log.go:172] (0xc000e3e580) Go away received
I0808 12:21:20.004564       6 log.go:172] (0xc000e3e580) (0xc0022e5540) Stream removed, broadcasting: 1
I0808 12:21:20.004575       6 log.go:172] (0xc000e3e580) (0xc00188b860) Stream removed, broadcasting: 3
I0808 12:21:20.004580       6 log.go:172] (0xc000e3e580) (0xc00188b900) Stream removed, broadcasting: 5
Aug  8 12:21:20.004: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:21:20.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-szrcn" for this suite.
Aug  8 12:21:44.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:21:44.085: INFO: namespace: e2e-tests-pod-network-test-szrcn, resource: bindings, ignored listing per whitelist
Aug  8 12:21:44.102: INFO: namespace e2e-tests-pod-network-test-szrcn deletion completed in 24.094665073s

• [SLOW TEST:52.713 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
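The connectivity check is just the exec'd curl quoted in the ExecWithOptions lines above; run by hand it looks like this (pod names and IPs are the ones from this run and are only valid while the namespace exists):

kubectl -n e2e-tests-pod-network-test-szrcn exec host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s 'http://10.244.1.99:8080/dial?request=hostName&protocol=http&host=10.244.1.98&port=8080&tries=1'"
# the netserver's /dial endpoint makes the target pod report its hostname,
# proving pod-to-pod HTTP reachability across nodes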
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:21:44.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug  8 12:21:52.272: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug  8 12:21:52.277: INFO: Pod pod-with-prestop-exec-hook still exists
Aug  8 12:21:54.277: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug  8 12:21:54.282: INFO: Pod pod-with-prestop-exec-hook still exists
Aug  8 12:21:56.277: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug  8 12:21:56.282: INFO: Pod pod-with-prestop-exec-hook still exists
Aug  8 12:21:58.277: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug  8 12:21:58.281: INFO: Pod pod-with-prestop-exec-hook still exists
Aug  8 12:22:00.277: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug  8 12:22:00.282: INFO: Pod pod-with-prestop-exec-hook still exists
Aug  8 12:22:02.277: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug  8 12:22:02.281: INFO: Pod pod-with-prestop-exec-hook still exists
Aug  8 12:22:04.277: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug  8 12:22:04.281: INFO: Pod pod-with-prestop-exec-hook still exists
Aug  8 12:22:06.277: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug  8 12:22:06.284: INFO: Pod pod-with-prestop-exec-hook still exists
Aug  8 12:22:08.278: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug  8 12:22:08.283: INFO: Pod pod-with-prestop-exec-hook still exists
Aug  8 12:22:10.277: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug  8 12:22:10.285: INFO: Pod pod-with-prestop-exec-hook still exists
Aug  8 12:22:12.277: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug  8 12:22:12.284: INFO: Pod pod-with-prestop-exec-hook still exists
Aug  8 12:22:14.277: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug  8 12:22:14.282: INFO: Pod pod-with-prestop-exec-hook still exists
Aug  8 12:22:16.277: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug  8 12:22:16.282: INFO: Pod pod-with-prestop-exec-hook still exists
Aug  8 12:22:18.277: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug  8 12:22:18.281: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:22:18.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-n5hdc" for this suite.
Aug  8 12:22:40.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:22:40.328: INFO: namespace: e2e-tests-container-lifecycle-hook-n5hdc, resource: bindings, ignored listing per whitelist
Aug  8 12:22:40.396: INFO: namespace e2e-tests-container-lifecycle-hook-n5hdc deletion completed in 22.104897038s

• [SLOW TEST:56.293 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
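The long "still exists" tail above is the preStop hook running to completion before the kubelet kills the container. A minimal pod with a preStop exec hook (image and hook command are illustrative; the suite's hook calls back to its handler pod instead):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: app
    image: busybox                        # assumed image
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo prestop ran; sleep 5"]   # runs before SIGTERM is delivered
EOF
kubectl delete pod pod-with-prestop-exec-hook   # deletion waits for the hook, bounded by terminationGracePeriodSeconds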
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:22:40.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-da4bff47-d971-11ea-aaa1-0242ac11000c
STEP: Creating a pod to test consume secrets
Aug  8 12:22:40.543: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-da4c6649-d971-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-projected-xsrtm" to be "success or failure"
Aug  8 12:22:40.553: INFO: Pod "pod-projected-secrets-da4c6649-d971-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.219097ms
Aug  8 12:22:42.558: INFO: Pod "pod-projected-secrets-da4c6649-d971-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014390032s
Aug  8 12:22:44.561: INFO: Pod "pod-projected-secrets-da4c6649-d971-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018081314s
STEP: Saw pod success
Aug  8 12:22:44.561: INFO: Pod "pod-projected-secrets-da4c6649-d971-11ea-aaa1-0242ac11000c" satisfied condition "success or failure"
Aug  8 12:22:44.564: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-da4c6649-d971-11ea-aaa1-0242ac11000c container projected-secret-volume-test: 
STEP: delete the pod
Aug  8 12:22:44.584: INFO: Waiting for pod pod-projected-secrets-da4c6649-d971-11ea-aaa1-0242ac11000c to disappear
Aug  8 12:22:44.613: INFO: Pod pod-projected-secrets-da4c6649-d971-11ea-aaa1-0242ac11000c no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:22:44.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xsrtm" for this suite.
Aug  8 12:22:50.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:22:50.728: INFO: namespace: e2e-tests-projected-xsrtm, resource: bindings, ignored listing per whitelist
Aug  8 12:22:50.769: INFO: namespace e2e-tests-projected-xsrtm deletion completed in 6.152529433s

• [SLOW TEST:10.373 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
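A sketch of the projected-secret-with-defaultMode shape this spec covers; defaultMode on the projected volume applies to every projected file unless a per-item mode overrides it. Names below are illustrative:

kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox                         # assumed image
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400                    # mode applied to each projected file
      sources:
      - secret:
          name: projected-secret-demo
EOF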
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:22:50.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-e07ae4c2-d971-11ea-aaa1-0242ac11000c
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:22:56.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-frfth" for this suite.
Aug  8 12:23:18.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:23:18.986: INFO: namespace: e2e-tests-configmap-frfth, resource: bindings, ignored listing per whitelist
Aug  8 12:23:19.039: INFO: namespace e2e-tests-configmap-frfth deletion completed in 22.100666783s

• [SLOW TEST:28.270 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
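
Binary keys live in the ConfigMap's binaryData field (base64-encoded) next to the plain data map, and both end up as files in the mounted volume. A sketch with made-up names; the payload is just base64 for a few raw bytes:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-binary-cm
data:
  text-key: "hello"
binaryData:
  binary-key: "3q2+7w=="         # base64 of the raw bytes 0xDE 0xAD 0xBE 0xEF
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-binary-demo
spec:
  restartPolicy: Never
  containers:
  - name: tester
    image: busybox
    # show the text key, then dump the binary key byte by byte
    command: ["sh", "-c", "cat /etc/cm/text-key && od -An -tx1 /etc/cm/binary-key"]
    volumeMounts:
    - name: cm-vol
      mountPath: /etc/cm
  volumes:
  - name: cm-vol
    configMap:
      name: demo-binary-cm
EOF
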
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:23:19.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Aug  8 12:23:19.147: INFO: Waiting up to 5m0s for pod "client-containers-f14f748c-d971-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-containers-5pqsf" to be "success or failure"
Aug  8 12:23:19.153: INFO: Pod "client-containers-f14f748c-d971-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.513114ms
Aug  8 12:23:21.159: INFO: Pod "client-containers-f14f748c-d971-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012074007s
Aug  8 12:23:23.162: INFO: Pod "client-containers-f14f748c-d971-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014553744s
STEP: Saw pod success
Aug  8 12:23:23.162: INFO: Pod "client-containers-f14f748c-d971-11ea-aaa1-0242ac11000c" satisfied condition "success or failure"
Aug  8 12:23:23.163: INFO: Trying to get logs from node hunter-worker pod client-containers-f14f748c-d971-11ea-aaa1-0242ac11000c container test-container: 
STEP: delete the pod
Aug  8 12:23:23.209: INFO: Waiting for pod client-containers-f14f748c-d971-11ea-aaa1-0242ac11000c to disappear
Aug  8 12:23:23.219: INFO: Pod client-containers-f14f748c-d971-11ea-aaa1-0242ac11000c no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:23:23.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-5pqsf" for this suite.
Aug  8 12:23:29.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:23:29.238: INFO: namespace: e2e-tests-containers-5pqsf, resource: bindings, ignored listing per whitelist
Aug  8 12:23:29.328: INFO: namespace e2e-tests-containers-5pqsf deletion completed in 6.106137183s

• [SLOW TEST:10.288 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
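
In pod-spec terms, "docker cmd" is the container's args field: setting args overrides the image's default CMD, while command (the ENTRYPOINT analogue) is left alone or set explicitly as below. A sketch with illustrative names and arguments:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: args-override-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["echo"]                    # ENTRYPOINT equivalent, set explicitly here
    args: ["override", "arguments"]      # replaces the image's default CMD
EOF

kubectl logs args-override-demo          # expected to print: override arguments
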
SSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:23:29.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Aug  8 12:23:29.461: INFO: Waiting up to 5m0s for pod "var-expansion-f773df74-d971-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-var-expansion-s2pxt" to be "success or failure"
Aug  8 12:23:29.502: INFO: Pod "var-expansion-f773df74-d971-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 40.535106ms
Aug  8 12:23:31.506: INFO: Pod "var-expansion-f773df74-d971-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044487426s
Aug  8 12:23:33.510: INFO: Pod "var-expansion-f773df74-d971-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0486563s
STEP: Saw pod success
Aug  8 12:23:33.510: INFO: Pod "var-expansion-f773df74-d971-11ea-aaa1-0242ac11000c" satisfied condition "success or failure"
Aug  8 12:23:33.513: INFO: Trying to get logs from node hunter-worker pod var-expansion-f773df74-d971-11ea-aaa1-0242ac11000c container dapi-container: 
STEP: delete the pod
Aug  8 12:23:33.592: INFO: Waiting for pod var-expansion-f773df74-d971-11ea-aaa1-0242ac11000c to disappear
Aug  8 12:23:33.597: INFO: Pod var-expansion-f773df74-d971-11ea-aaa1-0242ac11000c no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:23:33.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-s2pxt" for this suite.
Aug  8 12:23:39.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:23:39.673: INFO: namespace: e2e-tests-var-expansion-s2pxt, resource: bindings, ignored listing per whitelist
Aug  8 12:23:39.719: INFO: namespace e2e-tests-var-expansion-s2pxt deletion completed in 6.11847195s

• [SLOW TEST:10.390 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
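
Variable substitution in args uses the $(VAR_NAME) syntax and is resolved by the kubelet from the container's env entries, not by a shell. A sketch, names and values illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MY_GREETING
      value: "hello from the environment"
    command: ["echo"]
    args: ["$(MY_GREETING)"]       # expanded by the kubelet before the container starts
EOF
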
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:23:39.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-fdad7171-d971-11ea-aaa1-0242ac11000c
STEP: Creating a pod to test consume configMaps
Aug  8 12:23:39.887: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fdadfa84-d971-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-projected-m2n7h" to be "success or failure"
Aug  8 12:23:39.903: INFO: Pod "pod-projected-configmaps-fdadfa84-d971-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.001135ms
Aug  8 12:23:41.907: INFO: Pod "pod-projected-configmaps-fdadfa84-d971-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019820594s
Aug  8 12:23:43.911: INFO: Pod "pod-projected-configmaps-fdadfa84-d971-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023963244s
STEP: Saw pod success
Aug  8 12:23:43.911: INFO: Pod "pod-projected-configmaps-fdadfa84-d971-11ea-aaa1-0242ac11000c" satisfied condition "success or failure"
Aug  8 12:23:43.913: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-fdadfa84-d971-11ea-aaa1-0242ac11000c container projected-configmap-volume-test: 
STEP: delete the pod
Aug  8 12:23:44.296: INFO: Waiting for pod pod-projected-configmaps-fdadfa84-d971-11ea-aaa1-0242ac11000c to disappear
Aug  8 12:23:44.309: INFO: Pod pod-projected-configmaps-fdadfa84-d971-11ea-aaa1-0242ac11000c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:23:44.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-m2n7h" for this suite.
Aug  8 12:23:50.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:23:50.336: INFO: namespace: e2e-tests-projected-m2n7h, resource: bindings, ignored listing per whitelist
Aug  8 12:23:50.394: INFO: namespace e2e-tests-projected-m2n7h deletion completed in 6.080487608s

• [SLOW TEST:10.675 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
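
"With mappings" means the projected configMap source carries an items list that renames keys to chosen paths inside the volume. A sketch with made-up key and path names:

kubectl create configmap demo-mapped-cm --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected/path/to/data-1"]
    volumeMounts:
    - name: cm-vol
      mountPath: /etc/projected
  volumes:
  - name: cm-vol
    projected:
      sources:
      - configMap:
          name: demo-mapped-cm
          items:
          - key: data-1
            path: path/to/data-1     # the key is exposed at this relative path instead of under its own name
EOF
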
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:23:50.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-gpqrh
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-gpqrh
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-gpqrh
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-gpqrh
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-gpqrh
Aug  8 12:23:54.675: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-gpqrh, name: ss-0, uid: 05e58a1e-d972-11ea-b2c9-0242ac120008, status phase: Pending. Waiting for statefulset controller to delete.
Aug  8 12:23:54.843: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-gpqrh, name: ss-0, uid: 05e58a1e-d972-11ea-b2c9-0242ac120008, status phase: Failed. Waiting for statefulset controller to delete.
Aug  8 12:23:54.886: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-gpqrh, name: ss-0, uid: 05e58a1e-d972-11ea-b2c9-0242ac120008, status phase: Failed. Waiting for statefulset controller to delete.
Aug  8 12:23:54.897: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-gpqrh
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-gpqrh
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-gpqrh and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Aug  8 12:23:58.959: INFO: Deleting all statefulset in ns e2e-tests-statefulset-gpqrh
Aug  8 12:23:58.963: INFO: Scaling statefulset ss to 0
Aug  8 12:24:08.984: INFO: Waiting for statefulset status.replicas updated to 0
Aug  8 12:24:08.987: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:24:09.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-gpqrh" for this suite.
Aug  8 12:24:15.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:24:15.100: INFO: namespace: e2e-tests-statefulset-gpqrh, resource: bindings, ignored listing per whitelist
Aug  8 12:24:15.168: INFO: namespace e2e-tests-statefulset-gpqrh deletion completed in 6.09562149s

• [SLOW TEST:24.773 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
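
The scenario above pins an ordinary pod and a one-replica StatefulSet to the same node with the same hostPort, so ss-0 is rejected (the Pending/Failed phases logged above) and the controller keeps deleting and recreating it until the conflicting pod goes away. A rough sketch of the StatefulSet side only; the node name, port and image are placeholders:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  clusterIP: None
  selector:
    app: ss-demo
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels:
      app: ss-demo
  template:
    metadata:
      labels:
        app: ss-demo
    spec:
      nodeName: hunter-worker        # pinned to the node where the conflicting pod already holds the hostPort
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          hostPort: 21017            # placeholder; must collide with the pre-existing pod to reproduce the recreate loop
EOF
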
SSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:24:15.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-w9jt2
Aug  8 12:24:19.289: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-w9jt2
STEP: checking the pod's current state and verifying that restartCount is present
Aug  8 12:24:19.292: INFO: Initial restart count of pod liveness-http is 0
Aug  8 12:24:41.341: INFO: Restart count of pod e2e-tests-container-probe-w9jt2/liveness-http is now 1 (22.04877593s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:24:41.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-w9jt2" for this suite.
Aug  8 12:24:47.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:24:47.429: INFO: namespace: e2e-tests-container-probe-w9jt2, resource: bindings, ignored listing per whitelist
Aug  8 12:24:47.497: INFO: namespace e2e-tests-container-probe-w9jt2 deletion completed in 6.111926816s

• [SLOW TEST:32.329 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
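
The restart counted above comes from an httpGet liveness probe on /healthz that starts failing after a short healthy window, at which point the kubelet kills and restarts the container. A sketch along the lines of the upstream liveness example; the agnhost image tag and probe timings are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: registry.k8s.io/e2e-test-images/agnhost:2.39
    args: ["liveness"]                 # serves /healthz successfully for a few seconds, then starts returning errors
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
EOF

kubectl get pod liveness-http-demo -w    # RESTARTS ticks up once the probe starts failing
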
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:24:47.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug  8 12:24:47.587: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/: 
alternatives.log
containers/
[the same two-entry log directory listing is repeated for each proxied /logs/ request; the log is truncated here, dropping the rest of this Proxy spec and the header of the following Kubelet spec]
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:24:54.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-6fp8s" for this suite.
Aug  8 12:25:00.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:25:00.151: INFO: namespace: e2e-tests-kubelet-test-6fp8s, resource: bindings, ignored listing per whitelist
Aug  8 12:25:00.166: INFO: namespace e2e-tests-kubelet-test-6fp8s deletion completed in 6.122916803s

• [SLOW TEST:6.348 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
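
The "always fails" spec just verifies that a pod whose container command never succeeds can still be deleted cleanly. A sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo
spec:
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]      # exits non-zero every time, so the container keeps failing and restarting
EOF

kubectl delete pod bin-false-demo --wait=true    # deletion is expected to succeed even though the container never ran successfully
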
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:25:00.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-2d98222d-d972-11ea-aaa1-0242ac11000c
STEP: Creating a pod to test consume secrets
Aug  8 12:25:00.289: INFO: Waiting up to 5m0s for pod "pod-secrets-2d9a76d0-d972-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-secrets-4dbjk" to be "success or failure"
Aug  8 12:25:00.341: INFO: Pod "pod-secrets-2d9a76d0-d972-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 52.122184ms
Aug  8 12:25:02.346: INFO: Pod "pod-secrets-2d9a76d0-d972-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05664204s
Aug  8 12:25:04.349: INFO: Pod "pod-secrets-2d9a76d0-d972-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060466s
STEP: Saw pod success
Aug  8 12:25:04.350: INFO: Pod "pod-secrets-2d9a76d0-d972-11ea-aaa1-0242ac11000c" satisfied condition "success or failure"
Aug  8 12:25:04.352: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-2d9a76d0-d972-11ea-aaa1-0242ac11000c container secret-volume-test: 
STEP: delete the pod
Aug  8 12:25:04.379: INFO: Waiting for pod pod-secrets-2d9a76d0-d972-11ea-aaa1-0242ac11000c to disappear
Aug  8 12:25:04.396: INFO: Pod pod-secrets-2d9a76d0-d972-11ea-aaa1-0242ac11000c no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:25:04.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-4dbjk" for this suite.
Aug  8 12:25:10.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:25:10.469: INFO: namespace: e2e-tests-secrets-4dbjk, resource: bindings, ignored listing per whitelist
Aug  8 12:25:10.486: INFO: namespace e2e-tests-secrets-4dbjk deletion completed in 6.088138315s

• [SLOW TEST:10.320 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
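
A plain (non-projected) secret volume: every key of the Secret becomes a file under the mount path. Sketch with illustrative names:

kubectl create secret generic demo-secret-2 --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-vol
    secret:
      secretName: demo-secret-2
EOF
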
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:25:10.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:25:14.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-8twrl" for this suite.
Aug  8 12:25:52.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:25:52.762: INFO: namespace: e2e-tests-kubelet-test-8twrl, resource: bindings, ignored listing per whitelist
Aug  8 12:25:52.799: INFO: namespace e2e-tests-kubelet-test-8twrl deletion completed in 38.117309085s

• [SLOW TEST:42.313 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
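
The read-only root filesystem behaviour is a per-container securityContext switch: writes anywhere on the image filesystem fail, while mounted volumes remain writable. Sketch:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-rootfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox-readonly
    image: busybox
    # the touch is expected to fail with a read-only filesystem error
    command: ["sh", "-c", "touch /should-fail; echo exit=$?"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
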
SSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:25:52.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Aug  8 12:25:52.884: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug  8 12:25:52.898: INFO: Waiting for terminating namespaces to be deleted...
Aug  8 12:25:52.901: INFO: 
Logging pods the kubelet thinks is on node hunter-worker before test
Aug  8 12:25:52.907: INFO: kube-proxy-8wnps from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container statuses recorded)
Aug  8 12:25:52.907: INFO: 	Container kube-proxy ready: true, restart count 0
Aug  8 12:25:52.907: INFO: kindnet-2w5m4 from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container statuses recorded)
Aug  8 12:25:52.907: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug  8 12:25:52.907: INFO: 
Logging pods the kubelet thinks is on node hunter-worker2 before test
Aug  8 12:25:52.914: INFO: kindnet-hpnvh from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container statuses recorded)
Aug  8 12:25:52.914: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug  8 12:25:52.914: INFO: kube-proxy-b6f6s from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container statuses recorded)
Aug  8 12:25:52.914: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.16294a9675a4e92f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:25:53.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-zlhzw" for this suite.
Aug  8 12:25:59.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:26:00.017: INFO: namespace: e2e-tests-sched-pred-zlhzw, resource: bindings, ignored listing per whitelist
Aug  8 12:26:00.061: INFO: namespace e2e-tests-sched-pred-zlhzw deletion completed in 6.095694088s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.262 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
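
The predicate check is simply a pod whose nodeSelector matches no node label, which stays Pending with the FailedScheduling event quoted above. Sketch with a made-up label key:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod-demo
spec:
  nodeSelector:
    example.invalid/no-node-has-this: "42"    # matches nothing, so scheduling fails
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF

kubectl describe pod restricted-pod-demo      # Events should show FailedScheduling: node(s) didn't match node selector
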
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:26:00.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Aug  8 12:26:00.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-plmz9'
Aug  8 12:26:00.477: INFO: stderr: ""
Aug  8 12:26:00.477: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug  8 12:26:00.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-plmz9'
Aug  8 12:26:00.589: INFO: stderr: ""
Aug  8 12:26:00.589: INFO: stdout: "update-demo-nautilus-4ltw2 update-demo-nautilus-clt6c "
Aug  8 12:26:00.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4ltw2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-plmz9'
Aug  8 12:26:00.683: INFO: stderr: ""
Aug  8 12:26:00.683: INFO: stdout: ""
Aug  8 12:26:00.683: INFO: update-demo-nautilus-4ltw2 is created but not running
Aug  8 12:26:05.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-plmz9'
Aug  8 12:26:05.802: INFO: stderr: ""
Aug  8 12:26:05.802: INFO: stdout: "update-demo-nautilus-4ltw2 update-demo-nautilus-clt6c "
Aug  8 12:26:05.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4ltw2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-plmz9'
Aug  8 12:26:05.898: INFO: stderr: ""
Aug  8 12:26:05.898: INFO: stdout: "true"
Aug  8 12:26:05.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4ltw2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-plmz9'
Aug  8 12:26:05.999: INFO: stderr: ""
Aug  8 12:26:05.999: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug  8 12:26:05.999: INFO: validating pod update-demo-nautilus-4ltw2
Aug  8 12:26:06.010: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug  8 12:26:06.010: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug  8 12:26:06.010: INFO: update-demo-nautilus-4ltw2 is verified up and running
Aug  8 12:26:06.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-clt6c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-plmz9'
Aug  8 12:26:06.110: INFO: stderr: ""
Aug  8 12:26:06.110: INFO: stdout: "true"
Aug  8 12:26:06.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-clt6c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-plmz9'
Aug  8 12:26:06.195: INFO: stderr: ""
Aug  8 12:26:06.195: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug  8 12:26:06.195: INFO: validating pod update-demo-nautilus-clt6c
Aug  8 12:26:06.208: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug  8 12:26:06.208: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug  8 12:26:06.208: INFO: update-demo-nautilus-clt6c is verified up and running
STEP: rolling-update to new replication controller
Aug  8 12:26:06.211: INFO: scanned /root for discovery docs: 
Aug  8 12:26:06.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-plmz9'
Aug  8 12:26:28.800: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug  8 12:26:28.800: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug  8 12:26:28.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-plmz9'
Aug  8 12:26:28.918: INFO: stderr: ""
Aug  8 12:26:28.918: INFO: stdout: "update-demo-kitten-2k4k7 update-demo-kitten-x4hb5 "
Aug  8 12:26:28.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2k4k7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-plmz9'
Aug  8 12:26:29.020: INFO: stderr: ""
Aug  8 12:26:29.020: INFO: stdout: "true"
Aug  8 12:26:29.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2k4k7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-plmz9'
Aug  8 12:26:29.126: INFO: stderr: ""
Aug  8 12:26:29.126: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug  8 12:26:29.126: INFO: validating pod update-demo-kitten-2k4k7
Aug  8 12:26:29.130: INFO: got data: {
  "image": "kitten.jpg"
}

Aug  8 12:26:29.130: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Aug  8 12:26:29.130: INFO: update-demo-kitten-2k4k7 is verified up and running
Aug  8 12:26:29.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-x4hb5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-plmz9'
Aug  8 12:26:29.228: INFO: stderr: ""
Aug  8 12:26:29.228: INFO: stdout: "true"
Aug  8 12:26:29.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-x4hb5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-plmz9'
Aug  8 12:26:29.338: INFO: stderr: ""
Aug  8 12:26:29.338: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug  8 12:26:29.338: INFO: validating pod update-demo-kitten-x4hb5
Aug  8 12:26:29.342: INFO: got data: {
  "image": "kitten.jpg"
}

Aug  8 12:26:29.342: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Aug  8 12:26:29.342: INFO: update-demo-kitten-x4hb5 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:26:29.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-plmz9" for this suite.
Aug  8 12:26:53.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:26:53.390: INFO: namespace: e2e-tests-kubectl-plmz9, resource: bindings, ignored listing per whitelist
Aug  8 12:26:53.460: INFO: namespace e2e-tests-kubectl-plmz9 deletion completed in 24.115280662s

• [SLOW TEST:53.398 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
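
The Update Demo spec drives the long-deprecated kubectl rolling-update (already warned about in the stderr above, and removed from kubectl in later releases; Deployments plus kubectl rollout are the current equivalent) against a ReplicationController. The controller it starts from looks roughly like this; the name, label and image come from the log, the rest is a sketch:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 8080
EOF

# rolling-update then replaces it pod-by-pod with a second controller (different name and selector) read from a file or stdin:
# kubectl rolling-update update-demo-nautilus --update-period=1s -f update-demo-kitten.yaml    # hypothetical file
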
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:26:53.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Aug  8 12:26:53.545: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix682753721/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:26:53.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-25hvw" for this suite.
Aug  8 12:26:59.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:26:59.664: INFO: namespace: e2e-tests-kubectl-25hvw, resource: bindings, ignored listing per whitelist
Aug  8 12:26:59.699: INFO: namespace e2e-tests-kubectl-25hvw deletion completed in 6.088190126s

• [SLOW TEST:6.239 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
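
--unix-socket makes kubectl proxy listen on a local socket instead of a TCP port; anything that can speak HTTP over a unix socket can then reach the API through it. A sketch, socket path made up (curl's --unix-socket flag does the client side):

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
sleep 1
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/    # host part is ignored; prints the /api/ version info
kill %1
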
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:26:59.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-74df7b3a-d972-11ea-aaa1-0242ac11000c
STEP: Creating a pod to test consume secrets
Aug  8 12:26:59.926: INFO: Waiting up to 5m0s for pod "pod-secrets-74e34eb5-d972-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-secrets-prwrj" to be "success or failure"
Aug  8 12:26:59.961: INFO: Pod "pod-secrets-74e34eb5-d972-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 34.319372ms
Aug  8 12:27:01.965: INFO: Pod "pod-secrets-74e34eb5-d972-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038779937s
Aug  8 12:27:03.970: INFO: Pod "pod-secrets-74e34eb5-d972-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043669674s
STEP: Saw pod success
Aug  8 12:27:03.970: INFO: Pod "pod-secrets-74e34eb5-d972-11ea-aaa1-0242ac11000c" satisfied condition "success or failure"
Aug  8 12:27:03.973: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-74e34eb5-d972-11ea-aaa1-0242ac11000c container secret-volume-test: 
STEP: delete the pod
Aug  8 12:27:04.010: INFO: Waiting for pod pod-secrets-74e34eb5-d972-11ea-aaa1-0242ac11000c to disappear
Aug  8 12:27:04.133: INFO: Pod pod-secrets-74e34eb5-d972-11ea-aaa1-0242ac11000c no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:27:04.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-prwrj" for this suite.
Aug  8 12:27:10.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:27:10.354: INFO: namespace: e2e-tests-secrets-prwrj, resource: bindings, ignored listing per whitelist
Aug  8 12:27:10.377: INFO: namespace e2e-tests-secrets-prwrj deletion completed in 6.238724194s

• [SLOW TEST:10.678 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
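
"Multiple volumes" here just means the same Secret mounted twice through two volume entries; both mounts see the same files. Sketch, reusing the hypothetical demo-secret-2 from the earlier secret-volume sketch:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-two-volumes-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-1/data-1 /etc/secret-2/data-1"]
    volumeMounts:
    - name: secret-vol-1
      mountPath: /etc/secret-1
    - name: secret-vol-2
      mountPath: /etc/secret-2
  volumes:
  - name: secret-vol-1
    secret:
      secretName: demo-secret-2
  - name: secret-vol-2
    secret:
      secretName: demo-secret-2    # same secret, mounted a second time
EOF
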
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:27:10.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-7b3ea264-d972-11ea-aaa1-0242ac11000c
STEP: Creating configMap with name cm-test-opt-upd-7b3ea2cf-d972-11ea-aaa1-0242ac11000c
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-7b3ea264-d972-11ea-aaa1-0242ac11000c
STEP: Updating configmap cm-test-opt-upd-7b3ea2cf-d972-11ea-aaa1-0242ac11000c
STEP: Creating configMap with name cm-test-opt-create-7b3ea30a-d972-11ea-aaa1-0242ac11000c
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:27:18.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-vtjrd" for this suite.
Aug  8 12:27:40.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:27:40.791: INFO: namespace: e2e-tests-configmap-vtjrd, resource: bindings, ignored listing per whitelist
Aug  8 12:27:40.831: INFO: namespace e2e-tests-configmap-vtjrd deletion completed in 22.13588253s

• [SLOW TEST:30.453 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
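
The optional-updates spec mounts configMaps with optional: true, so a missing or deleted configMap does not block the pod, and a later create or update shows up in the volume after the kubelet's next sync. A sketch of the volume shape, names hypothetical:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: optional-configmap-demo
spec:
  containers:
  - name: watcher
    image: busybox
    # keep printing the file so its appearance and later updates can be watched in the logs
    command: ["sh", "-c", "while true; do cat /etc/cm-opt/data-1 2>/dev/null || echo 'not there yet'; sleep 5; done"]
    volumeMounts:
    - name: cm-opt
      mountPath: /etc/cm-opt
  volumes:
  - name: cm-opt
    configMap:
      name: cm-created-later
      optional: true               # the pod starts even while this configMap does not exist
EOF

kubectl create configmap cm-created-later --from-literal=data-1=value-1    # appears in the volume after a kubelet sync
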
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:27:40.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug  8 12:27:41.055: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8d6bbab3-d972-11ea-aaa1-0242ac11000c" in namespace "e2e-tests-projected-j6tq4" to be "success or failure"
Aug  8 12:27:41.101: INFO: Pod "downwardapi-volume-8d6bbab3-d972-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 46.028678ms
Aug  8 12:27:43.194: INFO: Pod "downwardapi-volume-8d6bbab3-d972-11ea-aaa1-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138919091s
Aug  8 12:27:45.230: INFO: Pod "downwardapi-volume-8d6bbab3-d972-11ea-aaa1-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.174702938s
STEP: Saw pod success
Aug  8 12:27:45.230: INFO: Pod "downwardapi-volume-8d6bbab3-d972-11ea-aaa1-0242ac11000c" satisfied condition "success or failure"
Aug  8 12:27:45.233: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-8d6bbab3-d972-11ea-aaa1-0242ac11000c container client-container: 
STEP: delete the pod
Aug  8 12:27:45.256: INFO: Waiting for pod downwardapi-volume-8d6bbab3-d972-11ea-aaa1-0242ac11000c to disappear
Aug  8 12:27:45.278: INFO: Pod downwardapi-volume-8d6bbab3-d972-11ea-aaa1-0242ac11000c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:27:45.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-j6tq4" for this suite.
Aug  8 12:27:53.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:27:53.357: INFO: namespace: e2e-tests-projected-j6tq4, resource: bindings, ignored listing per whitelist
Aug  8 12:27:53.373: INFO: namespace e2e-tests-projected-j6tq4 deletion completed in 8.081271377s

• [SLOW TEST:12.542 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
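
The cpu limit reaches the container through a downwardAPI source inside a projected volume: resourceFieldRef names the container and the limits.cpu resource, and the divisor fixes the units written to the file. Sketch, names and values illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      requests:
        cpu: 250m
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m          # the file then holds the limit in millicores (500 here)
EOF
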
S
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug  8 12:27:53.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Aug  8 12:27:53.587: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-hk7kb,SelfLink:/api/v1/namespaces/e2e-tests-watch-hk7kb/configmaps/e2e-watch-test-resource-version,UID:94d51224-d972-11ea-b2c9-0242ac120008,ResourceVersion:5175306,Generation:0,CreationTimestamp:2020-08-08 12:27:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug  8 12:27:53.587: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-hk7kb,SelfLink:/api/v1/namespaces/e2e-tests-watch-hk7kb/configmaps/e2e-watch-test-resource-version,UID:94d51224-d972-11ea-b2c9-0242ac120008,ResourceVersion:5175308,Generation:0,CreationTimestamp:2020-08-08 12:27:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug  8 12:27:53.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-hk7kb" for this suite.
Aug  8 12:27:59.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  8 12:27:59.788: INFO: namespace: e2e-tests-watch-hk7kb, resource: bindings, ignored listing per whitelist
Aug  8 12:27:59.819: INFO: namespace e2e-tests-watch-hk7kb deletion completed in 6.222345475s

• [SLOW TEST:6.446 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
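
Starting a watch "from a specific resource version" is plain list/watch API behaviour: pass watch=1 plus a resourceVersion taken from an earlier response and the server replays every change after that version (as long as it is still inside the watch cache window), which is exactly the MODIFIED and DELETED events logged above. A sketch via kubectl proxy and curl; namespace and names are illustrative:

kubectl proxy --port=8001 &
sleep 1

kubectl create configmap e2e-watch-demo --from-literal=mutation=0
kubectl patch configmap e2e-watch-demo --type=merge -p '{"data":{"mutation":"1"}}'
RV=$(kubectl get configmap e2e-watch-demo -o jsonpath='{.metadata.resourceVersion}')    # version as of the first update
kubectl patch configmap e2e-watch-demo --type=merge -p '{"data":{"mutation":"2"}}'
kubectl delete configmap e2e-watch-demo

# replay everything after $RV: one MODIFIED event (mutation: 2) and one DELETED event
curl --max-time 5 "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=1&resourceVersion=${RV}&fieldSelector=metadata.name=e2e-watch-demo"
kill %1
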
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Aug  8 12:27:59.820: INFO: Running AfterSuite actions on all nodes
Aug  8 12:27:59.820: INFO: Running AfterSuite actions on node 1
Aug  8 12:27:59.820: INFO: Skipping dumping logs from cluster

Ran 200 of 2164 Specs in 6056.165 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS