I0315 20:16:38.078506 6 e2e.go:224] Starting e2e run "dfc500a4-66f9-11ea-9ccf-0242ac110012" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1584303397 - Will randomize all specs
Will run 201 of 2164 specs
Mar 15 20:16:38.256: INFO: >>> kubeConfig: /root/.kube/config
Mar 15 20:16:38.259: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 15 20:16:38.272: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 15 20:16:38.326: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 15 20:16:38.326: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 15 20:16:38.326: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 15 20:16:38.332: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 15 20:16:38.332: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 15 20:16:38.332: INFO: e2e test version: v1.13.12
Mar 15 20:16:38.333: INFO: kube-apiserver version: v1.13.12
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 20:16:38.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Mar 15 20:16:38.428: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-e04db731-66f9-11ea-9ccf-0242ac110012
STEP: Creating a pod to test consume secrets
Mar 15 20:16:38.440: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e04e1784-66f9-11ea-9ccf-0242ac110012" in namespace "e2e-tests-projected-dd9hz" to be "success or failure"
Mar 15 20:16:38.444: INFO: Pod "pod-projected-secrets-e04e1784-66f9-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 3.9921ms
Mar 15 20:16:40.538: INFO: Pod "pod-projected-secrets-e04e1784-66f9-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098302726s
Mar 15 20:16:42.541: INFO: Pod "pod-projected-secrets-e04e1784-66f9-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101600628s
Mar 15 20:16:44.544: INFO: Pod "pod-projected-secrets-e04e1784-66f9-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104370775s
Mar 15 20:16:46.548: INFO: Pod "pod-projected-secrets-e04e1784-66f9-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 8.108013342s
Mar 15 20:16:48.550: INFO: Pod "pod-projected-secrets-e04e1784-66f9-11ea-9ccf-0242ac110012": Phase="Running", Reason="", readiness=true. Elapsed: 10.110887504s
Mar 15 20:16:50.554: INFO: Pod "pod-projected-secrets-e04e1784-66f9-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.114221874s
STEP: Saw pod success
Mar 15 20:16:50.554: INFO: Pod "pod-projected-secrets-e04e1784-66f9-11ea-9ccf-0242ac110012" satisfied condition "success or failure"
Mar 15 20:16:50.556: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-e04e1784-66f9-11ea-9ccf-0242ac110012 container projected-secret-volume-test:
STEP: delete the pod
Mar 15 20:16:50.593: INFO: Waiting for pod pod-projected-secrets-e04e1784-66f9-11ea-9ccf-0242ac110012 to disappear
Mar 15 20:16:50.616: INFO: Pod pod-projected-secrets-e04e1784-66f9-11ea-9ccf-0242ac110012 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 20:16:50.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dd9hz" for this suite.
Mar 15 20:16:56.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 20:16:56.775: INFO: namespace: e2e-tests-projected-dd9hz, resource: bindings, ignored listing per whitelist
Mar 15 20:16:56.806: INFO: namespace e2e-tests-projected-dd9hz deletion completed in 6.187516467s
• [SLOW TEST:18.474 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 20:16:56.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-wvchd
[It] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Mar 15 20:16:57.240: INFO: Found 0 stateful pods, waiting for 3
Mar 15 20:17:07.244: INFO: Found 1 stateful pods, waiting for 3
Mar 15 20:17:17.292: INFO: Found 2 stateful pods, waiting for 3
Mar 15 20:17:27.244: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 15 20:17:27.244: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 15 20:17:27.244: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Mar 15 20:17:27.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wvchd ss2-1 -- /bin/sh -c mv -v
/usr/share/nginx/html/index.html /tmp/ || true' Mar 15 20:17:27.459: INFO: stderr: "I0315 20:17:27.373869 37 log.go:172] (0xc000782160) (0xc00070a5a0) Create stream\nI0315 20:17:27.373926 37 log.go:172] (0xc000782160) (0xc00070a5a0) Stream added, broadcasting: 1\nI0315 20:17:27.375850 37 log.go:172] (0xc000782160) Reply frame received for 1\nI0315 20:17:27.375892 37 log.go:172] (0xc000782160) (0xc00038ed20) Create stream\nI0315 20:17:27.375908 37 log.go:172] (0xc000782160) (0xc00038ed20) Stream added, broadcasting: 3\nI0315 20:17:27.376698 37 log.go:172] (0xc000782160) Reply frame received for 3\nI0315 20:17:27.376743 37 log.go:172] (0xc000782160) (0xc0004b6000) Create stream\nI0315 20:17:27.376756 37 log.go:172] (0xc000782160) (0xc0004b6000) Stream added, broadcasting: 5\nI0315 20:17:27.377692 37 log.go:172] (0xc000782160) Reply frame received for 5\nI0315 20:17:27.452386 37 log.go:172] (0xc000782160) Data frame received for 3\nI0315 20:17:27.452411 37 log.go:172] (0xc00038ed20) (3) Data frame handling\nI0315 20:17:27.452421 37 log.go:172] (0xc00038ed20) (3) Data frame sent\nI0315 20:17:27.452683 37 log.go:172] (0xc000782160) Data frame received for 3\nI0315 20:17:27.452732 37 log.go:172] (0xc00038ed20) (3) Data frame handling\nI0315 20:17:27.452824 37 log.go:172] (0xc000782160) Data frame received for 5\nI0315 20:17:27.452850 37 log.go:172] (0xc0004b6000) (5) Data frame handling\nI0315 20:17:27.454118 37 log.go:172] (0xc000782160) Data frame received for 1\nI0315 20:17:27.454128 37 log.go:172] (0xc00070a5a0) (1) Data frame handling\nI0315 20:17:27.454134 37 log.go:172] (0xc00070a5a0) (1) Data frame sent\nI0315 20:17:27.454297 37 log.go:172] (0xc000782160) (0xc00070a5a0) Stream removed, broadcasting: 1\nI0315 20:17:27.454325 37 log.go:172] (0xc000782160) Go away received\nI0315 20:17:27.454489 37 log.go:172] (0xc000782160) (0xc00070a5a0) Stream removed, broadcasting: 1\nI0315 20:17:27.454499 37 log.go:172] (0xc000782160) (0xc00038ed20) Stream removed, broadcasting: 3\nI0315 20:17:27.454505 37 log.go:172] (0xc000782160) (0xc0004b6000) Stream removed, broadcasting: 5\n" Mar 15 20:17:27.459: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 15 20:17:27.459: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Mar 15 20:17:38.303: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 15 20:17:48.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wvchd ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 15 20:17:48.628: INFO: stderr: "I0315 20:17:48.514778 60 log.go:172] (0xc00014c840) (0xc0007fa640) Create stream\nI0315 20:17:48.514837 60 log.go:172] (0xc00014c840) (0xc0007fa640) Stream added, broadcasting: 1\nI0315 20:17:48.517457 60 log.go:172] (0xc00014c840) Reply frame received for 1\nI0315 20:17:48.517497 60 log.go:172] (0xc00014c840) (0xc000710be0) Create stream\nI0315 20:17:48.517506 60 log.go:172] (0xc00014c840) (0xc000710be0) Stream added, broadcasting: 3\nI0315 20:17:48.518532 60 log.go:172] (0xc00014c840) Reply frame received for 3\nI0315 20:17:48.518569 60 log.go:172] (0xc00014c840) (0xc0007fa6e0) Create stream\nI0315 20:17:48.518581 60 log.go:172] (0xc00014c840) (0xc0007fa6e0) Stream added, 
broadcasting: 5\nI0315 20:17:48.519487 60 log.go:172] (0xc00014c840) Reply frame received for 5\nI0315 20:17:48.624234 60 log.go:172] (0xc00014c840) Data frame received for 5\nI0315 20:17:48.624265 60 log.go:172] (0xc0007fa6e0) (5) Data frame handling\nI0315 20:17:48.624281 60 log.go:172] (0xc00014c840) Data frame received for 3\nI0315 20:17:48.624287 60 log.go:172] (0xc000710be0) (3) Data frame handling\nI0315 20:17:48.624293 60 log.go:172] (0xc000710be0) (3) Data frame sent\nI0315 20:17:48.624298 60 log.go:172] (0xc00014c840) Data frame received for 3\nI0315 20:17:48.624303 60 log.go:172] (0xc000710be0) (3) Data frame handling\nI0315 20:17:48.625884 60 log.go:172] (0xc00014c840) Data frame received for 1\nI0315 20:17:48.625910 60 log.go:172] (0xc0007fa640) (1) Data frame handling\nI0315 20:17:48.625944 60 log.go:172] (0xc0007fa640) (1) Data frame sent\nI0315 20:17:48.625999 60 log.go:172] (0xc00014c840) (0xc0007fa640) Stream removed, broadcasting: 1\nI0315 20:17:48.626039 60 log.go:172] (0xc00014c840) Go away received\nI0315 20:17:48.626217 60 log.go:172] (0xc00014c840) (0xc0007fa640) Stream removed, broadcasting: 1\nI0315 20:17:48.626229 60 log.go:172] (0xc00014c840) (0xc000710be0) Stream removed, broadcasting: 3\nI0315 20:17:48.626235 60 log.go:172] (0xc00014c840) (0xc0007fa6e0) Stream removed, broadcasting: 5\n" Mar 15 20:17:48.628: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 15 20:17:48.628: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 15 20:17:58.650: INFO: Waiting for StatefulSet e2e-tests-statefulset-wvchd/ss2 to complete update Mar 15 20:17:58.650: INFO: Waiting for Pod e2e-tests-statefulset-wvchd/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 15 20:17:58.650: INFO: Waiting for Pod e2e-tests-statefulset-wvchd/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 15 20:17:58.650: INFO: Waiting for Pod e2e-tests-statefulset-wvchd/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 15 20:18:08.768: INFO: Waiting for StatefulSet e2e-tests-statefulset-wvchd/ss2 to complete update Mar 15 20:18:08.769: INFO: Waiting for Pod e2e-tests-statefulset-wvchd/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 15 20:18:08.769: INFO: Waiting for Pod e2e-tests-statefulset-wvchd/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 15 20:18:18.658: INFO: Waiting for StatefulSet e2e-tests-statefulset-wvchd/ss2 to complete update Mar 15 20:18:18.658: INFO: Waiting for Pod e2e-tests-statefulset-wvchd/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 15 20:18:18.658: INFO: Waiting for Pod e2e-tests-statefulset-wvchd/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 15 20:18:28.658: INFO: Waiting for StatefulSet e2e-tests-statefulset-wvchd/ss2 to complete update Mar 15 20:18:28.658: INFO: Waiting for Pod e2e-tests-statefulset-wvchd/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 15 20:18:38.659: INFO: Waiting for StatefulSet e2e-tests-statefulset-wvchd/ss2 to complete update STEP: Rolling back to a previous revision Mar 15 20:18:48.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wvchd ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 15 20:18:49.182: INFO: stderr: "I0315 20:18:48.895606 82 log.go:172] 
(0xc00014c790) (0xc00070e640) Create stream\nI0315 20:18:48.895671 82 log.go:172] (0xc00014c790) (0xc00070e640) Stream added, broadcasting: 1\nI0315 20:18:48.897908 82 log.go:172] (0xc00014c790) Reply frame received for 1\nI0315 20:18:48.897950 82 log.go:172] (0xc00014c790) (0xc0005cadc0) Create stream\nI0315 20:18:48.897971 82 log.go:172] (0xc00014c790) (0xc0005cadc0) Stream added, broadcasting: 3\nI0315 20:18:48.899022 82 log.go:172] (0xc00014c790) Reply frame received for 3\nI0315 20:18:48.899073 82 log.go:172] (0xc00014c790) (0xc0005a8000) Create stream\nI0315 20:18:48.899089 82 log.go:172] (0xc00014c790) (0xc0005a8000) Stream added, broadcasting: 5\nI0315 20:18:48.899816 82 log.go:172] (0xc00014c790) Reply frame received for 5\nI0315 20:18:49.176030 82 log.go:172] (0xc00014c790) Data frame received for 3\nI0315 20:18:49.176144 82 log.go:172] (0xc0005cadc0) (3) Data frame handling\nI0315 20:18:49.176162 82 log.go:172] (0xc0005cadc0) (3) Data frame sent\nI0315 20:18:49.176367 82 log.go:172] (0xc00014c790) Data frame received for 3\nI0315 20:18:49.176394 82 log.go:172] (0xc0005cadc0) (3) Data frame handling\nI0315 20:18:49.176489 82 log.go:172] (0xc00014c790) Data frame received for 5\nI0315 20:18:49.176500 82 log.go:172] (0xc0005a8000) (5) Data frame handling\nI0315 20:18:49.178413 82 log.go:172] (0xc00014c790) Data frame received for 1\nI0315 20:18:49.178427 82 log.go:172] (0xc00070e640) (1) Data frame handling\nI0315 20:18:49.178433 82 log.go:172] (0xc00070e640) (1) Data frame sent\nI0315 20:18:49.178445 82 log.go:172] (0xc00014c790) (0xc00070e640) Stream removed, broadcasting: 1\nI0315 20:18:49.178467 82 log.go:172] (0xc00014c790) Go away received\nI0315 20:18:49.178728 82 log.go:172] (0xc00014c790) (0xc00070e640) Stream removed, broadcasting: 1\nI0315 20:18:49.178769 82 log.go:172] (0xc00014c790) (0xc0005cadc0) Stream removed, broadcasting: 3\nI0315 20:18:49.178801 82 log.go:172] (0xc00014c790) (0xc0005a8000) Stream removed, broadcasting: 5\n" Mar 15 20:18:49.182: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 15 20:18:49.182: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 15 20:18:59.211: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 15 20:19:09.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wvchd ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 15 20:19:09.438: INFO: stderr: "I0315 20:19:09.369780 104 log.go:172] (0xc00014c790) (0xc00065f4a0) Create stream\nI0315 20:19:09.369839 104 log.go:172] (0xc00014c790) (0xc00065f4a0) Stream added, broadcasting: 1\nI0315 20:19:09.371365 104 log.go:172] (0xc00014c790) Reply frame received for 1\nI0315 20:19:09.371391 104 log.go:172] (0xc00014c790) (0xc00065f540) Create stream\nI0315 20:19:09.371398 104 log.go:172] (0xc00014c790) (0xc00065f540) Stream added, broadcasting: 3\nI0315 20:19:09.372077 104 log.go:172] (0xc00014c790) Reply frame received for 3\nI0315 20:19:09.372112 104 log.go:172] (0xc00014c790) (0xc00065f5e0) Create stream\nI0315 20:19:09.372126 104 log.go:172] (0xc00014c790) (0xc00065f5e0) Stream added, broadcasting: 5\nI0315 20:19:09.372966 104 log.go:172] (0xc00014c790) Reply frame received for 5\nI0315 20:19:09.435044 104 log.go:172] (0xc00014c790) Data frame received for 5\nI0315 20:19:09.435094 104 log.go:172] (0xc00065f5e0) (5) Data frame handling\nI0315 20:19:09.435121 
104 log.go:172] (0xc00014c790) Data frame received for 3\nI0315 20:19:09.435131 104 log.go:172] (0xc00065f540) (3) Data frame handling\nI0315 20:19:09.435140 104 log.go:172] (0xc00065f540) (3) Data frame sent\nI0315 20:19:09.435148 104 log.go:172] (0xc00014c790) Data frame received for 3\nI0315 20:19:09.435155 104 log.go:172] (0xc00065f540) (3) Data frame handling\nI0315 20:19:09.436469 104 log.go:172] (0xc00014c790) Data frame received for 1\nI0315 20:19:09.436493 104 log.go:172] (0xc00065f4a0) (1) Data frame handling\nI0315 20:19:09.436501 104 log.go:172] (0xc00065f4a0) (1) Data frame sent\nI0315 20:19:09.436509 104 log.go:172] (0xc00014c790) (0xc00065f4a0) Stream removed, broadcasting: 1\nI0315 20:19:09.436521 104 log.go:172] (0xc00014c790) Go away received\nI0315 20:19:09.436672 104 log.go:172] (0xc00014c790) (0xc00065f4a0) Stream removed, broadcasting: 1\nI0315 20:19:09.436688 104 log.go:172] (0xc00014c790) (0xc00065f540) Stream removed, broadcasting: 3\nI0315 20:19:09.436697 104 log.go:172] (0xc00014c790) (0xc00065f5e0) Stream removed, broadcasting: 5\n" Mar 15 20:19:09.438: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 15 20:19:09.438: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 15 20:19:19.766: INFO: Waiting for StatefulSet e2e-tests-statefulset-wvchd/ss2 to complete update Mar 15 20:19:19.766: INFO: Waiting for Pod e2e-tests-statefulset-wvchd/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Mar 15 20:19:19.766: INFO: Waiting for Pod e2e-tests-statefulset-wvchd/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Mar 15 20:19:29.775: INFO: Waiting for StatefulSet e2e-tests-statefulset-wvchd/ss2 to complete update Mar 15 20:19:29.775: INFO: Waiting for Pod e2e-tests-statefulset-wvchd/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Mar 15 20:19:39.774: INFO: Waiting for StatefulSet e2e-tests-statefulset-wvchd/ss2 to complete update Mar 15 20:19:39.774: INFO: Waiting for Pod e2e-tests-statefulset-wvchd/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 15 20:19:49.774: INFO: Deleting all statefulset in ns e2e-tests-statefulset-wvchd Mar 15 20:19:49.776: INFO: Scaling statefulset ss2 to 0 Mar 15 20:20:19.813: INFO: Waiting for statefulset status.replicas updated to 0 Mar 15 20:20:19.816: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:20:19.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-wvchd" for this suite. 
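For readers who want to reproduce the rolling-update and rollback scenario above outside the e2e framework, the following Go sketch shows a StatefulSet shaped like the ss2 set the test drives: three nginx replicas behind the headless service "test", updated by editing the pod template image (1.14-alpine to 1.15-alpine and back). The label set, package name, and helper function are illustrative assumptions; only the object names, images, and service name visible in the log are taken from it.

package sketch

import (
    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newSS2 builds a three-replica nginx StatefulSet with the RollingUpdate
// strategy, so changing Spec.Template (for example the image, as the test
// above does) creates a new controller revision and replaces pods one by one
// in reverse ordinal order.
func newSS2(namespace string) *appsv1.StatefulSet {
    labels := map[string]string{"app": "ss2"} // assumed label set
    replicas := int32(3)
    return &appsv1.StatefulSet{
        ObjectMeta: metav1.ObjectMeta{Name: "ss2", Namespace: namespace},
        Spec: appsv1.StatefulSetSpec{
            Replicas:    &replicas,
            ServiceName: "test", // headless service created earlier in the log
            Selector:    &metav1.LabelSelector{MatchLabels: labels},
            UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
                Type: appsv1.RollingUpdateStatefulSetStrategyType,
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "nginx",
                        Image: "docker.io/library/nginx:1.14-alpine",
                    }},
                },
            },
        },
    }
}

Each template edit produces a new ControllerRevision (ss2-7c9b54fd4c and ss2-6c5cd755cd in the log), and the controller converges the pods onto the target revision, which is what the repeated "Waiting for Pod ... to have revision ... update revision ..." lines are polling for.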
Mar 15 20:20:27.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 20:20:27.895: INFO: namespace: e2e-tests-statefulset-wvchd, resource: bindings, ignored listing per whitelist
Mar 15 20:20:27.951: INFO: namespace e2e-tests-statefulset-wvchd deletion completed in 8.11667995s
• [SLOW TEST:211.145 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 20:20:27.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 15 20:20:28.078: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Mar 15 20:20:28.090: INFO: Number of nodes with available pods: 0
Mar 15 20:20:28.090: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
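The step above relies on the DaemonSet's pod-template nodeSelector: daemon pods are only scheduled onto nodes whose labels match, so relabelling a node in or out of the selected set is what makes pods appear and disappear in the polling that follows. A minimal Go sketch of such a DaemonSet is below; the label key "color", the pod labels, and the image are assumptions for illustration, since the log does not show the exact selector the test uses.

package sketch

import (
    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newNodeSelectorDaemonSet builds a DaemonSet whose pods only land on nodes
// carrying the given label value; changing the node label or the selector
// value makes the controller start or evict daemon pods accordingly.
func newNodeSelectorDaemonSet(namespace, color string) *appsv1.DaemonSet {
    labels := map[string]string{"app": "daemon-set"} // assumed pod labels
    return &appsv1.DaemonSet{
        ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: namespace},
        Spec: appsv1.DaemonSetSpec{
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
                Type: appsv1.RollingUpdateDaemonSetStrategyType,
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    // "color" is an assumed label key; the log only shows the
                    // values (blue, then green) being toggled on the node.
                    NodeSelector: map[string]string{"color": color},
                    Containers: []corev1.Container{{
                        Name:  "app",
                        Image: "docker.io/library/nginx:1.14-alpine", // assumed image
                    }},
                },
            },
        },
    }
}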
Mar 15 20:20:28.155: INFO: Number of nodes with available pods: 0
Mar 15 20:20:28.155: INFO: Node hunter-worker is running more than one daemon pod
Mar 15 20:20:29.159: INFO: Number of nodes with available pods: 0
Mar 15 20:20:29.159: INFO: Node hunter-worker is running more than one daemon pod
Mar 15 20:20:30.159: INFO: Number of nodes with available pods: 0
Mar 15 20:20:30.159: INFO: Node hunter-worker is running more than one daemon pod
Mar 15 20:20:31.159: INFO: Number of nodes with available pods: 1
Mar 15 20:20:31.159: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Mar 15 20:20:31.187: INFO: Number of nodes with available pods: 1
Mar 15 20:20:31.187: INFO: Number of running nodes: 0, number of available pods: 1
Mar 15 20:20:32.191: INFO: Number of nodes with available pods: 0
Mar 15 20:20:32.191: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Mar 15 20:20:32.240: INFO: Number of nodes with available pods: 0
Mar 15 20:20:32.240: INFO: Node hunter-worker is running more than one daemon pod
Mar 15 20:20:33.264: INFO: Number of nodes with available pods: 0
Mar 15 20:20:33.264: INFO: Node hunter-worker is running more than one daemon pod
Mar 15 20:20:34.244: INFO: Number of nodes with available pods: 0
Mar 15 20:20:34.244: INFO: Node hunter-worker is running more than one daemon pod
Mar 15 20:20:35.244: INFO: Number of nodes with available pods: 0
Mar 15 20:20:35.244: INFO: Node hunter-worker is running more than one daemon pod
Mar 15 20:20:36.244: INFO: Number of nodes with available pods: 0
Mar 15 20:20:36.244: INFO: Node hunter-worker is running more than one daemon pod
Mar 15 20:20:37.324: INFO: Number of nodes with available pods: 0
Mar 15 20:20:37.324: INFO: Node hunter-worker is running more than one daemon pod
Mar 15 20:20:38.244: INFO: Number of nodes with available pods: 0
Mar 15 20:20:38.244: INFO: Node hunter-worker is running more than one daemon pod
Mar 15 20:20:39.426: INFO: Number of nodes with available pods: 0
Mar 15 20:20:39.426: INFO: Node hunter-worker is running more than one daemon pod
Mar 15 20:20:40.243: INFO: Number of nodes with available pods: 0
Mar 15 20:20:40.243: INFO: Node hunter-worker is running more than one daemon pod
Mar 15 20:20:41.246: INFO: Number of nodes with available pods: 0
Mar 15 20:20:41.246: INFO: Node hunter-worker is running more than one daemon pod
Mar 15 20:20:42.243: INFO: Number of nodes with available pods: 0
Mar 15 20:20:42.243: INFO: Node hunter-worker is running more than one daemon pod
Mar 15 20:20:43.286: INFO: Number of nodes with available pods: 0
Mar 15 20:20:43.286: INFO: Node hunter-worker is running more than one daemon pod
Mar 15 20:20:44.244: INFO: Number of nodes with available pods: 0
Mar 15 20:20:44.244: INFO: Node hunter-worker is running more than one daemon pod
Mar 15 20:20:45.285: INFO: Number of nodes with available pods: 0
Mar 15 20:20:45.285: INFO: Node hunter-worker is running more than one daemon pod
Mar 15 20:20:46.244: INFO: Number of nodes with available pods: 0
Mar 15 20:20:46.244: INFO: Node hunter-worker is running more than one daemon pod
Mar 15 20:20:47.576: INFO: Number of nodes with available pods: 1
Mar 15 20:20:47.576: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-gc2rx, will wait for the garbage collector to delete the pods
Mar 15 20:20:47.776: INFO: Deleting DaemonSet.extensions daemon-set took: 78.273831ms
Mar 15 20:20:48.077: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.257444ms
Mar 15 20:20:51.980: INFO: Number of nodes with available pods: 0
Mar 15 20:20:51.980: INFO: Number of running nodes: 0, number of available pods: 0
Mar 15 20:20:51.986: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-gc2rx/daemonsets","resourceVersion":"14473"},"items":null}
Mar 15 20:20:51.989: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-gc2rx/pods","resourceVersion":"14473"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 20:20:52.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-gc2rx" for this suite.
Mar 15 20:20:58.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 20:20:58.091: INFO: namespace: e2e-tests-daemonsets-gc2rx, resource: bindings, ignored listing per whitelist
Mar 15 20:20:58.111: INFO: namespace e2e-tests-daemonsets-gc2rx deletion completed in 6.090418795s
• [SLOW TEST:30.160 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 20:20:58.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Mar 15 20:20:58.751: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 20:21:08.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-jt7jp" for this suite.
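The init-container test that just finished only logs "PodSpec: initContainers in spec.initContainers", so for orientation here is a minimal Go sketch of the kind of pod it creates: init containers that must each run to completion, in order, before the regular container starts, on a pod with RestartPolicy Never. Names, images, and commands are illustrative assumptions rather than the test's exact values.

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newRestartNeverInitPod returns a pod whose init containers run sequentially
// to completion before the main container starts; with RestartPolicy Never a
// failing init container leaves the pod in a terminal failed phase instead of
// being retried.
func newRestartNeverInitPod(namespace string) *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo", Namespace: namespace},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            InitContainers: []corev1.Container{
                {Name: "init1", Image: "busybox", Command: []string{"/bin/true"}},
                {Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
            },
            Containers: []corev1.Container{
                {Name: "run1", Image: "busybox", Command: []string{"/bin/true"}},
            },
        },
    }
}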
Mar 15 20:21:14.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 20:21:15.014: INFO: namespace: e2e-tests-init-container-jt7jp, resource: bindings, ignored listing per whitelist
Mar 15 20:21:15.055: INFO: namespace e2e-tests-init-container-jt7jp deletion completed in 6.096321876s
• [SLOW TEST:16.944 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 20:21:15.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 15 20:21:15.169: INFO: Pod name rollover-pod: Found 0 pods out of 1
Mar 15 20:21:20.174: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Mar 15 20:21:20.174: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Mar 15 20:21:22.178: INFO: Creating deployment "test-rollover-deployment"
Mar 15 20:21:22.190: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Mar 15 20:21:24.283: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Mar 15 20:21:24.289: INFO: Ensure that both replica sets have 1 created replica
Mar 15 20:21:24.293: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Mar 15 20:21:24.299: INFO: Updating deployment test-rollover-deployment
Mar 15 20:21:24.299: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Mar 15 20:21:26.343: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Mar 15 20:21:26.349: INFO: Make sure deployment "test-rollover-deployment" is complete
Mar 15 20:21:26.354: INFO: all replica sets need to contain the pod-template-hash label
Mar 15 20:21:26.354: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900482, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900482, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900485, loc:(*time.Location)(0x7950ac0)}},
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900482, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 15 20:21:28.472: INFO: all replica sets need to contain the pod-template-hash label Mar 15 20:21:28.472: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900482, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900482, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900485, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900482, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 15 20:21:31.027: INFO: all replica sets need to contain the pod-template-hash label Mar 15 20:21:31.027: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900482, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900482, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900485, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900482, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 15 20:21:32.664: INFO: all replica sets need to contain the pod-template-hash label Mar 15 20:21:32.664: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900482, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900482, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900491, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900482, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 15 20:21:34.364: INFO: all replica sets need to contain the pod-template-hash label Mar 15 20:21:34.364: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900482, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900482, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900491, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900482, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 15 20:21:36.362: INFO: all replica sets need to contain the pod-template-hash label Mar 15 20:21:36.362: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900482, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900482, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900491, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900482, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 15 20:21:38.361: INFO: all replica sets need to contain the pod-template-hash label Mar 15 20:21:38.361: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900482, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900482, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900491, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900482, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 15 20:21:40.362: INFO: all replica sets need to contain the pod-template-hash label Mar 15 20:21:40.362: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900482, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900482, 
loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900491, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900482, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 15 20:21:42.361: INFO: Mar 15 20:21:42.361: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 15 20:21:42.369: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-6qs29,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6qs29/deployments/test-rollover-deployment,UID:896e7b1a-66fa-11ea-99e8-0242ac110002,ResourceVersion:14700,Generation:2,CreationTimestamp:2020-03-15 20:21:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-15 20:21:22 +0000 UTC 2020-03-15 20:21:22 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} 
{Progressing True 2020-03-15 20:21:41 +0000 UTC 2020-03-15 20:21:22 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Mar 15 20:21:42.373: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-6qs29,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6qs29/replicasets/test-rollover-deployment-5b8479fdb6,UID:8ab22346-66fa-11ea-99e8-0242ac110002,ResourceVersion:14691,Generation:2,CreationTimestamp:2020-03-15 20:21:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 896e7b1a-66fa-11ea-99e8-0242ac110002 0xc001436c97 0xc001436c98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 15 20:21:42.373: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 15 20:21:42.373: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-6qs29,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6qs29/replicasets/test-rollover-controller,UID:853cfde6-66fa-11ea-99e8-0242ac110002,ResourceVersion:14699,Generation:2,CreationTimestamp:2020-03-15 20:21:15 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 896e7b1a-66fa-11ea-99e8-0242ac110002 0xc001436b07 0xc001436b08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 15 20:21:42.374: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-6qs29,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6qs29/replicasets/test-rollover-deployment-58494b7559,UID:8971d964-66fa-11ea-99e8-0242ac110002,ResourceVersion:14653,Generation:2,CreationTimestamp:2020-03-15 20:21:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 896e7b1a-66fa-11ea-99e8-0242ac110002 0xc001436bc7 0xc001436bc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 
58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 15 20:21:42.377: INFO: Pod "test-rollover-deployment-5b8479fdb6-992rc" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-992rc,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-6qs29,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6qs29/pods/test-rollover-deployment-5b8479fdb6-992rc,UID:8af85935-66fa-11ea-99e8-0242ac110002,ResourceVersion:14669,Generation:0,CreationTimestamp:2020-03-15 20:21:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 8ab22346-66fa-11ea-99e8-0242ac110002 0xc001437837 0xc001437838}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p45d8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p45d8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-p45d8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0014378b0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0014378d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:21:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:21:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:21:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:21:24 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.12,StartTime:2020-03-15 20:21:25 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-15 20:21:29 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://610803932b790f2c26478e402b1466684971aa01d967bd3504716771e1a84214}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 20:21:42.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-6qs29" for this suite.
Mar 15 20:21:50.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 20:21:50.418: INFO: namespace: e2e-tests-deployment-6qs29, resource: bindings, ignored listing per whitelist
Mar 15 20:21:50.539: INFO: namespace e2e-tests-deployment-6qs29 deletion completed in 8.158790086s
• [SLOW TEST:35.483 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 20:21:50.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 15 20:21:50.687: INFO: Waiting up to 5m0s for pod "pod-9a6345be-66fa-11ea-9ccf-0242ac110012" in namespace "e2e-tests-emptydir-54twq" to be "success or failure"
Mar 15 20:21:50.699: INFO: Pod "pod-9a6345be-66fa-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 12.078896ms
Mar 15 20:21:52.709: INFO: Pod "pod-9a6345be-66fa-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022268682s
Mar 15 20:21:54.714: INFO: Pod "pod-9a6345be-66fa-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02646104s
Mar 15 20:21:56.718: INFO: Pod "pod-9a6345be-66fa-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030785909s
STEP: Saw pod success
Mar 15 20:21:56.718: INFO: Pod "pod-9a6345be-66fa-11ea-9ccf-0242ac110012" satisfied condition "success or failure"
Mar 15 20:21:56.723: INFO: Trying to get logs from node hunter-worker2 pod pod-9a6345be-66fa-11ea-9ccf-0242ac110012 container test-container:
STEP: delete the pod
Mar 15 20:21:56.772: INFO: Waiting for pod pod-9a6345be-66fa-11ea-9ccf-0242ac110012 to disappear
Mar 15 20:21:56.816: INFO: Pod pod-9a6345be-66fa-11ea-9ccf-0242ac110012 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 20:21:56.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-54twq" for this suite.
Mar 15 20:22:02.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 20:22:02.844: INFO: namespace: e2e-tests-emptydir-54twq, resource: bindings, ignored listing per whitelist
Mar 15 20:22:02.917: INFO: namespace e2e-tests-emptydir-54twq deletion completed in 6.098729329s
• [SLOW TEST:12.378 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 20:22:02.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-a1c6b2d1-66fa-11ea-9ccf-0242ac110012
STEP: Creating a pod to test consume secrets
Mar 15 20:22:03.046: INFO: Waiting up to 5m0s for pod "pod-secrets-a1c9171f-66fa-11ea-9ccf-0242ac110012" in namespace "e2e-tests-secrets-l8dhb" to be "success or failure"
Mar 15 20:22:03.050: INFO: Pod "pod-secrets-a1c9171f-66fa-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 4.371922ms
Mar 15 20:22:05.055: INFO: Pod "pod-secrets-a1c9171f-66fa-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008829581s
Mar 15 20:22:07.058: INFO: Pod "pod-secrets-a1c9171f-66fa-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012359355s
Mar 15 20:22:09.062: INFO: Pod "pod-secrets-a1c9171f-66fa-11ea-9ccf-0242ac110012": Phase="Running", Reason="", readiness=true. Elapsed: 6.016477322s
Mar 15 20:22:11.067: INFO: Pod "pod-secrets-a1c9171f-66fa-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.020626077s
STEP: Saw pod success
Mar 15 20:22:11.067: INFO: Pod "pod-secrets-a1c9171f-66fa-11ea-9ccf-0242ac110012" satisfied condition "success or failure"
Mar 15 20:22:11.070: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-a1c9171f-66fa-11ea-9ccf-0242ac110012 container secret-env-test:
STEP: delete the pod
Mar 15 20:22:11.130: INFO: Waiting for pod pod-secrets-a1c9171f-66fa-11ea-9ccf-0242ac110012 to disappear
Mar 15 20:22:11.146: INFO: Pod pod-secrets-a1c9171f-66fa-11ea-9ccf-0242ac110012 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 20:22:11.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-l8dhb" for this suite.
Mar 15 20:22:17.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 20:22:17.282: INFO: namespace: e2e-tests-secrets-l8dhb, resource: bindings, ignored listing per whitelist
Mar 15 20:22:17.287: INFO: namespace e2e-tests-secrets-l8dhb deletion completed in 6.137805793s
• [SLOW TEST:14.369 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 20:22:17.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 15 20:22:17.391: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aa53e087-66fa-11ea-9ccf-0242ac110012" in namespace "e2e-tests-downward-api-km8pd" to be "success or failure"
Mar 15 20:22:17.398: INFO: Pod "downwardapi-volume-aa53e087-66fa-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 6.676599ms
Mar 15 20:22:19.403: INFO: Pod "downwardapi-volume-aa53e087-66fa-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011112903s
Mar 15 20:22:21.407: INFO: Pod "downwardapi-volume-aa53e087-66fa-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.015019864s STEP: Saw pod success Mar 15 20:22:21.407: INFO: Pod "downwardapi-volume-aa53e087-66fa-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 20:22:21.409: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-aa53e087-66fa-11ea-9ccf-0242ac110012 container client-container: STEP: delete the pod Mar 15 20:22:21.438: INFO: Waiting for pod downwardapi-volume-aa53e087-66fa-11ea-9ccf-0242ac110012 to disappear Mar 15 20:22:21.448: INFO: Pod downwardapi-volume-aa53e087-66fa-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:22:21.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-km8pd" for this suite. Mar 15 20:22:27.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:22:27.500: INFO: namespace: e2e-tests-downward-api-km8pd, resource: bindings, ignored listing per whitelist Mar 15 20:22:27.536: INFO: namespace e2e-tests-downward-api-km8pd deletion completed in 6.084490835s • [SLOW TEST:10.249 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:22:27.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Mar 15 20:22:27.620: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 15 20:22:27.638: INFO: Waiting for terminating namespaces to be deleted... 
Mar 15 20:22:27.640: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Mar 15 20:22:27.648: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) Mar 15 20:22:27.648: INFO: Container kube-proxy ready: true, restart count 0 Mar 15 20:22:27.648: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Mar 15 20:22:27.648: INFO: Container kindnet-cni ready: true, restart count 0 Mar 15 20:22:27.648: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Mar 15 20:22:27.648: INFO: Container coredns ready: true, restart count 0 Mar 15 20:22:27.648: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Mar 15 20:22:27.652: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Mar 15 20:22:27.652: INFO: Container kindnet-cni ready: true, restart count 0 Mar 15 20:22:27.652: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Mar 15 20:22:27.652: INFO: Container kube-proxy ready: true, restart count 0 Mar 15 20:22:27.652: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Mar 15 20:22:27.652: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15fc93dd4ff9e2a2], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:22:28.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-46wcm" for this suite. 
Mar 15 20:22:34.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:22:34.767: INFO: namespace: e2e-tests-sched-pred-46wcm, resource: bindings, ignored listing per whitelist Mar 15 20:22:34.810: INFO: namespace e2e-tests-sched-pred-46wcm deletion completed in 6.129819759s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.274 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:22:34.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-b4c5ba98-66fa-11ea-9ccf-0242ac110012 STEP: Creating a pod to test consume secrets Mar 15 20:22:34.906: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b4c67575-66fa-11ea-9ccf-0242ac110012" in namespace "e2e-tests-projected-grjqj" to be "success or failure" Mar 15 20:22:34.919: INFO: Pod "pod-projected-secrets-b4c67575-66fa-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 12.375918ms Mar 15 20:22:36.921: INFO: Pod "pod-projected-secrets-b4c67575-66fa-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015183807s Mar 15 20:22:38.967: INFO: Pod "pod-projected-secrets-b4c67575-66fa-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060608665s STEP: Saw pod success Mar 15 20:22:38.967: INFO: Pod "pod-projected-secrets-b4c67575-66fa-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 20:22:38.970: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-b4c67575-66fa-11ea-9ccf-0242ac110012 container projected-secret-volume-test: STEP: delete the pod Mar 15 20:22:39.328: INFO: Waiting for pod pod-projected-secrets-b4c67575-66fa-11ea-9ccf-0242ac110012 to disappear Mar 15 20:22:39.475: INFO: Pod pod-projected-secrets-b4c67575-66fa-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:22:39.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-grjqj" for this suite. 
Mar 15 20:22:45.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:22:45.631: INFO: namespace: e2e-tests-projected-grjqj, resource: bindings, ignored listing per whitelist Mar 15 20:22:45.674: INFO: namespace e2e-tests-projected-grjqj deletion completed in 6.194717713s • [SLOW TEST:10.863 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:22:45.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 20:22:45.891: INFO: Creating ReplicaSet my-hostname-basic-bb53ead4-66fa-11ea-9ccf-0242ac110012 Mar 15 20:22:45.953: INFO: Pod name my-hostname-basic-bb53ead4-66fa-11ea-9ccf-0242ac110012: Found 0 pods out of 1 Mar 15 20:22:51.296: INFO: Pod name my-hostname-basic-bb53ead4-66fa-11ea-9ccf-0242ac110012: Found 1 pods out of 1 Mar 15 20:22:51.296: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-bb53ead4-66fa-11ea-9ccf-0242ac110012" is running Mar 15 20:22:53.302: INFO: Pod "my-hostname-basic-bb53ead4-66fa-11ea-9ccf-0242ac110012-hk2xk" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-15 20:22:46 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-15 20:22:46 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-bb53ead4-66fa-11ea-9ccf-0242ac110012]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-15 20:22:46 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-bb53ead4-66fa-11ea-9ccf-0242ac110012]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-15 20:22:45 +0000 UTC Reason: Message:}]) Mar 15 20:22:53.302: INFO: Trying to dial the pod Mar 15 20:22:58.312: INFO: Controller my-hostname-basic-bb53ead4-66fa-11ea-9ccf-0242ac110012: Got expected result from replica 1 [my-hostname-basic-bb53ead4-66fa-11ea-9ccf-0242ac110012-hk2xk]: "my-hostname-basic-bb53ead4-66fa-11ea-9ccf-0242ac110012-hk2xk", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:22:58.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-zx98d" for this suite. 
Mar 15 20:23:04.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:23:04.385: INFO: namespace: e2e-tests-replicaset-zx98d, resource: bindings, ignored listing per whitelist Mar 15 20:23:04.418: INFO: namespace e2e-tests-replicaset-zx98d deletion completed in 6.102103662s • [SLOW TEST:18.744 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:23:04.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Mar 15 20:23:04.536: INFO: Waiting up to 5m0s for pod "downward-api-c66d9bfb-66fa-11ea-9ccf-0242ac110012" in namespace "e2e-tests-downward-api-8dznp" to be "success or failure" Mar 15 20:23:04.540: INFO: Pod "downward-api-c66d9bfb-66fa-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 3.799611ms Mar 15 20:23:06.544: INFO: Pod "downward-api-c66d9bfb-66fa-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008004142s Mar 15 20:23:08.547: INFO: Pod "downward-api-c66d9bfb-66fa-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011613048s Mar 15 20:23:10.551: INFO: Pod "downward-api-c66d9bfb-66fa-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015070268s STEP: Saw pod success Mar 15 20:23:10.551: INFO: Pod "downward-api-c66d9bfb-66fa-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 20:23:10.554: INFO: Trying to get logs from node hunter-worker2 pod downward-api-c66d9bfb-66fa-11ea-9ccf-0242ac110012 container dapi-container: STEP: delete the pod Mar 15 20:23:10.627: INFO: Waiting for pod downward-api-c66d9bfb-66fa-11ea-9ccf-0242ac110012 to disappear Mar 15 20:23:10.635: INFO: Pod downward-api-c66d9bfb-66fa-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:23:10.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-8dznp" for this suite. 
Mar 15 20:23:16.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:23:16.688: INFO: namespace: e2e-tests-downward-api-8dznp, resource: bindings, ignored listing per whitelist Mar 15 20:23:16.737: INFO: namespace e2e-tests-downward-api-8dznp deletion completed in 6.098826925s • [SLOW TEST:12.319 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:23:16.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Mar 15 20:23:16.927: INFO: Waiting up to 5m0s for pod "client-containers-cdce4e79-66fa-11ea-9ccf-0242ac110012" in namespace "e2e-tests-containers-l44jb" to be "success or failure" Mar 15 20:23:16.944: INFO: Pod "client-containers-cdce4e79-66fa-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 16.95526ms Mar 15 20:23:18.948: INFO: Pod "client-containers-cdce4e79-66fa-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020476107s Mar 15 20:23:20.952: INFO: Pod "client-containers-cdce4e79-66fa-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024447791s Mar 15 20:23:22.955: INFO: Pod "client-containers-cdce4e79-66fa-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028194997s Mar 15 20:23:24.960: INFO: Pod "client-containers-cdce4e79-66fa-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.032375526s STEP: Saw pod success Mar 15 20:23:24.960: INFO: Pod "client-containers-cdce4e79-66fa-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 20:23:24.963: INFO: Trying to get logs from node hunter-worker pod client-containers-cdce4e79-66fa-11ea-9ccf-0242ac110012 container test-container: STEP: delete the pod Mar 15 20:23:24.982: INFO: Waiting for pod client-containers-cdce4e79-66fa-11ea-9ccf-0242ac110012 to disappear Mar 15 20:23:24.986: INFO: Pod client-containers-cdce4e79-66fa-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:23:24.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-l44jb" for this suite. 
Mar 15 20:23:31.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:23:31.032: INFO: namespace: e2e-tests-containers-l44jb, resource: bindings, ignored listing per whitelist Mar 15 20:23:31.068: INFO: namespace e2e-tests-containers-l44jb deletion completed in 6.078904299s • [SLOW TEST:14.330 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:23:31.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 15 20:23:31.216: INFO: Waiting up to 5m0s for pod "pod-d65739da-66fa-11ea-9ccf-0242ac110012" in namespace "e2e-tests-emptydir-rbn2m" to be "success or failure" Mar 15 20:23:31.237: INFO: Pod "pod-d65739da-66fa-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 20.246387ms Mar 15 20:23:33.240: INFO: Pod "pod-d65739da-66fa-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02411783s Mar 15 20:23:35.245: INFO: Pod "pod-d65739da-66fa-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028245266s Mar 15 20:23:37.788: INFO: Pod "pod-d65739da-66fa-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.571804714s STEP: Saw pod success Mar 15 20:23:37.788: INFO: Pod "pod-d65739da-66fa-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 20:23:37.798: INFO: Trying to get logs from node hunter-worker2 pod pod-d65739da-66fa-11ea-9ccf-0242ac110012 container test-container: STEP: delete the pod Mar 15 20:23:38.005: INFO: Waiting for pod pod-d65739da-66fa-11ea-9ccf-0242ac110012 to disappear Mar 15 20:23:38.031: INFO: Pod pod-d65739da-66fa-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:23:38.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-rbn2m" for this suite. 
Mar 15 20:23:44.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:23:44.129: INFO: namespace: e2e-tests-emptydir-rbn2m, resource: bindings, ignored listing per whitelist Mar 15 20:23:44.160: INFO: namespace e2e-tests-emptydir-rbn2m deletion completed in 6.124927549s • [SLOW TEST:13.092 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:23:44.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-s8jn6 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-s8jn6 STEP: Deleting pre-stop pod Mar 15 20:24:03.664: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:24:03.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-s8jn6" for this suite. 
Mar 15 20:24:41.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:24:41.761: INFO: namespace: e2e-tests-prestop-s8jn6, resource: bindings, ignored listing per whitelist Mar 15 20:24:41.812: INFO: namespace e2e-tests-prestop-s8jn6 deletion completed in 38.092228941s • [SLOW TEST:57.652 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:24:41.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 15 20:24:43.324: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 20:24:43.327: INFO: Number of nodes with available pods: 0 Mar 15 20:24:43.327: INFO: Node hunter-worker is running more than one daemon pod Mar 15 20:24:44.332: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 20:24:44.335: INFO: Number of nodes with available pods: 0 Mar 15 20:24:44.335: INFO: Node hunter-worker is running more than one daemon pod Mar 15 20:24:45.364: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 20:24:45.367: INFO: Number of nodes with available pods: 0 Mar 15 20:24:45.367: INFO: Node hunter-worker is running more than one daemon pod Mar 15 20:24:46.346: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 20:24:46.348: INFO: Number of nodes with available pods: 0 Mar 15 20:24:46.348: INFO: Node hunter-worker is running more than one daemon pod Mar 15 20:24:47.351: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 20:24:47.354: INFO: Number of nodes with available pods: 2 Mar 15 20:24:47.354: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Mar 15 20:24:47.366: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 20:24:47.371: INFO: Number of nodes with available pods: 2 Mar 15 20:24:47.371: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-4575p, will wait for the garbage collector to delete the pods Mar 15 20:24:48.515: INFO: Deleting DaemonSet.extensions daemon-set took: 5.740293ms Mar 15 20:24:48.815: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.320959ms Mar 15 20:24:52.418: INFO: Number of nodes with available pods: 0 Mar 15 20:24:52.418: INFO: Number of running nodes: 0, number of available pods: 0 Mar 15 20:24:52.421: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-4575p/daemonsets","resourceVersion":"15434"},"items":null} Mar 15 20:24:52.424: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-4575p/pods","resourceVersion":"15434"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:24:52.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-4575p" for this suite. Mar 15 20:24:58.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:24:58.468: INFO: namespace: e2e-tests-daemonsets-4575p, resource: bindings, ignored listing per whitelist Mar 15 20:24:58.532: INFO: namespace e2e-tests-daemonsets-4575p deletion completed in 6.097684696s • [SLOW TEST:16.720 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:24:58.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-jmmfw/configmap-test-0a715e6a-66fb-11ea-9ccf-0242ac110012 STEP: Creating a pod to test consume configMaps Mar 15 20:24:58.657: INFO: Waiting up to 5m0s for pod "pod-configmaps-0a756917-66fb-11ea-9ccf-0242ac110012" in namespace "e2e-tests-configmap-jmmfw" to be "success or failure" Mar 15 20:24:58.661: INFO: Pod 
"pod-configmaps-0a756917-66fb-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 4.364901ms Mar 15 20:25:00.665: INFO: Pod "pod-configmaps-0a756917-66fb-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007814075s Mar 15 20:25:02.668: INFO: Pod "pod-configmaps-0a756917-66fb-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011734806s STEP: Saw pod success Mar 15 20:25:02.669: INFO: Pod "pod-configmaps-0a756917-66fb-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 20:25:02.671: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-0a756917-66fb-11ea-9ccf-0242ac110012 container env-test: STEP: delete the pod Mar 15 20:25:02.703: INFO: Waiting for pod pod-configmaps-0a756917-66fb-11ea-9ccf-0242ac110012 to disappear Mar 15 20:25:02.728: INFO: Pod pod-configmaps-0a756917-66fb-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:25:02.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-jmmfw" for this suite. Mar 15 20:25:08.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:25:08.788: INFO: namespace: e2e-tests-configmap-jmmfw, resource: bindings, ignored listing per whitelist Mar 15 20:25:08.822: INFO: namespace e2e-tests-configmap-jmmfw deletion completed in 6.090235332s • [SLOW TEST:10.290 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:25:08.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 20:25:08.914: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 15 20:25:08.941: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 15 20:25:13.945: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 15 20:25:13.945: INFO: Creating deployment "test-rolling-update-deployment" Mar 15 20:25:13.949: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 15 20:25:13.968: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 15 20:25:15.976: INFO: Ensuring status for deployment 
"test-rolling-update-deployment" is the expected Mar 15 20:25:15.979: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900714, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900714, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900714, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719900713, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 15 20:25:17.998: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 15 20:25:18.007: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-mgst5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-mgst5/deployments/test-rolling-update-deployment,UID:13935444-66fb-11ea-99e8-0242ac110002,ResourceVersion:15578,Generation:1,CreationTimestamp:2020-03-15 20:25:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-15 20:25:14 +0000 UTC 2020-03-15 20:25:14 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-15 20:25:17 +0000 UTC 2020-03-15 20:25:13 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Mar 15 20:25:18.010: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-mgst5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-mgst5/replicasets/test-rolling-update-deployment-75db98fb4c,UID:1397655c-66fb-11ea-99e8-0242ac110002,ResourceVersion:15569,Generation:1,CreationTimestamp:2020-03-15 20:25:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 13935444-66fb-11ea-99e8-0242ac110002 0xc001e63577 0xc001e63578}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 15 20:25:18.010: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 15 20:25:18.010: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-mgst5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-mgst5/replicasets/test-rolling-update-controller,UID:10939a54-66fb-11ea-99e8-0242ac110002,ResourceVersion:15577,Generation:2,CreationTimestamp:2020-03-15 20:25:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 13935444-66fb-11ea-99e8-0242ac110002 0xc001e634b7 0xc001e634b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 15 20:25:18.013: INFO: Pod "test-rolling-update-deployment-75db98fb4c-9lfbn" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-9lfbn,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-mgst5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mgst5/pods/test-rolling-update-deployment-75db98fb4c-9lfbn,UID:13982b43-66fb-11ea-99e8-0242ac110002,ResourceVersion:15568,Generation:0,CreationTimestamp:2020-03-15 20:25:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 1397655c-66fb-11ea-99e8-0242ac110002 0xc001e63e57 0xc001e63e58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2bh22 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2bh22,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-2bh22 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e63ed0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e63ef0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:25:14 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:25:17 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:25:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:25:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.23,StartTime:2020-03-15 20:25:14 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-15 20:25:16 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://5751b4a7e596926bb41bf27a412e4473cabc2a6b160978c39d8d0f87b3ba344e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:25:18.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-mgst5" for this suite. 
Mar 15 20:25:24.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:25:24.083: INFO: namespace: e2e-tests-deployment-mgst5, resource: bindings, ignored listing per whitelist Mar 15 20:25:24.126: INFO: namespace e2e-tests-deployment-mgst5 deletion completed in 6.110342221s • [SLOW TEST:15.304 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:25:24.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Mar 15 20:25:24.236: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 15 20:25:24.245: INFO: Waiting for terminating namespaces to be deleted... Mar 15 20:25:24.248: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Mar 15 20:25:24.254: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) Mar 15 20:25:24.254: INFO: Container kube-proxy ready: true, restart count 0 Mar 15 20:25:24.254: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Mar 15 20:25:24.254: INFO: Container kindnet-cni ready: true, restart count 0 Mar 15 20:25:24.254: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Mar 15 20:25:24.254: INFO: Container coredns ready: true, restart count 0 Mar 15 20:25:24.254: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Mar 15 20:25:24.258: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Mar 15 20:25:24.258: INFO: Container kindnet-cni ready: true, restart count 0 Mar 15 20:25:24.258: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Mar 15 20:25:24.258: INFO: Container kube-proxy ready: true, restart count 0 Mar 15 20:25:24.258: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Mar 15 20:25:24.258: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-1c204910-66fb-11ea-9ccf-0242ac110012 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-1c204910-66fb-11ea-9ccf-0242ac110012 off the node hunter-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-1c204910-66fb-11ea-9ccf-0242ac110012 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:25:32.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-dzmpm" for this suite. Mar 15 20:25:54.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:25:54.778: INFO: namespace: e2e-tests-sched-pred-dzmpm, resource: bindings, ignored listing per whitelist Mar 15 20:25:55.450: INFO: namespace e2e-tests-sched-pred-dzmpm deletion completed in 23.060272536s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:31.323 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:25:55.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 20:25:55.593: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 6.118758ms) Mar 15 20:25:55.596: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.312652ms) Mar 15 20:25:55.599: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.099391ms) Mar 15 20:25:55.602: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.78306ms) Mar 15 20:25:55.605: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.147106ms) Mar 15 20:25:55.608: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.62491ms) Mar 15 20:25:55.611: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.897113ms) Mar 15 20:25:55.613: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.742982ms) Mar 15 20:25:55.616: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.885581ms) Mar 15 20:25:55.619: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.050162ms) Mar 15 20:25:55.623: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.212615ms) Mar 15 20:25:55.626: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.942879ms) Mar 15 20:25:55.628: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.578952ms) Mar 15 20:25:55.631: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.502369ms) Mar 15 20:25:55.634: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.89299ms) Mar 15 20:25:55.637: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.411506ms) Mar 15 20:25:55.640: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.048595ms) Mar 15 20:25:55.643: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.001011ms) Mar 15 20:25:55.646: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.695269ms) Mar 15 20:25:55.649: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.751087ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:25:55.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-f5v42" for this suite. Mar 15 20:26:01.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:26:01.690: INFO: namespace: e2e-tests-proxy-f5v42, resource: bindings, ignored listing per whitelist Mar 15 20:26:01.742: INFO: namespace e2e-tests-proxy-f5v42 deletion completed in 6.090278855s • [SLOW TEST:6.292 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:26:01.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 15 20:26:01.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-8lm2v' Mar 15 20:26:04.753: INFO: stderr: "" Mar 15 20:26:04.753: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 Mar 15 20:26:04.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-8lm2v' Mar 15 20:26:10.483: INFO: stderr: "" Mar 15 20:26:10.483: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:26:10.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8lm2v" for this suite. 
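The one-shot pod creation above is plain kubectl; a hand-run equivalent, keeping the image from the log and leaving everything else at defaults (the --generator flag the suite passes is only needed on clients of this vintage):

kubectl run e2e-test-nginx-pod --restart=Never --image=docker.io/library/nginx:1.14-alpine
kubectl get pod e2e-test-nginx-pod -o jsonpath='{.spec.restartPolicy}'   # expect: Never
kubectl delete pod e2e-test-nginx-pod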
Mar 15 20:26:16.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:26:16.518: INFO: namespace: e2e-tests-kubectl-8lm2v, resource: bindings, ignored listing per whitelist Mar 15 20:26:16.571: INFO: namespace e2e-tests-kubectl-8lm2v deletion completed in 6.084599215s • [SLOW TEST:14.829 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:26:16.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Mar 15 20:26:16.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8zvlb' Mar 15 20:26:17.071: INFO: stderr: "" Mar 15 20:26:17.071: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. Mar 15 20:26:18.076: INFO: Selector matched 1 pods for map[app:redis] Mar 15 20:26:18.076: INFO: Found 0 / 1 Mar 15 20:26:19.075: INFO: Selector matched 1 pods for map[app:redis] Mar 15 20:26:19.075: INFO: Found 0 / 1 Mar 15 20:26:20.095: INFO: Selector matched 1 pods for map[app:redis] Mar 15 20:26:20.095: INFO: Found 0 / 1 Mar 15 20:26:21.076: INFO: Selector matched 1 pods for map[app:redis] Mar 15 20:26:21.076: INFO: Found 1 / 1 Mar 15 20:26:21.076: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 15 20:26:21.079: INFO: Selector matched 1 pods for map[app:redis] Mar 15 20:26:21.079: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Mar 15 20:26:21.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5rdqz redis-master --namespace=e2e-tests-kubectl-8zvlb' Mar 15 20:26:21.194: INFO: stderr: "" Mar 15 20:26:21.194: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 15 Mar 20:26:20.170 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 15 Mar 20:26:20.170 # Server started, Redis version 3.2.12\n1:M 15 Mar 20:26:20.170 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 15 Mar 20:26:20.170 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Mar 15 20:26:21.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-5rdqz redis-master --namespace=e2e-tests-kubectl-8zvlb --tail=1' Mar 15 20:26:21.294: INFO: stderr: "" Mar 15 20:26:21.294: INFO: stdout: "1:M 15 Mar 20:26:20.170 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Mar 15 20:26:21.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-5rdqz redis-master --namespace=e2e-tests-kubectl-8zvlb --limit-bytes=1' Mar 15 20:26:21.394: INFO: stderr: "" Mar 15 20:26:21.394: INFO: stdout: " " STEP: exposing timestamps Mar 15 20:26:21.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-5rdqz redis-master --namespace=e2e-tests-kubectl-8zvlb --tail=1 --timestamps' Mar 15 20:26:21.515: INFO: stderr: "" Mar 15 20:26:21.515: INFO: stdout: "2020-03-15T20:26:20.170901562Z 1:M 15 Mar 20:26:20.170 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Mar 15 20:26:24.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-5rdqz redis-master --namespace=e2e-tests-kubectl-8zvlb --since=1s' Mar 15 20:26:24.117: INFO: stderr: "" Mar 15 20:26:24.117: INFO: stdout: "" Mar 15 20:26:24.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-5rdqz redis-master --namespace=e2e-tests-kubectl-8zvlb --since=24h' Mar 15 20:26:24.223: INFO: stderr: "" Mar 15 20:26:24.223: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 15 Mar 20:26:20.170 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 15 Mar 20:26:20.170 # Server started, Redis version 3.2.12\n1:M 15 Mar 20:26:20.170 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. 
This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 15 Mar 20:26:20.170 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Mar 15 20:26:24.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8zvlb' Mar 15 20:26:24.347: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 15 20:26:24.348: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Mar 15 20:26:24.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-8zvlb' Mar 15 20:26:24.497: INFO: stderr: "No resources found.\n" Mar 15 20:26:24.497: INFO: stdout: "" Mar 15 20:26:24.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-8zvlb -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 15 20:26:24.591: INFO: stderr: "" Mar 15 20:26:24.591: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:26:24.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8zvlb" for this suite. 
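The filtering flags exercised above are ordinary kubectl options and work against any running pod; POD and CONTAINER are placeholders here (the suite invokes the older "log" alias, "logs" is the same command):

kubectl logs POD -c CONTAINER --tail=1            # last line only
kubectl logs POD -c CONTAINER --limit-bytes=1     # first byte only
kubectl logs POD -c CONTAINER --tail=1 --timestamps
kubectl logs POD -c CONTAINER --since=1s          # usually empty for a quiet container
kubectl logs POD -c CONTAINER --since=24h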
Mar 15 20:26:48.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:26:48.899: INFO: namespace: e2e-tests-kubectl-8zvlb, resource: bindings, ignored listing per whitelist Mar 15 20:26:48.921: INFO: namespace e2e-tests-kubectl-8zvlb deletion completed in 24.136800526s • [SLOW TEST:32.350 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:26:48.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 15 20:26:49.020: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4c3c5f28-66fb-11ea-9ccf-0242ac110012" in namespace "e2e-tests-projected-msfj2" to be "success or failure" Mar 15 20:26:49.023: INFO: Pod "downwardapi-volume-4c3c5f28-66fb-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 3.612191ms Mar 15 20:26:51.198: INFO: Pod "downwardapi-volume-4c3c5f28-66fb-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177947395s Mar 15 20:26:53.202: INFO: Pod "downwardapi-volume-4c3c5f28-66fb-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 4.182187362s Mar 15 20:26:55.206: INFO: Pod "downwardapi-volume-4c3c5f28-66fb-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.186731388s STEP: Saw pod success Mar 15 20:26:55.206: INFO: Pod "downwardapi-volume-4c3c5f28-66fb-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 20:26:55.210: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-4c3c5f28-66fb-11ea-9ccf-0242ac110012 container client-container: STEP: delete the pod Mar 15 20:26:55.289: INFO: Waiting for pod downwardapi-volume-4c3c5f28-66fb-11ea-9ccf-0242ac110012 to disappear Mar 15 20:26:55.292: INFO: Pod downwardapi-volume-4c3c5f28-66fb-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:26:55.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-msfj2" for this suite. 
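A minimal sketch of the kind of pod this DefaultMode test creates, assuming a busybox image and an illustrative 0400 mode; the container simply lists the projected file so the applied mode shows up in its log:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF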
Mar 15 20:27:01.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:27:01.315: INFO: namespace: e2e-tests-projected-msfj2, resource: bindings, ignored listing per whitelist Mar 15 20:27:01.379: INFO: namespace e2e-tests-projected-msfj2 deletion completed in 6.084470218s • [SLOW TEST:12.458 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:27:01.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args Mar 15 20:27:01.780: INFO: Waiting up to 5m0s for pod "var-expansion-53d88e55-66fb-11ea-9ccf-0242ac110012" in namespace "e2e-tests-var-expansion-xz9tk" to be "success or failure" Mar 15 20:27:01.796: INFO: Pod "var-expansion-53d88e55-66fb-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 16.227522ms Mar 15 20:27:03.800: INFO: Pod "var-expansion-53d88e55-66fb-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020469386s Mar 15 20:27:05.862: INFO: Pod "var-expansion-53d88e55-66fb-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082539762s Mar 15 20:27:07.866: INFO: Pod "var-expansion-53d88e55-66fb-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.08661221s STEP: Saw pod success Mar 15 20:27:07.866: INFO: Pod "var-expansion-53d88e55-66fb-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 20:27:07.869: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-53d88e55-66fb-11ea-9ccf-0242ac110012 container dapi-container: STEP: delete the pod Mar 15 20:27:08.488: INFO: Waiting for pod var-expansion-53d88e55-66fb-11ea-9ccf-0242ac110012 to disappear Mar 15 20:27:08.766: INFO: Pod var-expansion-53d88e55-66fb-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:27:08.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-xz9tk" for this suite. 
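The substitution being verified is plain $(VAR) expansion in a container's args, performed by the kubelet before the command runs; a hand-written equivalent with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "test message"
    command: ["sh", "-c"]
    args: ["echo $(MESSAGE)"]
EOF
# once the pod has completed:
kubectl logs var-expansion-demo    # expect: test message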
Mar 15 20:27:14.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:27:14.868: INFO: namespace: e2e-tests-var-expansion-xz9tk, resource: bindings, ignored listing per whitelist Mar 15 20:27:14.883: INFO: namespace e2e-tests-var-expansion-xz9tk deletion completed in 6.113270765s • [SLOW TEST:13.504 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:27:14.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 15 20:27:30.324: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:27:31.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-cwzds" for this suite. 
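Adoption and release in the test above come down to label selectors and owner references; a rough by-hand version of the same flow, with illustrative names and image:

# a bare pod carrying the label the ReplicaSet will select on
kubectl run pod-adoption-release --restart=Never --image=docker.io/library/nginx:1.14-alpine --labels="name=pod-adoption-release"
# a ReplicaSet with a matching selector adopts the existing pod instead of creating a new one
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
# changing the pod's label releases it; the ReplicaSet then creates a replacement
kubectl label pod pod-adoption-release name=pod-adoption-release-released --overwrite
kubectl get pods -l name=pod-adoption-release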
Mar 15 20:27:55.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:27:55.434: INFO: namespace: e2e-tests-replicaset-cwzds, resource: bindings, ignored listing per whitelist Mar 15 20:27:55.459: INFO: namespace e2e-tests-replicaset-cwzds deletion completed in 24.098851962s • [SLOW TEST:40.575 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:27:55.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0315 20:28:05.676529 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 15 20:28:05.676: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:28:05.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-x6g6v" for this suite. 
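The garbage collection verified above is ordinary owner-reference cascading: deleting a replication controller without orphaning also deletes the pods it owns. A by-hand sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: simpletest-rc   # illustrative name
spec:
  replicas: 2
  selector:
    name: simpletest
  template:
    metadata:
      labels:
        name: simpletest
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
# the default delete cascades: the RC's pods carry ownerReferences to it and get garbage collected
kubectl delete rc simpletest-rc
kubectl get pods -l name=simpletest    # drains to empty shortly afterwards
# passing --cascade=false on clients of this vintage would orphan the pods instead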
Mar 15 20:28:11.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:28:11.784: INFO: namespace: e2e-tests-gc-x6g6v, resource: bindings, ignored listing per whitelist Mar 15 20:28:11.790: INFO: namespace e2e-tests-gc-x6g6v deletion completed in 6.110868049s • [SLOW TEST:16.331 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:28:11.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-ch9st in namespace e2e-tests-proxy-kc99t I0315 20:28:12.065982 6 runners.go:184] Created replication controller with name: proxy-service-ch9st, namespace: e2e-tests-proxy-kc99t, replica count: 1 I0315 20:28:13.116322 6 runners.go:184] proxy-service-ch9st Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0315 20:28:14.116568 6 runners.go:184] proxy-service-ch9st Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0315 20:28:15.116814 6 runners.go:184] proxy-service-ch9st Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0315 20:28:16.117066 6 runners.go:184] proxy-service-ch9st Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0315 20:28:17.117408 6 runners.go:184] proxy-service-ch9st Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0315 20:28:18.117615 6 runners.go:184] proxy-service-ch9st Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0315 20:28:19.117839 6 runners.go:184] proxy-service-ch9st Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0315 20:28:20.118094 6 runners.go:184] proxy-service-ch9st Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 15 20:28:20.121: INFO: setup took 8.114157146s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 15 20:28:20.128: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-kc99t/pods/proxy-service-ch9st-455hz:160/proxy/: foo (200; 6.530127ms) Mar 15 20:28:20.129: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-kc99t/pods/proxy-service-ch9st-455hz:162/proxy/: bar (200; 7.473396ms) Mar 15 20:28:20.129: 
INFO: (0) /api/v1/namespaces/e2e-tests-proxy-kc99t/pods/http:proxy-service-ch9st-455hz:162/proxy/: bar (200; 7.416133ms) Mar 15 20:28:20.129: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-kc99t/pods/http:proxy-service-ch9st-455hz:160/proxy/: foo (200; 7.754598ms) Mar 15 20:28:20.129: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-kc99t/services/http:proxy-service-ch9st:portname2/proxy/: bar (200; 7.77943ms) Mar 15 20:28:20.129: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-kc99t/pods/proxy-service-ch9st-455hz:1080/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-p2cc STEP: Creating a pod to test atomic-volume-subpath Mar 15 20:28:37.571: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-p2cc" in namespace "e2e-tests-subpath-xndt5" to be "success or failure" Mar 15 20:28:37.594: INFO: Pod "pod-subpath-test-configmap-p2cc": Phase="Pending", Reason="", readiness=false. Elapsed: 22.762636ms Mar 15 20:28:39.598: INFO: Pod "pod-subpath-test-configmap-p2cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026370438s Mar 15 20:28:41.601: INFO: Pod "pod-subpath-test-configmap-p2cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029922943s Mar 15 20:28:43.605: INFO: Pod "pod-subpath-test-configmap-p2cc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034170602s Mar 15 20:28:45.610: INFO: Pod "pod-subpath-test-configmap-p2cc": Phase="Running", Reason="", readiness=true. Elapsed: 8.038420771s Mar 15 20:28:47.614: INFO: Pod "pod-subpath-test-configmap-p2cc": Phase="Running", Reason="", readiness=false. Elapsed: 10.042414265s Mar 15 20:28:49.618: INFO: Pod "pod-subpath-test-configmap-p2cc": Phase="Running", Reason="", readiness=false. Elapsed: 12.046581635s Mar 15 20:28:51.622: INFO: Pod "pod-subpath-test-configmap-p2cc": Phase="Running", Reason="", readiness=false. Elapsed: 14.051153991s Mar 15 20:28:53.626: INFO: Pod "pod-subpath-test-configmap-p2cc": Phase="Running", Reason="", readiness=false. Elapsed: 16.054865393s Mar 15 20:28:55.630: INFO: Pod "pod-subpath-test-configmap-p2cc": Phase="Running", Reason="", readiness=false. Elapsed: 18.058788963s Mar 15 20:28:57.634: INFO: Pod "pod-subpath-test-configmap-p2cc": Phase="Running", Reason="", readiness=false. Elapsed: 20.063050628s Mar 15 20:28:59.639: INFO: Pod "pod-subpath-test-configmap-p2cc": Phase="Running", Reason="", readiness=false. Elapsed: 22.067224919s Mar 15 20:29:01.642: INFO: Pod "pod-subpath-test-configmap-p2cc": Phase="Running", Reason="", readiness=false. Elapsed: 24.070921295s Mar 15 20:29:03.646: INFO: Pod "pod-subpath-test-configmap-p2cc": Phase="Running", Reason="", readiness=false. Elapsed: 26.074949844s Mar 15 20:29:05.650: INFO: Pod "pod-subpath-test-configmap-p2cc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 28.0788873s STEP: Saw pod success Mar 15 20:29:05.650: INFO: Pod "pod-subpath-test-configmap-p2cc" satisfied condition "success or failure" Mar 15 20:29:05.653: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-p2cc container test-container-subpath-configmap-p2cc: STEP: delete the pod Mar 15 20:29:05.683: INFO: Waiting for pod pod-subpath-test-configmap-p2cc to disappear Mar 15 20:29:05.714: INFO: Pod pod-subpath-test-configmap-p2cc no longer exists STEP: Deleting pod pod-subpath-test-configmap-p2cc Mar 15 20:29:05.714: INFO: Deleting pod "pod-subpath-test-configmap-p2cc" in namespace "e2e-tests-subpath-xndt5" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:29:05.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-xndt5" for this suite. Mar 15 20:29:13.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:29:13.895: INFO: namespace: e2e-tests-subpath-xndt5, resource: bindings, ignored listing per whitelist Mar 15 20:29:13.902: INFO: namespace e2e-tests-subpath-xndt5 deletion completed in 8.182545498s • [SLOW TEST:36.452 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:29:13.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 15 20:29:14.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-w7cfv' Mar 15 20:29:14.785: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 15 20:29:14.785: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 Mar 15 20:29:16.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-w7cfv' Mar 15 20:29:16.920: INFO: stderr: "" Mar 15 20:29:16.920: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:29:16.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-w7cfv" for this suite. Mar 15 20:29:23.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:29:23.118: INFO: namespace: e2e-tests-kubectl-w7cfv, resource: bindings, ignored listing per whitelist Mar 15 20:29:23.177: INFO: namespace e2e-tests-kubectl-w7cfv deletion completed in 6.20532797s • [SLOW TEST:9.274 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:29:23.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-fj4vj I0315 20:29:23.295595 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-fj4vj, replica count: 1 I0315 20:29:24.346058 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0315 20:29:25.346294 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0315 20:29:26.346564 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0315 20:29:27.346768 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 15 20:29:27.529: INFO: Created: latency-svc-zsx2m Mar 15 20:29:27.536: INFO: Got endpoints: latency-svc-zsx2m 
[89.822501ms] Mar 15 20:29:27.585: INFO: Created: latency-svc-kvh7n Mar 15 20:29:27.595: INFO: Got endpoints: latency-svc-kvh7n [58.103081ms] Mar 15 20:29:27.614: INFO: Created: latency-svc-xlbm8 Mar 15 20:29:27.624: INFO: Got endpoints: latency-svc-xlbm8 [87.945609ms] Mar 15 20:29:27.709: INFO: Created: latency-svc-frfzr Mar 15 20:29:27.727: INFO: Got endpoints: latency-svc-frfzr [190.573721ms] Mar 15 20:29:27.782: INFO: Created: latency-svc-qhg6l Mar 15 20:29:27.840: INFO: Got endpoints: latency-svc-qhg6l [303.545794ms] Mar 15 20:29:27.878: INFO: Created: latency-svc-vtplr Mar 15 20:29:27.889: INFO: Got endpoints: latency-svc-vtplr [352.508169ms] Mar 15 20:29:27.908: INFO: Created: latency-svc-h2zks Mar 15 20:29:27.920: INFO: Got endpoints: latency-svc-h2zks [382.951409ms] Mar 15 20:29:28.020: INFO: Created: latency-svc-27wvs Mar 15 20:29:28.025: INFO: Got endpoints: latency-svc-27wvs [488.594271ms] Mar 15 20:29:28.064: INFO: Created: latency-svc-xkz8r Mar 15 20:29:28.076: INFO: Got endpoints: latency-svc-xkz8r [539.153785ms] Mar 15 20:29:28.112: INFO: Created: latency-svc-wmb6w Mar 15 20:29:28.194: INFO: Got endpoints: latency-svc-wmb6w [656.877618ms] Mar 15 20:29:28.196: INFO: Created: latency-svc-fp88w Mar 15 20:29:28.208: INFO: Got endpoints: latency-svc-fp88w [671.349913ms] Mar 15 20:29:28.251: INFO: Created: latency-svc-4trvx Mar 15 20:29:28.262: INFO: Got endpoints: latency-svc-4trvx [725.480043ms] Mar 15 20:29:28.280: INFO: Created: latency-svc-blwv7 Mar 15 20:29:28.293: INFO: Got endpoints: latency-svc-blwv7 [755.687597ms] Mar 15 20:29:28.355: INFO: Created: latency-svc-2d25d Mar 15 20:29:28.369: INFO: Got endpoints: latency-svc-2d25d [832.496618ms] Mar 15 20:29:28.394: INFO: Created: latency-svc-fs22h Mar 15 20:29:28.417: INFO: Got endpoints: latency-svc-fs22h [880.383252ms] Mar 15 20:29:28.548: INFO: Created: latency-svc-pqc7s Mar 15 20:29:28.574: INFO: Got endpoints: latency-svc-pqc7s [1.037294769s] Mar 15 20:29:28.575: INFO: Created: latency-svc-qbzpr Mar 15 20:29:28.598: INFO: Got endpoints: latency-svc-qbzpr [1.003217167s] Mar 15 20:29:28.628: INFO: Created: latency-svc-ztnzk Mar 15 20:29:28.642: INFO: Got endpoints: latency-svc-ztnzk [1.017875148s] Mar 15 20:29:28.685: INFO: Created: latency-svc-b7h2n Mar 15 20:29:28.688: INFO: Got endpoints: latency-svc-b7h2n [960.454267ms] Mar 15 20:29:28.712: INFO: Created: latency-svc-mk9cr Mar 15 20:29:28.727: INFO: Got endpoints: latency-svc-mk9cr [886.791531ms] Mar 15 20:29:28.748: INFO: Created: latency-svc-zd46q Mar 15 20:29:28.763: INFO: Got endpoints: latency-svc-zd46q [873.952735ms] Mar 15 20:29:28.847: INFO: Created: latency-svc-lvrlx Mar 15 20:29:28.856: INFO: Got endpoints: latency-svc-lvrlx [936.001444ms] Mar 15 20:29:28.904: INFO: Created: latency-svc-5ggfn Mar 15 20:29:28.934: INFO: Got endpoints: latency-svc-5ggfn [170.34625ms] Mar 15 20:29:28.990: INFO: Created: latency-svc-jn4l4 Mar 15 20:29:28.994: INFO: Got endpoints: latency-svc-jn4l4 [968.457386ms] Mar 15 20:29:29.043: INFO: Created: latency-svc-m2fp6 Mar 15 20:29:29.052: INFO: Got endpoints: latency-svc-m2fp6 [976.063522ms] Mar 15 20:29:29.078: INFO: Created: latency-svc-lwsgn Mar 15 20:29:29.088: INFO: Got endpoints: latency-svc-lwsgn [894.37406ms] Mar 15 20:29:29.158: INFO: Created: latency-svc-l9wpd Mar 15 20:29:29.161: INFO: Got endpoints: latency-svc-l9wpd [953.181155ms] Mar 15 20:29:29.186: INFO: Created: latency-svc-t6fzg Mar 15 20:29:29.197: INFO: Got endpoints: latency-svc-t6fzg [934.784764ms] Mar 15 20:29:29.217: INFO: Created: latency-svc-nph47 Mar 15 
20:29:29.227: INFO: Got endpoints: latency-svc-nph47 [934.379796ms] Mar 15 20:29:29.246: INFO: Created: latency-svc-6m2kf Mar 15 20:29:29.295: INFO: Got endpoints: latency-svc-6m2kf [925.478711ms] Mar 15 20:29:29.297: INFO: Created: latency-svc-5l89r Mar 15 20:29:29.306: INFO: Got endpoints: latency-svc-5l89r [888.499872ms] Mar 15 20:29:29.324: INFO: Created: latency-svc-6q5gr Mar 15 20:29:29.336: INFO: Got endpoints: latency-svc-6q5gr [762.061126ms] Mar 15 20:29:29.354: INFO: Created: latency-svc-d22fw Mar 15 20:29:29.367: INFO: Got endpoints: latency-svc-d22fw [768.734841ms] Mar 15 20:29:29.384: INFO: Created: latency-svc-pzxq2 Mar 15 20:29:29.463: INFO: Got endpoints: latency-svc-pzxq2 [820.073538ms] Mar 15 20:29:29.465: INFO: Created: latency-svc-b54xn Mar 15 20:29:29.481: INFO: Got endpoints: latency-svc-b54xn [793.61169ms] Mar 15 20:29:29.503: INFO: Created: latency-svc-6xc2b Mar 15 20:29:29.530: INFO: Got endpoints: latency-svc-6xc2b [802.72243ms] Mar 15 20:29:29.637: INFO: Created: latency-svc-q78c5 Mar 15 20:29:29.650: INFO: Got endpoints: latency-svc-q78c5 [793.882463ms] Mar 15 20:29:29.666: INFO: Created: latency-svc-wlsl6 Mar 15 20:29:29.680: INFO: Got endpoints: latency-svc-wlsl6 [746.185502ms] Mar 15 20:29:29.702: INFO: Created: latency-svc-5mfpr Mar 15 20:29:29.710: INFO: Got endpoints: latency-svc-5mfpr [716.65157ms] Mar 15 20:29:29.732: INFO: Created: latency-svc-szhwn Mar 15 20:29:29.786: INFO: Got endpoints: latency-svc-szhwn [734.262787ms] Mar 15 20:29:29.789: INFO: Created: latency-svc-2n7mw Mar 15 20:29:29.795: INFO: Got endpoints: latency-svc-2n7mw [706.31152ms] Mar 15 20:29:29.815: INFO: Created: latency-svc-vs46k Mar 15 20:29:29.832: INFO: Got endpoints: latency-svc-vs46k [670.329254ms] Mar 15 20:29:29.851: INFO: Created: latency-svc-sb2q2 Mar 15 20:29:29.881: INFO: Got endpoints: latency-svc-sb2q2 [683.870687ms] Mar 15 20:29:29.961: INFO: Created: latency-svc-t272p Mar 15 20:29:29.963: INFO: Got endpoints: latency-svc-t272p [735.98475ms] Mar 15 20:29:29.990: INFO: Created: latency-svc-kmflx Mar 15 20:29:30.000: INFO: Got endpoints: latency-svc-kmflx [704.848484ms] Mar 15 20:29:30.019: INFO: Created: latency-svc-gmf9r Mar 15 20:29:30.031: INFO: Got endpoints: latency-svc-gmf9r [725.387395ms] Mar 15 20:29:30.049: INFO: Created: latency-svc-wbw7f Mar 15 20:29:30.109: INFO: Got endpoints: latency-svc-wbw7f [772.985115ms] Mar 15 20:29:30.113: INFO: Created: latency-svc-6c75x Mar 15 20:29:30.128: INFO: Got endpoints: latency-svc-6c75x [760.745222ms] Mar 15 20:29:30.151: INFO: Created: latency-svc-wtlfq Mar 15 20:29:30.163: INFO: Got endpoints: latency-svc-wtlfq [700.473228ms] Mar 15 20:29:30.181: INFO: Created: latency-svc-9q69w Mar 15 20:29:30.194: INFO: Got endpoints: latency-svc-9q69w [712.178516ms] Mar 15 20:29:30.295: INFO: Created: latency-svc-bn8ql Mar 15 20:29:30.300: INFO: Got endpoints: latency-svc-bn8ql [770.520763ms] Mar 15 20:29:30.325: INFO: Created: latency-svc-8cf4q Mar 15 20:29:30.338: INFO: Got endpoints: latency-svc-8cf4q [688.403471ms] Mar 15 20:29:30.354: INFO: Created: latency-svc-4gqrl Mar 15 20:29:30.381: INFO: Got endpoints: latency-svc-4gqrl [700.721053ms] Mar 15 20:29:30.451: INFO: Created: latency-svc-vcd4c Mar 15 20:29:30.455: INFO: Got endpoints: latency-svc-vcd4c [744.183825ms] Mar 15 20:29:30.475: INFO: Created: latency-svc-j66xn Mar 15 20:29:30.489: INFO: Got endpoints: latency-svc-j66xn [702.639919ms] Mar 15 20:29:30.517: INFO: Created: latency-svc-jjk7q Mar 15 20:29:30.531: INFO: Got endpoints: latency-svc-jjk7q [736.825338ms] Mar 15 
20:29:30.613: INFO: Created: latency-svc-676cw Mar 15 20:29:30.616: INFO: Got endpoints: latency-svc-676cw [784.439419ms] Mar 15 20:29:30.642: INFO: Created: latency-svc-n7vc5 Mar 15 20:29:30.658: INFO: Got endpoints: latency-svc-n7vc5 [776.845163ms] Mar 15 20:29:30.679: INFO: Created: latency-svc-k2zsd Mar 15 20:29:30.708: INFO: Got endpoints: latency-svc-k2zsd [745.101413ms] Mar 15 20:29:30.787: INFO: Created: latency-svc-m6vff Mar 15 20:29:30.805: INFO: Got endpoints: latency-svc-m6vff [804.983123ms] Mar 15 20:29:30.859: INFO: Created: latency-svc-6wlks Mar 15 20:29:30.874: INFO: Got endpoints: latency-svc-6wlks [843.170675ms] Mar 15 20:29:30.961: INFO: Created: latency-svc-47s6r Mar 15 20:29:30.964: INFO: Got endpoints: latency-svc-47s6r [854.150783ms] Mar 15 20:29:30.991: INFO: Created: latency-svc-lbm96 Mar 15 20:29:31.001: INFO: Got endpoints: latency-svc-lbm96 [873.296715ms] Mar 15 20:29:31.027: INFO: Created: latency-svc-g4wv4 Mar 15 20:29:31.038: INFO: Got endpoints: latency-svc-g4wv4 [875.148521ms] Mar 15 20:29:31.124: INFO: Created: latency-svc-22bdz Mar 15 20:29:31.142: INFO: Got endpoints: latency-svc-22bdz [947.953947ms] Mar 15 20:29:31.220: INFO: Created: latency-svc-bd487 Mar 15 20:29:31.283: INFO: Got endpoints: latency-svc-bd487 [982.928024ms] Mar 15 20:29:31.296: INFO: Created: latency-svc-4dkvw Mar 15 20:29:31.308: INFO: Got endpoints: latency-svc-4dkvw [969.484876ms] Mar 15 20:29:31.338: INFO: Created: latency-svc-s9rvb Mar 15 20:29:31.350: INFO: Got endpoints: latency-svc-s9rvb [969.379743ms] Mar 15 20:29:31.368: INFO: Created: latency-svc-4llvr Mar 15 20:29:31.451: INFO: Got endpoints: latency-svc-4llvr [995.984307ms] Mar 15 20:29:31.453: INFO: Created: latency-svc-jg277 Mar 15 20:29:31.477: INFO: Got endpoints: latency-svc-jg277 [987.370448ms] Mar 15 20:29:31.501: INFO: Created: latency-svc-sb8mm Mar 15 20:29:31.513: INFO: Got endpoints: latency-svc-sb8mm [981.436423ms] Mar 15 20:29:31.532: INFO: Created: latency-svc-q7lnw Mar 15 20:29:31.544: INFO: Got endpoints: latency-svc-q7lnw [927.240167ms] Mar 15 20:29:31.625: INFO: Created: latency-svc-7ttj7 Mar 15 20:29:31.628: INFO: Got endpoints: latency-svc-7ttj7 [970.034449ms] Mar 15 20:29:31.651: INFO: Created: latency-svc-xdw2f Mar 15 20:29:31.664: INFO: Got endpoints: latency-svc-xdw2f [955.773409ms] Mar 15 20:29:31.686: INFO: Created: latency-svc-jzfkv Mar 15 20:29:31.700: INFO: Got endpoints: latency-svc-jzfkv [895.300917ms] Mar 15 20:29:31.799: INFO: Created: latency-svc-bqqqw Mar 15 20:29:31.802: INFO: Got endpoints: latency-svc-bqqqw [927.179322ms] Mar 15 20:29:31.848: INFO: Created: latency-svc-qbjll Mar 15 20:29:31.863: INFO: Got endpoints: latency-svc-qbjll [899.428854ms] Mar 15 20:29:31.884: INFO: Created: latency-svc-hztdv Mar 15 20:29:31.978: INFO: Got endpoints: latency-svc-hztdv [976.719996ms] Mar 15 20:29:31.980: INFO: Created: latency-svc-9d8p2 Mar 15 20:29:31.983: INFO: Got endpoints: latency-svc-9d8p2 [944.643645ms] Mar 15 20:29:32.011: INFO: Created: latency-svc-8twxk Mar 15 20:29:32.014: INFO: Got endpoints: latency-svc-8twxk [872.145993ms] Mar 15 20:29:32.041: INFO: Created: latency-svc-bjntt Mar 15 20:29:32.044: INFO: Got endpoints: latency-svc-bjntt [760.192346ms] Mar 15 20:29:32.071: INFO: Created: latency-svc-m829j Mar 15 20:29:32.074: INFO: Got endpoints: latency-svc-m829j [765.994338ms] Mar 15 20:29:32.139: INFO: Created: latency-svc-bpllq Mar 15 20:29:32.147: INFO: Got endpoints: latency-svc-bpllq [796.520366ms] Mar 15 20:29:32.166: INFO: Created: latency-svc-nq9pq Mar 15 20:29:32.195: 
INFO: Got endpoints: latency-svc-nq9pq [743.923409ms] Mar 15 20:29:32.233: INFO: Created: latency-svc-cdp5k Mar 15 20:29:32.307: INFO: Got endpoints: latency-svc-cdp5k [830.219474ms] Mar 15 20:29:32.329: INFO: Created: latency-svc-sfxjl Mar 15 20:29:32.352: INFO: Got endpoints: latency-svc-sfxjl [839.107585ms] Mar 15 20:29:32.406: INFO: Created: latency-svc-r9nv2 Mar 15 20:29:32.457: INFO: Got endpoints: latency-svc-r9nv2 [913.232775ms] Mar 15 20:29:32.479: INFO: Created: latency-svc-kbmrv Mar 15 20:29:32.490: INFO: Got endpoints: latency-svc-kbmrv [861.920067ms] Mar 15 20:29:32.521: INFO: Created: latency-svc-c4snw Mar 15 20:29:32.532: INFO: Got endpoints: latency-svc-c4snw [867.997219ms] Mar 15 20:29:32.550: INFO: Created: latency-svc-x9s5s Mar 15 20:29:32.600: INFO: Got endpoints: latency-svc-x9s5s [900.11572ms] Mar 15 20:29:32.602: INFO: Created: latency-svc-fh854 Mar 15 20:29:32.611: INFO: Got endpoints: latency-svc-fh854 [809.009864ms] Mar 15 20:29:32.646: INFO: Created: latency-svc-6gx8s Mar 15 20:29:32.659: INFO: Got endpoints: latency-svc-6gx8s [796.091096ms] Mar 15 20:29:32.682: INFO: Created: latency-svc-nt982 Mar 15 20:29:32.695: INFO: Got endpoints: latency-svc-nt982 [717.735833ms] Mar 15 20:29:32.750: INFO: Created: latency-svc-q5k99 Mar 15 20:29:32.753: INFO: Got endpoints: latency-svc-q5k99 [770.430068ms] Mar 15 20:29:32.791: INFO: Created: latency-svc-vrc46 Mar 15 20:29:32.826: INFO: Got endpoints: latency-svc-vrc46 [812.543274ms] Mar 15 20:29:32.895: INFO: Created: latency-svc-82hqc Mar 15 20:29:32.909: INFO: Got endpoints: latency-svc-82hqc [865.803124ms] Mar 15 20:29:32.940: INFO: Created: latency-svc-dzj6w Mar 15 20:29:32.955: INFO: Got endpoints: latency-svc-dzj6w [880.65454ms] Mar 15 20:29:32.976: INFO: Created: latency-svc-7lh25 Mar 15 20:29:32.991: INFO: Got endpoints: latency-svc-7lh25 [844.086778ms] Mar 15 20:29:33.044: INFO: Created: latency-svc-6m8fw Mar 15 20:29:33.046: INFO: Got endpoints: latency-svc-6m8fw [851.425983ms] Mar 15 20:29:33.072: INFO: Created: latency-svc-k27bh Mar 15 20:29:33.087: INFO: Got endpoints: latency-svc-k27bh [780.010741ms] Mar 15 20:29:33.108: INFO: Created: latency-svc-qrsjp Mar 15 20:29:33.130: INFO: Got endpoints: latency-svc-qrsjp [777.681254ms] Mar 15 20:29:33.236: INFO: Created: latency-svc-jcl8f Mar 15 20:29:33.239: INFO: Got endpoints: latency-svc-jcl8f [781.936424ms] Mar 15 20:29:33.266: INFO: Created: latency-svc-js27l Mar 15 20:29:33.286: INFO: Got endpoints: latency-svc-js27l [795.982734ms] Mar 15 20:29:33.324: INFO: Created: latency-svc-bdcnz Mar 15 20:29:33.335: INFO: Got endpoints: latency-svc-bdcnz [802.396762ms] Mar 15 20:29:33.379: INFO: Created: latency-svc-ngl78 Mar 15 20:29:33.389: INFO: Got endpoints: latency-svc-ngl78 [788.783064ms] Mar 15 20:29:33.420: INFO: Created: latency-svc-6vxzd Mar 15 20:29:33.444: INFO: Got endpoints: latency-svc-6vxzd [832.715395ms] Mar 15 20:29:33.474: INFO: Created: latency-svc-x7ctn Mar 15 20:29:33.553: INFO: Got endpoints: latency-svc-x7ctn [893.739706ms] Mar 15 20:29:33.556: INFO: Created: latency-svc-hn2m5 Mar 15 20:29:33.563: INFO: Got endpoints: latency-svc-hn2m5 [867.899887ms] Mar 15 20:29:33.582: INFO: Created: latency-svc-nzvd4 Mar 15 20:29:33.594: INFO: Got endpoints: latency-svc-nzvd4 [840.743381ms] Mar 15 20:29:33.611: INFO: Created: latency-svc-4jbv2 Mar 15 20:29:33.624: INFO: Got endpoints: latency-svc-4jbv2 [797.822621ms] Mar 15 20:29:33.641: INFO: Created: latency-svc-gh7fb Mar 15 20:29:33.702: INFO: Got endpoints: latency-svc-gh7fb [792.884667ms] Mar 15 
20:29:33.706: INFO: Created: latency-svc-dfk8r Mar 15 20:29:34.178: INFO: Got endpoints: latency-svc-dfk8r [1.223341117s] Mar 15 20:29:34.871: INFO: Created: latency-svc-4l26m Mar 15 20:29:34.900: INFO: Got endpoints: latency-svc-4l26m [1.908787203s] Mar 15 20:29:34.936: INFO: Created: latency-svc-www22 Mar 15 20:29:34.961: INFO: Got endpoints: latency-svc-www22 [1.915152676s] Mar 15 20:29:35.039: INFO: Created: latency-svc-lgv9t Mar 15 20:29:35.051: INFO: Got endpoints: latency-svc-lgv9t [1.963989393s] Mar 15 20:29:35.116: INFO: Created: latency-svc-pr4ch Mar 15 20:29:35.130: INFO: Got endpoints: latency-svc-pr4ch [1.999665094s] Mar 15 20:29:35.176: INFO: Created: latency-svc-l4bmp Mar 15 20:29:35.184: INFO: Got endpoints: latency-svc-l4bmp [1.944735055s] Mar 15 20:29:35.212: INFO: Created: latency-svc-kshj2 Mar 15 20:29:35.226: INFO: Got endpoints: latency-svc-kshj2 [1.940198254s] Mar 15 20:29:35.253: INFO: Created: latency-svc-9pcc9 Mar 15 20:29:35.272: INFO: Got endpoints: latency-svc-9pcc9 [1.93788365s] Mar 15 20:29:35.307: INFO: Created: latency-svc-r2gjt Mar 15 20:29:35.319: INFO: Got endpoints: latency-svc-r2gjt [1.929542627s] Mar 15 20:29:35.361: INFO: Created: latency-svc-5cnfp Mar 15 20:29:35.383: INFO: Got endpoints: latency-svc-5cnfp [1.93971312s] Mar 15 20:29:35.481: INFO: Created: latency-svc-ltbnf Mar 15 20:29:35.484: INFO: Got endpoints: latency-svc-ltbnf [1.931088228s] Mar 15 20:29:35.511: INFO: Created: latency-svc-xwp2f Mar 15 20:29:35.541: INFO: Got endpoints: latency-svc-xwp2f [1.977745779s] Mar 15 20:29:35.571: INFO: Created: latency-svc-gr649 Mar 15 20:29:35.624: INFO: Got endpoints: latency-svc-gr649 [2.030218678s] Mar 15 20:29:35.630: INFO: Created: latency-svc-v6rvm Mar 15 20:29:35.633: INFO: Got endpoints: latency-svc-v6rvm [2.008731008s] Mar 15 20:29:35.661: INFO: Created: latency-svc-fcmrp Mar 15 20:29:35.672: INFO: Got endpoints: latency-svc-fcmrp [1.969941275s] Mar 15 20:29:35.709: INFO: Created: latency-svc-jn6hw Mar 15 20:29:35.721: INFO: Got endpoints: latency-svc-jn6hw [1.542836265s] Mar 15 20:29:35.762: INFO: Created: latency-svc-jftzm Mar 15 20:29:35.765: INFO: Got endpoints: latency-svc-jftzm [865.440531ms] Mar 15 20:29:35.788: INFO: Created: latency-svc-hqdlw Mar 15 20:29:35.794: INFO: Got endpoints: latency-svc-hqdlw [832.167642ms] Mar 15 20:29:35.811: INFO: Created: latency-svc-whkqj Mar 15 20:29:35.818: INFO: Got endpoints: latency-svc-whkqj [766.406654ms] Mar 15 20:29:35.835: INFO: Created: latency-svc-47srk Mar 15 20:29:35.848: INFO: Got endpoints: latency-svc-47srk [718.484839ms] Mar 15 20:29:35.912: INFO: Created: latency-svc-59dfm Mar 15 20:29:35.915: INFO: Got endpoints: latency-svc-59dfm [731.377326ms] Mar 15 20:29:35.937: INFO: Created: latency-svc-8sm4m Mar 15 20:29:35.961: INFO: Got endpoints: latency-svc-8sm4m [734.558019ms] Mar 15 20:29:35.985: INFO: Created: latency-svc-tqgdk Mar 15 20:29:35.999: INFO: Got endpoints: latency-svc-tqgdk [726.535324ms] Mar 15 20:29:36.074: INFO: Created: latency-svc-cttwf Mar 15 20:29:36.077: INFO: Got endpoints: latency-svc-cttwf [757.99501ms] Mar 15 20:29:36.105: INFO: Created: latency-svc-cv6h6 Mar 15 20:29:36.120: INFO: Got endpoints: latency-svc-cv6h6 [736.431928ms] Mar 15 20:29:36.140: INFO: Created: latency-svc-xqlk8 Mar 15 20:29:36.156: INFO: Got endpoints: latency-svc-xqlk8 [671.44229ms] Mar 15 20:29:36.254: INFO: Created: latency-svc-6hrx7 Mar 15 20:29:36.264: INFO: Got endpoints: latency-svc-6hrx7 [722.532356ms] Mar 15 20:29:36.286: INFO: Created: latency-svc-2shw6 Mar 15 20:29:36.300: INFO: 
Got endpoints: latency-svc-2shw6 [675.813923ms] Mar 15 20:29:36.340: INFO: Created: latency-svc-gdts5 Mar 15 20:29:36.348: INFO: Got endpoints: latency-svc-gdts5 [715.351918ms] Mar 15 20:29:36.409: INFO: Created: latency-svc-n7khv Mar 15 20:29:36.416: INFO: Got endpoints: latency-svc-n7khv [743.599842ms] Mar 15 20:29:36.458: INFO: Created: latency-svc-8w9v4 Mar 15 20:29:36.475: INFO: Got endpoints: latency-svc-8w9v4 [754.315305ms] Mar 15 20:29:36.501: INFO: Created: latency-svc-72jst Mar 15 20:29:36.559: INFO: Got endpoints: latency-svc-72jst [793.539302ms] Mar 15 20:29:36.561: INFO: Created: latency-svc-c2ljq Mar 15 20:29:36.566: INFO: Got endpoints: latency-svc-c2ljq [772.078006ms] Mar 15 20:29:36.585: INFO: Created: latency-svc-t95dd Mar 15 20:29:36.596: INFO: Got endpoints: latency-svc-t95dd [777.966529ms] Mar 15 20:29:36.615: INFO: Created: latency-svc-jkjw9 Mar 15 20:29:36.627: INFO: Got endpoints: latency-svc-jkjw9 [778.529087ms] Mar 15 20:29:36.645: INFO: Created: latency-svc-jq8t7 Mar 15 20:29:36.657: INFO: Got endpoints: latency-svc-jq8t7 [741.25997ms] Mar 15 20:29:36.697: INFO: Created: latency-svc-jzkhb Mar 15 20:29:36.699: INFO: Got endpoints: latency-svc-jzkhb [738.326168ms] Mar 15 20:29:36.722: INFO: Created: latency-svc-lxmnx Mar 15 20:29:36.729: INFO: Got endpoints: latency-svc-lxmnx [729.96301ms] Mar 15 20:29:36.746: INFO: Created: latency-svc-9ctgr Mar 15 20:29:36.753: INFO: Got endpoints: latency-svc-9ctgr [676.389538ms] Mar 15 20:29:36.771: INFO: Created: latency-svc-4hrtx Mar 15 20:29:36.795: INFO: Got endpoints: latency-svc-4hrtx [674.657646ms] Mar 15 20:29:36.853: INFO: Created: latency-svc-p76c2 Mar 15 20:29:36.861: INFO: Got endpoints: latency-svc-p76c2 [704.738482ms] Mar 15 20:29:36.891: INFO: Created: latency-svc-xwb4t Mar 15 20:29:36.909: INFO: Got endpoints: latency-svc-xwb4t [645.401194ms] Mar 15 20:29:36.939: INFO: Created: latency-svc-vrtd8 Mar 15 20:29:36.951: INFO: Got endpoints: latency-svc-vrtd8 [650.85286ms] Mar 15 20:29:37.002: INFO: Created: latency-svc-tdddz Mar 15 20:29:37.023: INFO: Got endpoints: latency-svc-tdddz [674.88216ms] Mar 15 20:29:37.053: INFO: Created: latency-svc-g99k7 Mar 15 20:29:37.066: INFO: Got endpoints: latency-svc-g99k7 [649.605547ms] Mar 15 20:29:37.195: INFO: Created: latency-svc-579q6 Mar 15 20:29:37.197: INFO: Got endpoints: latency-svc-579q6 [721.698209ms] Mar 15 20:29:37.220: INFO: Created: latency-svc-fx7qs Mar 15 20:29:37.234: INFO: Got endpoints: latency-svc-fx7qs [675.25848ms] Mar 15 20:29:37.256: INFO: Created: latency-svc-rz9wz Mar 15 20:29:37.264: INFO: Got endpoints: latency-svc-rz9wz [698.34656ms] Mar 15 20:29:37.286: INFO: Created: latency-svc-nh47f Mar 15 20:29:37.349: INFO: Got endpoints: latency-svc-nh47f [753.15119ms] Mar 15 20:29:37.352: INFO: Created: latency-svc-tqr9j Mar 15 20:29:37.355: INFO: Got endpoints: latency-svc-tqr9j [727.979846ms] Mar 15 20:29:37.382: INFO: Created: latency-svc-9nk22 Mar 15 20:29:37.397: INFO: Got endpoints: latency-svc-9nk22 [740.79198ms] Mar 15 20:29:37.419: INFO: Created: latency-svc-j85lb Mar 15 20:29:37.433: INFO: Got endpoints: latency-svc-j85lb [734.137625ms] Mar 15 20:29:37.535: INFO: Created: latency-svc-lhwmc Mar 15 20:29:37.562: INFO: Created: latency-svc-lmqsx Mar 15 20:29:37.572: INFO: Got endpoints: latency-svc-lmqsx [818.788008ms] Mar 15 20:29:37.572: INFO: Got endpoints: latency-svc-lhwmc [843.149454ms] Mar 15 20:29:37.610: INFO: Created: latency-svc-cwkmv Mar 15 20:29:37.620: INFO: Got endpoints: latency-svc-cwkmv [825.683888ms] Mar 15 20:29:37.691: INFO: 
Created: latency-svc-8z27g Mar 15 20:29:37.695: INFO: Got endpoints: latency-svc-8z27g [834.002541ms] Mar 15 20:29:37.718: INFO: Created: latency-svc-cv76t Mar 15 20:29:37.729: INFO: Got endpoints: latency-svc-cv76t [819.558228ms] Mar 15 20:29:37.748: INFO: Created: latency-svc-tzkmt Mar 15 20:29:37.759: INFO: Got endpoints: latency-svc-tzkmt [808.106463ms] Mar 15 20:29:37.778: INFO: Created: latency-svc-h64lj Mar 15 20:29:37.790: INFO: Got endpoints: latency-svc-h64lj [766.484352ms] Mar 15 20:29:37.829: INFO: Created: latency-svc-jtqbh Mar 15 20:29:37.832: INFO: Got endpoints: latency-svc-jtqbh [766.193291ms] Mar 15 20:29:37.862: INFO: Created: latency-svc-xwb78 Mar 15 20:29:37.865: INFO: Got endpoints: latency-svc-xwb78 [668.272108ms] Mar 15 20:29:37.892: INFO: Created: latency-svc-6rwtb Mar 15 20:29:37.906: INFO: Got endpoints: latency-svc-6rwtb [671.358118ms] Mar 15 20:29:37.927: INFO: Created: latency-svc-42v4w Mar 15 20:29:37.984: INFO: Got endpoints: latency-svc-42v4w [720.052828ms] Mar 15 20:29:37.986: INFO: Created: latency-svc-59n5b Mar 15 20:29:37.995: INFO: Got endpoints: latency-svc-59n5b [645.741304ms] Mar 15 20:29:38.018: INFO: Created: latency-svc-znqgc Mar 15 20:29:38.042: INFO: Got endpoints: latency-svc-znqgc [687.402219ms] Mar 15 20:29:38.066: INFO: Created: latency-svc-jjqtd Mar 15 20:29:38.127: INFO: Got endpoints: latency-svc-jjqtd [729.823156ms] Mar 15 20:29:38.144: INFO: Created: latency-svc-kn85h Mar 15 20:29:38.152: INFO: Got endpoints: latency-svc-kn85h [718.197332ms] Mar 15 20:29:38.180: INFO: Created: latency-svc-xq7fw Mar 15 20:29:38.188: INFO: Got endpoints: latency-svc-xq7fw [615.618689ms] Mar 15 20:29:38.210: INFO: Created: latency-svc-lfl9m Mar 15 20:29:38.271: INFO: Got endpoints: latency-svc-lfl9m [698.7392ms] Mar 15 20:29:38.273: INFO: Created: latency-svc-ww9j6 Mar 15 20:29:38.279: INFO: Got endpoints: latency-svc-ww9j6 [658.992344ms] Mar 15 20:29:38.306: INFO: Created: latency-svc-dsl7l Mar 15 20:29:38.327: INFO: Got endpoints: latency-svc-dsl7l [632.727381ms] Mar 15 20:29:38.359: INFO: Created: latency-svc-xsrfc Mar 15 20:29:38.427: INFO: Got endpoints: latency-svc-xsrfc [697.740724ms] Mar 15 20:29:38.429: INFO: Created: latency-svc-cfnc9 Mar 15 20:29:38.442: INFO: Got endpoints: latency-svc-cfnc9 [682.330867ms] Mar 15 20:29:38.474: INFO: Created: latency-svc-n96dt Mar 15 20:29:38.509: INFO: Got endpoints: latency-svc-n96dt [719.493339ms] Mar 15 20:29:38.607: INFO: Created: latency-svc-vs6wn Mar 15 20:29:38.609: INFO: Got endpoints: latency-svc-vs6wn [777.254464ms] Mar 15 20:29:38.635: INFO: Created: latency-svc-8zjqf Mar 15 20:29:38.646: INFO: Got endpoints: latency-svc-8zjqf [781.073812ms] Mar 15 20:29:38.665: INFO: Created: latency-svc-sz2fk Mar 15 20:29:38.677: INFO: Got endpoints: latency-svc-sz2fk [771.177532ms] Mar 15 20:29:38.695: INFO: Created: latency-svc-b5v9f Mar 15 20:29:38.750: INFO: Got endpoints: latency-svc-b5v9f [766.025987ms] Mar 15 20:29:38.752: INFO: Created: latency-svc-c27mc Mar 15 20:29:38.755: INFO: Got endpoints: latency-svc-c27mc [760.607853ms] Mar 15 20:29:38.786: INFO: Created: latency-svc-btxgf Mar 15 20:29:38.791: INFO: Got endpoints: latency-svc-btxgf [748.990723ms] Mar 15 20:29:38.815: INFO: Created: latency-svc-cr89r Mar 15 20:29:38.822: INFO: Got endpoints: latency-svc-cr89r [694.990674ms] Mar 15 20:29:38.839: INFO: Created: latency-svc-9lnh9 Mar 15 20:29:38.906: INFO: Got endpoints: latency-svc-9lnh9 [754.04676ms] Mar 15 20:29:38.923: INFO: Created: latency-svc-hr5g6 Mar 15 20:29:38.937: INFO: Got endpoints: 
latency-svc-hr5g6 [749.123837ms] Mar 15 20:29:38.965: INFO: Created: latency-svc-slv97 Mar 15 20:29:39.003: INFO: Got endpoints: latency-svc-slv97 [731.568034ms] Mar 15 20:29:39.068: INFO: Created: latency-svc-nn55p Mar 15 20:29:39.081: INFO: Got endpoints: latency-svc-nn55p [801.740628ms] Mar 15 20:29:39.115: INFO: Created: latency-svc-dvvct Mar 15 20:29:39.157: INFO: Got endpoints: latency-svc-dvvct [829.883215ms] Mar 15 20:29:39.224: INFO: Created: latency-svc-rgvjp Mar 15 20:29:39.231: INFO: Got endpoints: latency-svc-rgvjp [804.506385ms] Mar 15 20:29:39.253: INFO: Created: latency-svc-2n7m4 Mar 15 20:29:39.262: INFO: Got endpoints: latency-svc-2n7m4 [819.768533ms] Mar 15 20:29:39.282: INFO: Created: latency-svc-fmv8x Mar 15 20:29:39.292: INFO: Got endpoints: latency-svc-fmv8x [782.429877ms] Mar 15 20:29:39.313: INFO: Created: latency-svc-qjmrc Mar 15 20:29:39.322: INFO: Got endpoints: latency-svc-qjmrc [713.033704ms] Mar 15 20:29:39.322: INFO: Latencies: [58.103081ms 87.945609ms 170.34625ms 190.573721ms 303.545794ms 352.508169ms 382.951409ms 488.594271ms 539.153785ms 615.618689ms 632.727381ms 645.401194ms 645.741304ms 649.605547ms 650.85286ms 656.877618ms 658.992344ms 668.272108ms 670.329254ms 671.349913ms 671.358118ms 671.44229ms 674.657646ms 674.88216ms 675.25848ms 675.813923ms 676.389538ms 682.330867ms 683.870687ms 687.402219ms 688.403471ms 694.990674ms 697.740724ms 698.34656ms 698.7392ms 700.473228ms 700.721053ms 702.639919ms 704.738482ms 704.848484ms 706.31152ms 712.178516ms 713.033704ms 715.351918ms 716.65157ms 717.735833ms 718.197332ms 718.484839ms 719.493339ms 720.052828ms 721.698209ms 722.532356ms 725.387395ms 725.480043ms 726.535324ms 727.979846ms 729.823156ms 729.96301ms 731.377326ms 731.568034ms 734.137625ms 734.262787ms 734.558019ms 735.98475ms 736.431928ms 736.825338ms 738.326168ms 740.79198ms 741.25997ms 743.599842ms 743.923409ms 744.183825ms 745.101413ms 746.185502ms 748.990723ms 749.123837ms 753.15119ms 754.04676ms 754.315305ms 755.687597ms 757.99501ms 760.192346ms 760.607853ms 760.745222ms 762.061126ms 765.994338ms 766.025987ms 766.193291ms 766.406654ms 766.484352ms 768.734841ms 770.430068ms 770.520763ms 771.177532ms 772.078006ms 772.985115ms 776.845163ms 777.254464ms 777.681254ms 777.966529ms 778.529087ms 780.010741ms 781.073812ms 781.936424ms 782.429877ms 784.439419ms 788.783064ms 792.884667ms 793.539302ms 793.61169ms 793.882463ms 795.982734ms 796.091096ms 796.520366ms 797.822621ms 801.740628ms 802.396762ms 802.72243ms 804.506385ms 804.983123ms 808.106463ms 809.009864ms 812.543274ms 818.788008ms 819.558228ms 819.768533ms 820.073538ms 825.683888ms 829.883215ms 830.219474ms 832.167642ms 832.496618ms 832.715395ms 834.002541ms 839.107585ms 840.743381ms 843.149454ms 843.170675ms 844.086778ms 851.425983ms 854.150783ms 861.920067ms 865.440531ms 865.803124ms 867.899887ms 867.997219ms 872.145993ms 873.296715ms 873.952735ms 875.148521ms 880.383252ms 880.65454ms 886.791531ms 888.499872ms 893.739706ms 894.37406ms 895.300917ms 899.428854ms 900.11572ms 913.232775ms 925.478711ms 927.179322ms 927.240167ms 934.379796ms 934.784764ms 936.001444ms 944.643645ms 947.953947ms 953.181155ms 955.773409ms 960.454267ms 968.457386ms 969.379743ms 969.484876ms 970.034449ms 976.063522ms 976.719996ms 981.436423ms 982.928024ms 987.370448ms 995.984307ms 1.003217167s 1.017875148s 1.037294769s 1.223341117s 1.542836265s 1.908787203s 1.915152676s 1.929542627s 1.931088228s 1.93788365s 1.93971312s 1.940198254s 1.944735055s 1.963989393s 1.969941275s 1.977745779s 1.999665094s 2.008731008s 2.030218678s] Mar 
15 20:29:39.322: INFO: 50 %ile: 778.529087ms Mar 15 20:29:39.322: INFO: 90 %ile: 995.984307ms Mar 15 20:29:39.322: INFO: 99 %ile: 2.008731008s Mar 15 20:29:39.322: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:29:39.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-fj4vj" for this suite. Mar 15 20:30:05.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:30:05.419: INFO: namespace: e2e-tests-svc-latency-fj4vj, resource: bindings, ignored listing per whitelist Mar 15 20:30:05.497: INFO: namespace e2e-tests-svc-latency-fj4vj deletion completed in 26.129954335s • [SLOW TEST:42.320 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:30:05.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-c16b21e2-66fb-11ea-9ccf-0242ac110012 STEP: Creating a pod to test consume secrets Mar 15 20:30:05.628: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c16d2044-66fb-11ea-9ccf-0242ac110012" in namespace "e2e-tests-projected-gfb7p" to be "success or failure" Mar 15 20:30:05.631: INFO: Pod "pod-projected-secrets-c16d2044-66fb-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 3.382203ms Mar 15 20:30:07.635: INFO: Pod "pod-projected-secrets-c16d2044-66fb-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007449137s Mar 15 20:30:09.640: INFO: Pod "pod-projected-secrets-c16d2044-66fb-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012059326s STEP: Saw pod success Mar 15 20:30:09.640: INFO: Pod "pod-projected-secrets-c16d2044-66fb-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 20:30:09.644: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-c16d2044-66fb-11ea-9ccf-0242ac110012 container projected-secret-volume-test: STEP: delete the pod Mar 15 20:30:09.759: INFO: Waiting for pod pod-projected-secrets-c16d2044-66fb-11ea-9ccf-0242ac110012 to disappear Mar 15 20:30:09.794: INFO: Pod pod-projected-secrets-c16d2044-66fb-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:30:09.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gfb7p" for this suite. Mar 15 20:30:15.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:30:15.917: INFO: namespace: e2e-tests-projected-gfb7p, resource: bindings, ignored listing per whitelist Mar 15 20:30:15.926: INFO: namespace e2e-tests-projected-gfb7p deletion completed in 6.128808406s • [SLOW TEST:10.428 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:30:15.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Mar 15 20:30:16.514: INFO: Waiting up to 5m0s for pod "client-containers-c7cdb717-66fb-11ea-9ccf-0242ac110012" in namespace "e2e-tests-containers-8sdkz" to be "success or failure" Mar 15 20:30:16.604: INFO: Pod "client-containers-c7cdb717-66fb-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 89.882399ms Mar 15 20:30:18.608: INFO: Pod "client-containers-c7cdb717-66fb-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093765917s Mar 15 20:30:20.612: INFO: Pod "client-containers-c7cdb717-66fb-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097693961s Mar 15 20:30:22.638: INFO: Pod "client-containers-c7cdb717-66fb-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123375317s Mar 15 20:30:24.642: INFO: Pod "client-containers-c7cdb717-66fb-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.127241456s Mar 15 20:30:26.646: INFO: Pod "client-containers-c7cdb717-66fb-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.131129022s STEP: Saw pod success Mar 15 20:30:26.646: INFO: Pod "client-containers-c7cdb717-66fb-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 20:30:26.649: INFO: Trying to get logs from node hunter-worker2 pod client-containers-c7cdb717-66fb-11ea-9ccf-0242ac110012 container test-container: STEP: delete the pod Mar 15 20:30:26.670: INFO: Waiting for pod client-containers-c7cdb717-66fb-11ea-9ccf-0242ac110012 to disappear Mar 15 20:30:26.672: INFO: Pod client-containers-c7cdb717-66fb-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:30:26.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-8sdkz" for this suite. Mar 15 20:30:32.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:30:32.706: INFO: namespace: e2e-tests-containers-8sdkz, resource: bindings, ignored listing per whitelist Mar 15 20:30:32.761: INFO: namespace e2e-tests-containers-8sdkz deletion completed in 6.085288118s • [SLOW TEST:16.835 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:30:32.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 15 20:30:38.906: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-d1a8a608-66fb-11ea-9ccf-0242ac110012,GenerateName:,Namespace:e2e-tests-events-msm66,SelfLink:/api/v1/namespaces/e2e-tests-events-msm66/pods/send-events-d1a8a608-66fb-11ea-9ccf-0242ac110012,UID:d1a93b16-66fb-11ea-99e8-0242ac110002,ResourceVersion:17974,Generation:0,CreationTimestamp:2020-03-15 20:30:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 853666610,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nn25s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nn25s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-nn25s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d19e10} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d19e30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:30:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:30:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:30:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:30:32 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.28,StartTime:2020-03-15 20:30:32 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-03-15 20:30:37 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://771a44541101efbb9e52248fd6088d824e6c41821b60c0d317d81bb565f3df6f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Mar 15 20:30:40.911: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 15 20:30:42.916: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:30:42.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-msm66" for this suite. 
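The scheduler and kubelet events this spec waits for can also be listed by hand. A minimal sketch, assuming a pod named send-events-example in the current namespace (the pod name is illustrative, not taken from this run):

    # The Scheduled event comes from the scheduler; Pulled/Created/Started come from the kubelet.
    kubectl get events \
      --field-selector involvedObject.kind=Pod,involvedObject.name=send-events-example \
      --sort-by=.lastTimestamp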
Mar 15 20:31:22.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:31:22.967: INFO: namespace: e2e-tests-events-msm66, resource: bindings, ignored listing per whitelist Mar 15 20:31:23.037: INFO: namespace e2e-tests-events-msm66 deletion completed in 40.093694302s • [SLOW TEST:50.276 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:31:23.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:31:27.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-9l88t" for this suite. 
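The /etc/hosts behaviour exercised by the hostAliases spec above can be reproduced with a small standalone pod; a sketch under assumed names (hostaliases-demo, a busybox image), none of which come from this run:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: hostaliases-demo
    spec:
      restartPolicy: Never
      hostAliases:                              # extra entries the kubelet writes into /etc/hosts
      - ip: "127.0.0.1"
        hostnames: ["foo.local", "bar.local"]
      containers:
      - name: host-aliases-test
        image: busybox
        command: ["cat", "/etc/hosts"]
    EOF
    kubectl logs hostaliases-demo               # should include a 127.0.0.1 foo.local bar.local line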
Mar 15 20:32:05.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:32:05.228: INFO: namespace: e2e-tests-kubelet-test-9l88t, resource: bindings, ignored listing per whitelist Mar 15 20:32:05.299: INFO: namespace e2e-tests-kubelet-test-9l88t deletion completed in 38.111502094s • [SLOW TEST:42.262 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:32:05.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-08d4a1de-66fc-11ea-9ccf-0242ac110012 STEP: Creating configMap with name cm-test-opt-upd-08d4a24c-66fc-11ea-9ccf-0242ac110012 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-08d4a1de-66fc-11ea-9ccf-0242ac110012 STEP: Updating configmap cm-test-opt-upd-08d4a24c-66fc-11ea-9ccf-0242ac110012 STEP: Creating configMap with name cm-test-opt-create-08d4a286-66fc-11ea-9ccf-0242ac110012 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:33:27.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9l7kq" for this suite. 
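The "optional" projected configMap source that the spec above deletes, updates, and recreates looks roughly like this in a pod spec; the pod, volume, and configMap names here are illustrative:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-cm-demo
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/cfg
      volumes:
      - name: cfg
        projected:
          sources:
          - configMap:
              name: demo-optional-config   # may not exist when the pod starts
              optional: true               # pod still runs; keys show up once the map is created
    EOF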
Mar 15 20:33:49.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:33:49.762: INFO: namespace: e2e-tests-projected-9l7kq, resource: bindings, ignored listing per whitelist Mar 15 20:33:49.913: INFO: namespace e2e-tests-projected-9l7kq deletion completed in 22.173270138s • [SLOW TEST:104.613 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:33:49.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-47711597-66fc-11ea-9ccf-0242ac110012 STEP: Creating configMap with name cm-test-opt-upd-47711610-66fc-11ea-9ccf-0242ac110012 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-47711597-66fc-11ea-9ccf-0242ac110012 STEP: Updating configmap cm-test-opt-upd-47711610-66fc-11ea-9ccf-0242ac110012 STEP: Creating configMap with name cm-test-opt-create-4771163d-66fc-11ea-9ccf-0242ac110012 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:35:10.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-rgg48" for this suite. 
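The update propagation that both of these specs wait for can be observed manually as well; a sketch assuming a configMap demo-config is already mounted as a volume at /etc/cfg inside a running pod demo-pod (all names illustrative):

    kubectl create configmap demo-config --from-literal=key=value-1
    # ... start demo-pod with demo-config mounted as a volume at /etc/cfg ...
    kubectl create configmap demo-config --from-literal=key=value-2 \
      -o yaml --dry-run | kubectl replace -f -
    # the kubelet refreshes configMap volumes periodically, so allow up to a minute:
    kubectl exec demo-pod -- cat /etc/cfg/key    # eventually prints value-2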
Mar 15 20:35:32.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:35:32.453: INFO: namespace: e2e-tests-configmap-rgg48, resource: bindings, ignored listing per whitelist Mar 15 20:35:32.514: INFO: namespace e2e-tests-configmap-rgg48 deletion completed in 22.098489341s • [SLOW TEST:102.600 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:35:32.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server Mar 15 20:35:32.594: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:35:32.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-zk66v" for this suite. 
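The proxy invocation above can be run by hand as well; --port 0 (or -p 0) asks kubectl to bind an ephemeral port, which it prints on startup:

    kubectl proxy -p 0 --disable-filter &
    # prints e.g. "Starting to serve on 127.0.0.1:37861"; curl the API root via that port:
    curl http://127.0.0.1:37861/api/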
Mar 15 20:35:38.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:35:38.742: INFO: namespace: e2e-tests-kubectl-zk66v, resource: bindings, ignored listing per whitelist Mar 15 20:35:38.791: INFO: namespace e2e-tests-kubectl-zk66v deletion completed in 6.101076343s • [SLOW TEST:6.278 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:35:38.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-gq7bc [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-gq7bc STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-gq7bc STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-gq7bc STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-gq7bc Mar 15 20:35:42.961: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-gq7bc, name: ss-0, uid: 898459ca-66fc-11ea-99e8-0242ac110002, status phase: Pending. Waiting for statefulset controller to delete. Mar 15 20:35:51.245: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-gq7bc, name: ss-0, uid: 898459ca-66fc-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. Mar 15 20:35:51.323: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-gq7bc, name: ss-0, uid: 898459ca-66fc-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. 
Mar 15 20:35:51.392: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-gq7bc STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-gq7bc STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-gq7bc and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 15 20:36:01.636: INFO: Deleting all statefulset in ns e2e-tests-statefulset-gq7bc Mar 15 20:36:01.639: INFO: Scaling statefulset ss to 0 Mar 15 20:36:11.657: INFO: Waiting for statefulset status.replicas updated to 0 Mar 15 20:36:11.660: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:36:11.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-gq7bc" for this suite. Mar 15 20:36:17.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:36:17.801: INFO: namespace: e2e-tests-statefulset-gq7bc, resource: bindings, ignored listing per whitelist Mar 15 20:36:17.817: INFO: namespace e2e-tests-statefulset-gq7bc deletion completed in 6.141546058s • [SLOW TEST:39.025 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:36:17.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-9f5fbcec-66fc-11ea-9ccf-0242ac110012 STEP: Creating a pod to test consume configMaps Mar 15 20:36:18.014: INFO: Waiting up to 5m0s for pod "pod-configmaps-9f605404-66fc-11ea-9ccf-0242ac110012" in namespace "e2e-tests-configmap-s84mk" to be "success or failure" Mar 15 20:36:18.024: INFO: Pod "pod-configmaps-9f605404-66fc-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 9.444604ms Mar 15 20:36:20.047: INFO: Pod "pod-configmaps-9f605404-66fc-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032282964s Mar 15 20:36:22.050: INFO: Pod "pod-configmaps-9f605404-66fc-11ea-9ccf-0242ac110012": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.036081807s Mar 15 20:36:24.054: INFO: Pod "pod-configmaps-9f605404-66fc-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.039886717s STEP: Saw pod success Mar 15 20:36:24.054: INFO: Pod "pod-configmaps-9f605404-66fc-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 20:36:24.056: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-9f605404-66fc-11ea-9ccf-0242ac110012 container configmap-volume-test: STEP: delete the pod Mar 15 20:36:24.072: INFO: Waiting for pod pod-configmaps-9f605404-66fc-11ea-9ccf-0242ac110012 to disappear Mar 15 20:36:24.109: INFO: Pod pod-configmaps-9f605404-66fc-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:36:24.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-s84mk" for this suite. Mar 15 20:36:30.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:36:30.396: INFO: namespace: e2e-tests-configmap-s84mk, resource: bindings, ignored listing per whitelist Mar 15 20:36:30.408: INFO: namespace e2e-tests-configmap-s84mk deletion completed in 6.296660925s • [SLOW TEST:12.591 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:36:30.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller Mar 15 20:36:30.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fn84c' Mar 15 20:36:33.118: INFO: stderr: "" Mar 15 20:36:33.118: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
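The per-pod checks that follow drive kubectl with go-template output; the same pattern works standalone to list the pods behind the test label and to ask whether a given container is running (namespace flags omitted here for brevity):

    kubectl get pods -l name=update-demo -o template \
      --template='{{range .items}}{{.metadata.name}} {{end}}'
    kubectl get pod update-demo-nautilus-8tdnj -o template \
      --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'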
Mar 15 20:36:33.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fn84c' Mar 15 20:36:33.239: INFO: stderr: "" Mar 15 20:36:33.239: INFO: stdout: "update-demo-nautilus-8tdnj update-demo-nautilus-qlvbl " Mar 15 20:36:33.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8tdnj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fn84c' Mar 15 20:36:33.327: INFO: stderr: "" Mar 15 20:36:33.327: INFO: stdout: "" Mar 15 20:36:33.327: INFO: update-demo-nautilus-8tdnj is created but not running Mar 15 20:36:38.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fn84c' Mar 15 20:36:38.433: INFO: stderr: "" Mar 15 20:36:38.433: INFO: stdout: "update-demo-nautilus-8tdnj update-demo-nautilus-qlvbl " Mar 15 20:36:38.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8tdnj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fn84c' Mar 15 20:36:38.609: INFO: stderr: "" Mar 15 20:36:38.609: INFO: stdout: "" Mar 15 20:36:38.609: INFO: update-demo-nautilus-8tdnj is created but not running Mar 15 20:36:43.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fn84c' Mar 15 20:36:43.711: INFO: stderr: "" Mar 15 20:36:43.711: INFO: stdout: "update-demo-nautilus-8tdnj update-demo-nautilus-qlvbl " Mar 15 20:36:43.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8tdnj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fn84c' Mar 15 20:36:43.817: INFO: stderr: "" Mar 15 20:36:43.817: INFO: stdout: "true" Mar 15 20:36:43.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8tdnj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fn84c' Mar 15 20:36:43.906: INFO: stderr: "" Mar 15 20:36:43.906: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 15 20:36:43.906: INFO: validating pod update-demo-nautilus-8tdnj Mar 15 20:36:43.910: INFO: got data: { "image": "nautilus.jpg" } Mar 15 20:36:43.910: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 15 20:36:43.910: INFO: update-demo-nautilus-8tdnj is verified up and running Mar 15 20:36:43.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qlvbl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fn84c' Mar 15 20:36:44.005: INFO: stderr: "" Mar 15 20:36:44.005: INFO: stdout: "true" Mar 15 20:36:44.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qlvbl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fn84c' Mar 15 20:36:44.098: INFO: stderr: "" Mar 15 20:36:44.098: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 15 20:36:44.098: INFO: validating pod update-demo-nautilus-qlvbl Mar 15 20:36:44.102: INFO: got data: { "image": "nautilus.jpg" } Mar 15 20:36:44.102: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 15 20:36:44.102: INFO: update-demo-nautilus-qlvbl is verified up and running STEP: rolling-update to new replication controller Mar 15 20:36:44.105: INFO: scanned /root for discovery docs: Mar 15 20:36:44.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-fn84c' Mar 15 20:37:08.961: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 15 20:37:08.961: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 15 20:37:08.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fn84c' Mar 15 20:37:09.141: INFO: stderr: "" Mar 15 20:37:09.141: INFO: stdout: "update-demo-kitten-96j8p update-demo-kitten-bclnj " Mar 15 20:37:09.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-96j8p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fn84c' Mar 15 20:37:09.238: INFO: stderr: "" Mar 15 20:37:09.238: INFO: stdout: "true" Mar 15 20:37:09.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-96j8p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fn84c' Mar 15 20:37:09.339: INFO: stderr: "" Mar 15 20:37:09.339: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 15 20:37:09.339: INFO: validating pod update-demo-kitten-96j8p Mar 15 20:37:09.343: INFO: got data: { "image": "kitten.jpg" } Mar 15 20:37:09.343: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . 
Mar 15 20:37:09.343: INFO: update-demo-kitten-96j8p is verified up and running Mar 15 20:37:09.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-bclnj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fn84c' Mar 15 20:37:09.438: INFO: stderr: "" Mar 15 20:37:09.438: INFO: stdout: "true" Mar 15 20:37:09.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-bclnj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fn84c' Mar 15 20:37:09.547: INFO: stderr: "" Mar 15 20:37:09.547: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 15 20:37:09.547: INFO: validating pod update-demo-kitten-bclnj Mar 15 20:37:09.551: INFO: got data: { "image": "kitten.jpg" } Mar 15 20:37:09.551: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 15 20:37:09.551: INFO: update-demo-kitten-bclnj is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:37:09.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-fn84c" for this suite. Mar 15 20:37:33.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:37:33.581: INFO: namespace: e2e-tests-kubectl-fn84c, resource: bindings, ignored listing per whitelist Mar 15 20:37:33.652: INFO: namespace e2e-tests-kubectl-fn84c deletion completed in 24.096605575s • [SLOW TEST:63.243 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:37:33.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Mar 15 20:37:33.819: INFO: Waiting up to 5m0s for pod "var-expansion-cc920961-66fc-11ea-9ccf-0242ac110012" in namespace "e2e-tests-var-expansion-7qppt" to be "success or failure" Mar 15 20:37:33.827: INFO: Pod "var-expansion-cc920961-66fc-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.923791ms Mar 15 20:37:35.865: INFO: Pod "var-expansion-cc920961-66fc-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045510753s Mar 15 20:37:37.868: INFO: Pod "var-expansion-cc920961-66fc-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049040463s STEP: Saw pod success Mar 15 20:37:37.868: INFO: Pod "var-expansion-cc920961-66fc-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 20:37:37.871: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-cc920961-66fc-11ea-9ccf-0242ac110012 container dapi-container: STEP: delete the pod Mar 15 20:37:37.926: INFO: Waiting for pod var-expansion-cc920961-66fc-11ea-9ccf-0242ac110012 to disappear Mar 15 20:37:37.941: INFO: Pod var-expansion-cc920961-66fc-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:37:37.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-7qppt" for this suite. Mar 15 20:37:43.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:37:43.974: INFO: namespace: e2e-tests-var-expansion-7qppt, resource: bindings, ignored listing per whitelist Mar 15 20:37:44.034: INFO: namespace e2e-tests-var-expansion-7qppt deletion completed in 6.089814474s • [SLOW TEST:10.382 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:37:44.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-d2bcdd7d-66fc-11ea-9ccf-0242ac110012 STEP: Creating a pod to test consume secrets Mar 15 20:37:44.171: INFO: Waiting up to 5m0s for pod "pod-secrets-d2bd55e7-66fc-11ea-9ccf-0242ac110012" in namespace "e2e-tests-secrets-58jcq" to be "success or failure" Mar 15 20:37:44.194: INFO: Pod "pod-secrets-d2bd55e7-66fc-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 22.700933ms Mar 15 20:37:46.200: INFO: Pod "pod-secrets-d2bd55e7-66fc-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029048949s Mar 15 20:37:48.204: INFO: Pod "pod-secrets-d2bd55e7-66fc-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.032427719s STEP: Saw pod success Mar 15 20:37:48.204: INFO: Pod "pod-secrets-d2bd55e7-66fc-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 20:37:48.206: INFO: Trying to get logs from node hunter-worker pod pod-secrets-d2bd55e7-66fc-11ea-9ccf-0242ac110012 container secret-volume-test: STEP: delete the pod Mar 15 20:37:48.236: INFO: Waiting for pod pod-secrets-d2bd55e7-66fc-11ea-9ccf-0242ac110012 to disappear Mar 15 20:37:48.242: INFO: Pod pod-secrets-d2bd55e7-66fc-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:37:48.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-58jcq" for this suite. Mar 15 20:37:54.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:37:54.267: INFO: namespace: e2e-tests-secrets-58jcq, resource: bindings, ignored listing per whitelist Mar 15 20:37:54.328: INFO: namespace e2e-tests-secrets-58jcq deletion completed in 6.084603084s • [SLOW TEST:10.295 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:37:54.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium Mar 15 20:37:54.741: INFO: Waiting up to 5m0s for pod "pod-d903cede-66fc-11ea-9ccf-0242ac110012" in namespace "e2e-tests-emptydir-czchs" to be "success or failure" Mar 15 20:37:54.757: INFO: Pod "pod-d903cede-66fc-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 16.134227ms Mar 15 20:37:56.842: INFO: Pod "pod-d903cede-66fc-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100513315s Mar 15 20:37:58.846: INFO: Pod "pod-d903cede-66fc-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.104534604s STEP: Saw pod success Mar 15 20:37:58.846: INFO: Pod "pod-d903cede-66fc-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 20:37:58.849: INFO: Trying to get logs from node hunter-worker2 pod pod-d903cede-66fc-11ea-9ccf-0242ac110012 container test-container: STEP: delete the pod Mar 15 20:37:58.932: INFO: Waiting for pod pod-d903cede-66fc-11ea-9ccf-0242ac110012 to disappear Mar 15 20:37:58.965: INFO: Pod pod-d903cede-66fc-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:37:58.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-czchs" for this suite. Mar 15 20:38:04.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:38:05.095: INFO: namespace: e2e-tests-emptydir-czchs, resource: bindings, ignored listing per whitelist Mar 15 20:38:05.127: INFO: namespace e2e-tests-emptydir-czchs deletion completed in 6.157775712s • [SLOW TEST:10.798 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:38:05.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 15 20:38:05.404: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df65be19-66fc-11ea-9ccf-0242ac110012" in namespace "e2e-tests-downward-api-xlhwm" to be "success or failure" Mar 15 20:38:05.627: INFO: Pod "downwardapi-volume-df65be19-66fc-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 222.283866ms Mar 15 20:38:07.630: INFO: Pod "downwardapi-volume-df65be19-66fc-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225994934s Mar 15 20:38:09.650: INFO: Pod "downwardapi-volume-df65be19-66fc-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 4.245811025s Mar 15 20:38:11.692: INFO: Pod "downwardapi-volume-df65be19-66fc-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 6.287643374s Mar 15 20:38:13.722: INFO: Pod "downwardapi-volume-df65be19-66fc-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.317541203s STEP: Saw pod success Mar 15 20:38:13.722: INFO: Pod "downwardapi-volume-df65be19-66fc-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 20:38:13.724: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-df65be19-66fc-11ea-9ccf-0242ac110012 container client-container: STEP: delete the pod Mar 15 20:38:14.134: INFO: Waiting for pod downwardapi-volume-df65be19-66fc-11ea-9ccf-0242ac110012 to disappear Mar 15 20:38:14.326: INFO: Pod downwardapi-volume-df65be19-66fc-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:38:14.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-xlhwm" for this suite. Mar 15 20:38:20.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:38:20.472: INFO: namespace: e2e-tests-downward-api-xlhwm, resource: bindings, ignored listing per whitelist Mar 15 20:38:20.504: INFO: namespace e2e-tests-downward-api-xlhwm deletion completed in 6.174051021s • [SLOW TEST:15.377 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:38:20.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode Mar 15 20:38:20.741: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-vwbhf" to be "success or failure" Mar 15 20:38:20.986: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 244.527261ms Mar 15 20:38:22.990: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.248198768s Mar 15 20:38:25.003: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.261736366s Mar 15 20:38:27.112: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.370509335s Mar 15 20:38:29.232: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.490116406s Mar 15 20:38:31.236: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.494101464s STEP: Saw pod success Mar 15 20:38:31.236: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Mar 15 20:38:31.239: INFO: Trying to get logs from node hunter-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 15 20:38:31.388: INFO: Waiting for pod pod-host-path-test to disappear Mar 15 20:38:31.410: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:38:31.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-vwbhf" for this suite. Mar 15 20:38:39.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:38:39.816: INFO: namespace: e2e-tests-hostpath-vwbhf, resource: bindings, ignored listing per whitelist Mar 15 20:38:39.854: INFO: namespace e2e-tests-hostpath-vwbhf deletion completed in 8.440002112s • [SLOW TEST:19.350 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:38:39.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 15 20:38:41.073: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f47f6c76-66fc-11ea-9ccf-0242ac110012" in namespace "e2e-tests-projected-4z4pd" to be "success or failure" Mar 15 20:38:41.111: INFO: Pod "downwardapi-volume-f47f6c76-66fc-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 37.978517ms Mar 15 20:38:43.538: INFO: Pod "downwardapi-volume-f47f6c76-66fc-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.46529241s Mar 15 20:38:45.542: INFO: Pod "downwardapi-volume-f47f6c76-66fc-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 4.469303987s Mar 15 20:38:47.545: INFO: Pod "downwardapi-volume-f47f6c76-66fc-11ea-9ccf-0242ac110012": Phase="Running", Reason="", readiness=true. Elapsed: 6.472043733s Mar 15 20:38:49.549: INFO: Pod "downwardapi-volume-f47f6c76-66fc-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.476077017s STEP: Saw pod success Mar 15 20:38:49.549: INFO: Pod "downwardapi-volume-f47f6c76-66fc-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 20:38:49.551: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-f47f6c76-66fc-11ea-9ccf-0242ac110012 container client-container: STEP: delete the pod Mar 15 20:38:49.606: INFO: Waiting for pod downwardapi-volume-f47f6c76-66fc-11ea-9ccf-0242ac110012 to disappear Mar 15 20:38:49.716: INFO: Pod downwardapi-volume-f47f6c76-66fc-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:38:49.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-4z4pd" for this suite. Mar 15 20:38:55.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:38:55.807: INFO: namespace: e2e-tests-projected-4z4pd, resource: bindings, ignored listing per whitelist Mar 15 20:38:55.840: INFO: namespace e2e-tests-projected-4z4pd deletion completed in 6.121432453s • [SLOW TEST:15.986 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:38:55.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-fd86ca97-66fc-11ea-9ccf-0242ac110012 Mar 15 20:38:55.967: INFO: Pod name my-hostname-basic-fd86ca97-66fc-11ea-9ccf-0242ac110012: Found 0 pods out of 1 Mar 15 20:39:00.971: INFO: Pod name my-hostname-basic-fd86ca97-66fc-11ea-9ccf-0242ac110012: Found 1 pods out of 1 Mar 15 20:39:00.971: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-fd86ca97-66fc-11ea-9ccf-0242ac110012" are running Mar 15 20:39:00.975: INFO: Pod "my-hostname-basic-fd86ca97-66fc-11ea-9ccf-0242ac110012-tgd97" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-15 20:38:56 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-15 20:38:59 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-15 20:38:59 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-15 20:38:55 +0000 UTC Reason: Message:}]) Mar 15 20:39:00.975: INFO: 
Trying to dial the pod Mar 15 20:39:05.987: INFO: Controller my-hostname-basic-fd86ca97-66fc-11ea-9ccf-0242ac110012: Got expected result from replica 1 [my-hostname-basic-fd86ca97-66fc-11ea-9ccf-0242ac110012-tgd97]: "my-hostname-basic-fd86ca97-66fc-11ea-9ccf-0242ac110012-tgd97", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:39:05.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-75p9c" for this suite. Mar 15 20:39:12.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:39:12.070: INFO: namespace: e2e-tests-replication-controller-75p9c, resource: bindings, ignored listing per whitelist Mar 15 20:39:12.117: INFO: namespace e2e-tests-replication-controller-75p9c deletion completed in 6.12591816s • [SLOW TEST:16.277 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:39:12.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-0739c8d6-66fd-11ea-9ccf-0242ac110012 STEP: Creating a pod to test consume configMaps Mar 15 20:39:12.270: INFO: Waiting up to 5m0s for pod "pod-configmaps-073a6daf-66fd-11ea-9ccf-0242ac110012" in namespace "e2e-tests-configmap-wfs5m" to be "success or failure" Mar 15 20:39:12.273: INFO: Pod "pod-configmaps-073a6daf-66fd-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 3.153404ms Mar 15 20:39:14.310: INFO: Pod "pod-configmaps-073a6daf-66fd-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040182604s Mar 15 20:39:16.314: INFO: Pod "pod-configmaps-073a6daf-66fd-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.04433619s STEP: Saw pod success Mar 15 20:39:16.314: INFO: Pod "pod-configmaps-073a6daf-66fd-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 20:39:16.317: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-073a6daf-66fd-11ea-9ccf-0242ac110012 container configmap-volume-test: STEP: delete the pod Mar 15 20:39:16.364: INFO: Waiting for pod pod-configmaps-073a6daf-66fd-11ea-9ccf-0242ac110012 to disappear Mar 15 20:39:16.424: INFO: Pod pod-configmaps-073a6daf-66fd-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:39:16.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-wfs5m" for this suite. Mar 15 20:39:22.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:39:22.522: INFO: namespace: e2e-tests-configmap-wfs5m, resource: bindings, ignored listing per whitelist Mar 15 20:39:22.567: INFO: namespace e2e-tests-configmap-wfs5m deletion completed in 6.139200803s • [SLOW TEST:10.450 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:39:22.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-n6kv STEP: Creating a pod to test atomic-volume-subpath Mar 15 20:39:22.785: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-n6kv" in namespace "e2e-tests-subpath-q7m6s" to be "success or failure" Mar 15 20:39:23.017: INFO: Pod "pod-subpath-test-configmap-n6kv": Phase="Pending", Reason="", readiness=false. Elapsed: 232.055487ms Mar 15 20:39:25.020: INFO: Pod "pod-subpath-test-configmap-n6kv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235228214s Mar 15 20:39:27.023: INFO: Pod "pod-subpath-test-configmap-n6kv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.238102862s Mar 15 20:39:29.027: INFO: Pod "pod-subpath-test-configmap-n6kv": Phase="Running", Reason="", readiness=false. Elapsed: 6.241614474s Mar 15 20:39:31.029: INFO: Pod "pod-subpath-test-configmap-n6kv": Phase="Running", Reason="", readiness=false. Elapsed: 8.244327348s Mar 15 20:39:33.033: INFO: Pod "pod-subpath-test-configmap-n6kv": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.248267666s Mar 15 20:39:35.037: INFO: Pod "pod-subpath-test-configmap-n6kv": Phase="Running", Reason="", readiness=false. Elapsed: 12.251539048s Mar 15 20:39:37.039: INFO: Pod "pod-subpath-test-configmap-n6kv": Phase="Running", Reason="", readiness=false. Elapsed: 14.254270306s Mar 15 20:39:39.042: INFO: Pod "pod-subpath-test-configmap-n6kv": Phase="Running", Reason="", readiness=false. Elapsed: 16.257459941s Mar 15 20:39:41.047: INFO: Pod "pod-subpath-test-configmap-n6kv": Phase="Running", Reason="", readiness=false. Elapsed: 18.26156587s Mar 15 20:39:43.051: INFO: Pod "pod-subpath-test-configmap-n6kv": Phase="Running", Reason="", readiness=false. Elapsed: 20.265692251s Mar 15 20:39:45.055: INFO: Pod "pod-subpath-test-configmap-n6kv": Phase="Running", Reason="", readiness=false. Elapsed: 22.270021473s Mar 15 20:39:47.059: INFO: Pod "pod-subpath-test-configmap-n6kv": Phase="Running", Reason="", readiness=false. Elapsed: 24.274234866s Mar 15 20:39:49.063: INFO: Pod "pod-subpath-test-configmap-n6kv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.278199199s STEP: Saw pod success Mar 15 20:39:49.063: INFO: Pod "pod-subpath-test-configmap-n6kv" satisfied condition "success or failure" Mar 15 20:39:49.066: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-n6kv container test-container-subpath-configmap-n6kv: STEP: delete the pod Mar 15 20:39:49.106: INFO: Waiting for pod pod-subpath-test-configmap-n6kv to disappear Mar 15 20:39:49.124: INFO: Pod pod-subpath-test-configmap-n6kv no longer exists STEP: Deleting pod pod-subpath-test-configmap-n6kv Mar 15 20:39:49.124: INFO: Deleting pod "pod-subpath-test-configmap-n6kv" in namespace "e2e-tests-subpath-q7m6s" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:39:49.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-q7m6s" for this suite. 
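The subpath run above keeps pod-subpath-test-configmap-n6kv Running for ~26s while a single ConfigMap key is mounted through a subPath and repeatedly re-read. The following is only a minimal sketch of that kind of pod spec, not the suite's actual fixture; it assumes a recent k8s.io/api module, and the ConfigMap name, key, and command are illustrative.

// Sketch: mounting one ConfigMap key via subPath (assumed names, not the e2e fixture).
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func subpathPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-subpath-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						// Hypothetical ConfigMap name for illustration.
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "reader",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/config/key.txt"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "config",
					MountPath: "/etc/config/key.txt",
					SubPath:   "key.txt", // mounts only this key, not the whole volume
				}},
			}},
		},
	}
}

func main() { fmt.Printf("%+v\n", subpathPod()) }
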
Mar 15 20:39:55.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:39:55.261: INFO: namespace: e2e-tests-subpath-q7m6s, resource: bindings, ignored listing per whitelist Mar 15 20:39:55.279: INFO: namespace e2e-tests-subpath-q7m6s deletion completed in 6.149800604s • [SLOW TEST:32.712 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:39:55.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 15 20:40:09.579: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 15 20:40:09.682: INFO: Pod pod-with-poststart-http-hook still exists Mar 15 20:40:11.682: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 15 20:40:11.686: INFO: Pod pod-with-poststart-http-hook still exists Mar 15 20:40:13.682: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 15 20:40:13.687: INFO: Pod pod-with-poststart-http-hook still exists Mar 15 20:40:15.682: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 15 20:40:15.686: INFO: Pod pod-with-poststart-http-hook still exists Mar 15 20:40:17.682: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 15 20:40:17.687: INFO: Pod pod-with-poststart-http-hook still exists Mar 15 20:40:19.682: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 15 20:40:19.687: INFO: Pod pod-with-poststart-http-hook still exists Mar 15 20:40:21.682: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 15 20:40:21.687: INFO: Pod pod-with-poststart-http-hook still exists Mar 15 20:40:23.682: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 15 20:40:23.686: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:40:23.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-qvbz8" 
for this suite. Mar 15 20:40:45.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:40:45.733: INFO: namespace: e2e-tests-container-lifecycle-hook-qvbz8, resource: bindings, ignored listing per whitelist Mar 15 20:40:45.778: INFO: namespace e2e-tests-container-lifecycle-hook-qvbz8 deletion completed in 22.087346768s • [SLOW TEST:50.498 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:40:45.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Mar 15 20:40:45.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wwsnh' Mar 15 20:40:46.137: INFO: stderr: "" Mar 15 20:40:46.138: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 15 20:40:46.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wwsnh' Mar 15 20:40:46.235: INFO: stderr: "" Mar 15 20:40:46.236: INFO: stdout: "update-demo-nautilus-7bs74 update-demo-nautilus-cbv5f " Mar 15 20:40:46.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7bs74 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wwsnh' Mar 15 20:40:46.327: INFO: stderr: "" Mar 15 20:40:46.327: INFO: stdout: "" Mar 15 20:40:46.327: INFO: update-demo-nautilus-7bs74 is created but not running Mar 15 20:40:51.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wwsnh' Mar 15 20:40:51.436: INFO: stderr: "" Mar 15 20:40:51.436: INFO: stdout: "update-demo-nautilus-7bs74 update-demo-nautilus-cbv5f " Mar 15 20:40:51.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7bs74 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wwsnh' Mar 15 20:40:51.526: INFO: stderr: "" Mar 15 20:40:51.526: INFO: stdout: "true" Mar 15 20:40:51.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7bs74 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wwsnh' Mar 15 20:40:51.634: INFO: stderr: "" Mar 15 20:40:51.634: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 15 20:40:51.634: INFO: validating pod update-demo-nautilus-7bs74 Mar 15 20:40:51.639: INFO: got data: { "image": "nautilus.jpg" } Mar 15 20:40:51.639: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 15 20:40:51.639: INFO: update-demo-nautilus-7bs74 is verified up and running Mar 15 20:40:51.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cbv5f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wwsnh' Mar 15 20:40:51.746: INFO: stderr: "" Mar 15 20:40:51.746: INFO: stdout: "true" Mar 15 20:40:51.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cbv5f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wwsnh' Mar 15 20:40:51.835: INFO: stderr: "" Mar 15 20:40:51.835: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 15 20:40:51.835: INFO: validating pod update-demo-nautilus-cbv5f Mar 15 20:40:51.840: INFO: got data: { "image": "nautilus.jpg" } Mar 15 20:40:51.840: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 15 20:40:51.840: INFO: update-demo-nautilus-cbv5f is verified up and running STEP: using delete to clean up resources Mar 15 20:40:51.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wwsnh' Mar 15 20:40:51.948: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 15 20:40:51.948: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 15 20:40:51.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-wwsnh' Mar 15 20:40:52.047: INFO: stderr: "No resources found.\n" Mar 15 20:40:52.048: INFO: stdout: "" Mar 15 20:40:52.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-wwsnh -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 15 20:40:52.153: INFO: stderr: "" Mar 15 20:40:52.153: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:40:52.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-wwsnh" for this suite. Mar 15 20:41:14.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:41:14.267: INFO: namespace: e2e-tests-kubectl-wwsnh, resource: bindings, ignored listing per whitelist Mar 15 20:41:14.272: INFO: namespace e2e-tests-kubectl-wwsnh deletion completed in 22.116500881s • [SLOW TEST:28.494 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:41:14.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Mar 15 20:41:14.407: INFO: Waiting up to 5m0s for pod "downward-api-50084cc9-66fd-11ea-9ccf-0242ac110012" in namespace "e2e-tests-downward-api-nhf7f" to be "success or failure" Mar 15 20:41:14.410: INFO: Pod "downward-api-50084cc9-66fd-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 3.221813ms Mar 15 20:41:16.414: INFO: Pod "downward-api-50084cc9-66fd-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007258448s Mar 15 20:41:18.418: INFO: Pod "downward-api-50084cc9-66fd-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011267014s STEP: Saw pod success Mar 15 20:41:18.418: INFO: Pod "downward-api-50084cc9-66fd-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 20:41:18.421: INFO: Trying to get logs from node hunter-worker pod downward-api-50084cc9-66fd-11ea-9ccf-0242ac110012 container dapi-container: STEP: delete the pod Mar 15 20:41:18.592: INFO: Waiting for pod downward-api-50084cc9-66fd-11ea-9ccf-0242ac110012 to disappear Mar 15 20:41:18.647: INFO: Pod downward-api-50084cc9-66fd-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:41:18.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-nhf7f" for this suite. Mar 15 20:41:24.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:41:24.975: INFO: namespace: e2e-tests-downward-api-nhf7f, resource: bindings, ignored listing per whitelist Mar 15 20:41:25.011: INFO: namespace e2e-tests-downward-api-nhf7f deletion completed in 6.358458976s • [SLOW TEST:10.738 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:41:25.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 15 20:41:43.239: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 15 20:41:43.261: INFO: Pod pod-with-prestop-exec-hook still exists Mar 15 20:41:45.262: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 15 20:41:45.266: INFO: Pod pod-with-prestop-exec-hook still exists Mar 15 20:41:47.262: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 15 20:41:47.266: INFO: Pod pod-with-prestop-exec-hook still exists Mar 15 20:41:49.262: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 15 20:41:49.524: INFO: Pod pod-with-prestop-exec-hook still exists Mar 15 20:41:51.262: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 15 20:41:51.266: INFO: Pod pod-with-prestop-exec-hook still exists Mar 15 20:41:53.262: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 15 20:41:53.267: INFO: Pod pod-with-prestop-exec-hook still exists Mar 15 20:41:55.262: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 15 20:41:55.265: INFO: Pod pod-with-prestop-exec-hook still exists Mar 15 20:41:57.262: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 15 20:41:57.288: INFO: Pod pod-with-prestop-exec-hook still exists Mar 15 20:41:59.262: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 15 20:41:59.324: INFO: Pod pod-with-prestop-exec-hook still exists Mar 15 20:42:01.262: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 15 20:42:01.294: INFO: Pod pod-with-prestop-exec-hook still exists Mar 15 20:42:03.262: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 15 20:42:03.319: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:42:03.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-wkbl2" for this suite. 
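The prestop run above deletes pod-with-prestop-exec-hook and then polls until it disappears, which gives the kubelet time to run the preStop handler before the container is killed. Below is a hedged sketch of a container carrying such a hook; it assumes a recent k8s.io/api module (older releases, including the v1.13 tree under test, named the handler type Handler rather than LifecycleHandler), and the hook command is illustrative only.

// Sketch: a preStop exec hook (assumed image and command, not the e2e fixture).
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func preStopPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 3600"},
				Lifecycle: &corev1.Lifecycle{
					// Runs inside the container before the kubelet sends SIGTERM;
					// the e2e variant instead calls back to a helper pod to prove the hook ran.
					PreStop: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "echo prestop > /tmp/hook-ran"},
						},
					},
				},
			}},
		},
	}
}

func main() { fmt.Printf("%+v\n", preStopPod()) }
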
Mar 15 20:42:25.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:42:25.591: INFO: namespace: e2e-tests-container-lifecycle-hook-wkbl2, resource: bindings, ignored listing per whitelist Mar 15 20:42:25.645: INFO: namespace e2e-tests-container-lifecycle-hook-wkbl2 deletion completed in 22.315640945s • [SLOW TEST:60.634 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:42:25.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-7a938287-66fd-11ea-9ccf-0242ac110012 STEP: Creating a pod to test consume configMaps Mar 15 20:42:25.775: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7a96b843-66fd-11ea-9ccf-0242ac110012" in namespace "e2e-tests-projected-qtqxf" to be "success or failure" Mar 15 20:42:25.797: INFO: Pod "pod-projected-configmaps-7a96b843-66fd-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 21.515723ms Mar 15 20:42:27.821: INFO: Pod "pod-projected-configmaps-7a96b843-66fd-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045540988s Mar 15 20:42:29.825: INFO: Pod "pod-projected-configmaps-7a96b843-66fd-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049522316s STEP: Saw pod success Mar 15 20:42:29.825: INFO: Pod "pod-projected-configmaps-7a96b843-66fd-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 20:42:29.828: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-7a96b843-66fd-11ea-9ccf-0242ac110012 container projected-configmap-volume-test: STEP: delete the pod Mar 15 20:42:29.847: INFO: Waiting for pod pod-projected-configmaps-7a96b843-66fd-11ea-9ccf-0242ac110012 to disappear Mar 15 20:42:29.851: INFO: Pod pod-projected-configmaps-7a96b843-66fd-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:42:29.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qtqxf" for this suite. 
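The projected-configMap run that finishes above consumes a ConfigMap through a projected volume rather than a plain configMap volume. As a rough sketch only (assuming a recent k8s.io/api module; the ConfigMap name, mount path, and command are illustrative), such a pod looks like the following; a projected volume is useful because it can merge configMaps, secrets, downwardAPI, and serviceAccountToken sources into one mount point.

// Sketch: consuming a ConfigMap via a projected volume (assumed names).
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func projectedConfigMapPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								// Hypothetical ConfigMap name for illustration.
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-example"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/projected/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-volume",
					MountPath: "/etc/projected",
					ReadOnly:  true,
				}},
			}},
		},
	}
}

func main() { fmt.Printf("%+v\n", projectedConfigMapPod()) }
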
Mar 15 20:42:35.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:42:35.877: INFO: namespace: e2e-tests-projected-qtqxf, resource: bindings, ignored listing per whitelist Mar 15 20:42:35.939: INFO: namespace e2e-tests-projected-qtqxf deletion completed in 6.084626863s • [SLOW TEST:10.293 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:42:35.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-gn7lq Mar 15 20:42:42.091: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-gn7lq STEP: checking the pod's current state and verifying that restartCount is present Mar 15 20:42:42.095: INFO: Initial restart count of pod liveness-http is 0 Mar 15 20:42:58.128: INFO: Restart count of pod e2e-tests-container-probe-gn7lq/liveness-http is now 1 (16.033300923s elapsed) Mar 15 20:43:16.161: INFO: Restart count of pod e2e-tests-container-probe-gn7lq/liveness-http is now 2 (34.066922033s elapsed) Mar 15 20:43:36.198: INFO: Restart count of pod e2e-tests-container-probe-gn7lq/liveness-http is now 3 (54.103550639s elapsed) Mar 15 20:43:56.240: INFO: Restart count of pod e2e-tests-container-probe-gn7lq/liveness-http is now 4 (1m14.145460441s elapsed) Mar 15 20:44:57.006: INFO: Restart count of pod e2e-tests-container-probe-gn7lq/liveness-http is now 5 (2m14.911011934s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:44:57.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-gn7lq" for this suite. 
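The liveness run above shows the intended behaviour directly in the log: each time the HTTP liveness probe fails, the kubelet kills and restarts the container, and status.containerStatuses[0].restartCount climbs 1, 2, 3, 4, 5 without ever decreasing. A minimal sketch of a pod with that shape follows; it assumes a recent k8s.io/api module (where Probe embeds ProbeHandler), and the image, path, and timings are illustrative rather than the suite's own.

// Sketch: an HTTP liveness probe whose failures cause restarts (assumed image and timings).
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func livenessHTTPPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "k8s.gcr.io/liveness", // illustrative image that starts failing /healthz after a while
				Args:  []string{"/server"},
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(8080),
						},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       3,
					FailureThreshold:    1,
				},
			}},
		},
	}
}

func main() { fmt.Printf("%+v\n", livenessHTTPPod()) }
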
Mar 15 20:45:03.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:45:03.401: INFO: namespace: e2e-tests-container-probe-gn7lq, resource: bindings, ignored listing per whitelist Mar 15 20:45:03.414: INFO: namespace e2e-tests-container-probe-gn7lq deletion completed in 6.380175181s • [SLOW TEST:147.475 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:45:03.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token Mar 15 20:45:04.273: INFO: Waiting up to 5m0s for pod "pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-xvv6k" in namespace "e2e-tests-svcaccounts-tpvnr" to be "success or failure" Mar 15 20:45:04.304: INFO: Pod "pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-xvv6k": Phase="Pending", Reason="", readiness=false. Elapsed: 30.556979ms Mar 15 20:45:06.776: INFO: Pod "pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-xvv6k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.502776305s Mar 15 20:45:08.780: INFO: Pod "pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-xvv6k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.50698399s Mar 15 20:45:10.832: INFO: Pod "pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-xvv6k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.558803098s Mar 15 20:45:12.836: INFO: Pod "pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-xvv6k": Phase="Running", Reason="", readiness=false. Elapsed: 8.562829745s Mar 15 20:45:14.968: INFO: Pod "pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-xvv6k": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.69493699s STEP: Saw pod success Mar 15 20:45:14.968: INFO: Pod "pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-xvv6k" satisfied condition "success or failure" Mar 15 20:45:15.141: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-xvv6k container token-test: STEP: delete the pod Mar 15 20:45:15.310: INFO: Waiting for pod pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-xvv6k to disappear Mar 15 20:45:15.322: INFO: Pod pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-xvv6k no longer exists STEP: Creating a pod to test consume service account root CA Mar 15 20:45:15.325: INFO: Waiting up to 5m0s for pod "pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-j86jp" in namespace "e2e-tests-svcaccounts-tpvnr" to be "success or failure" Mar 15 20:45:15.335: INFO: Pod "pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-j86jp": Phase="Pending", Reason="", readiness=false. Elapsed: 9.595271ms Mar 15 20:45:17.465: INFO: Pod "pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-j86jp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140283614s Mar 15 20:45:19.468: INFO: Pod "pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-j86jp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143409734s Mar 15 20:45:21.472: INFO: Pod "pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-j86jp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.147176188s Mar 15 20:45:23.537: INFO: Pod "pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-j86jp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.211632793s STEP: Saw pod success Mar 15 20:45:23.537: INFO: Pod "pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-j86jp" satisfied condition "success or failure" Mar 15 20:45:23.540: INFO: Trying to get logs from node hunter-worker pod pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-j86jp container root-ca-test: STEP: delete the pod Mar 15 20:45:23.843: INFO: Waiting for pod pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-j86jp to disappear Mar 15 20:45:23.866: INFO: Pod pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-j86jp no longer exists STEP: Creating a pod to test consume service account namespace Mar 15 20:45:23.871: INFO: Waiting up to 5m0s for pod "pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-zm9br" in namespace "e2e-tests-svcaccounts-tpvnr" to be "success or failure" Mar 15 20:45:23.908: INFO: Pod "pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-zm9br": Phase="Pending", Reason="", readiness=false. Elapsed: 37.279622ms Mar 15 20:45:25.912: INFO: Pod "pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-zm9br": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041370686s Mar 15 20:45:27.916: INFO: Pod "pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-zm9br": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045095171s Mar 15 20:45:29.926: INFO: Pod "pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-zm9br": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055094183s Mar 15 20:45:32.448: INFO: Pod "pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-zm9br": Phase="Pending", Reason="", readiness=false. Elapsed: 8.577472051s Mar 15 20:45:34.452: INFO: Pod "pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-zm9br": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.58107883s STEP: Saw pod success Mar 15 20:45:34.452: INFO: Pod "pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-zm9br" satisfied condition "success or failure" Mar 15 20:45:34.455: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-zm9br container namespace-test: STEP: delete the pod Mar 15 20:45:34.724: INFO: Waiting for pod pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-zm9br to disappear Mar 15 20:45:34.778: INFO: Pod pod-service-account-d90fbc8a-66fd-11ea-9ccf-0242ac110012-zm9br no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:45:34.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-tpvnr" for this suite. Mar 15 20:45:40.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:45:40.983: INFO: namespace: e2e-tests-svcaccounts-tpvnr, resource: bindings, ignored listing per whitelist Mar 15 20:45:40.990: INFO: namespace e2e-tests-svcaccounts-tpvnr deletion completed in 6.208973377s • [SLOW TEST:37.576 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:45:40.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-ef02ac54-66fd-11ea-9ccf-0242ac110012 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:45:49.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-5q8bx" for this suite. 
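The ConfigMap binary-data run that ends above relies on ConfigMap having two payload fields: Data for UTF-8 text and BinaryData for raw bytes, both of which are written out verbatim as files when the ConfigMap is mounted as a volume. A small hedged sketch of such an object (key names and byte values are illustrative; assumes a recent k8s.io/api module):

// Sketch: a ConfigMap carrying both text and binary payloads (assumed keys).
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func mixedConfigMap() *corev1.ConfigMap {
	return &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-binary-example"},
		Data: map[string]string{
			"data-1": "value-1", // lands as a UTF-8 text file in the volume
		},
		BinaryData: map[string][]byte{
			// Arbitrary non-UTF-8 bytes; served byte-for-byte to the mounted file.
			"dump.bin": {0xff, 0xfe, 0xfd, 0x00, 0x42},
		},
	}
}

func main() { fmt.Printf("%+v\n", mixedConfigMap()) }
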
Mar 15 20:46:11.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:46:11.567: INFO: namespace: e2e-tests-configmap-5q8bx, resource: bindings, ignored listing per whitelist Mar 15 20:46:11.573: INFO: namespace e2e-tests-configmap-5q8bx deletion completed in 22.416238252s • [SLOW TEST:30.583 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:46:11.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:47:11.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-gsxkp" for this suite. 
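The readiness run above is the counterpart to the liveness case: a readiness probe that always fails leaves the pod permanently not Ready and removed from Service endpoints, but the kubelet never restarts it, so restartCount stays at 0 for the whole 60s observation window. A minimal sketch under the same assumptions as before (recent k8s.io/api; image and probe command are illustrative):

// Sketch: an exec readiness probe that always fails, so the pod is never Ready and never restarted.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func neverReadyPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "never-ready-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 3600"},
				ReadinessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}}, // always non-zero exit
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
}

func main() { fmt.Printf("%+v\n", neverReadyPod()) }
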
Mar 15 20:47:37.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:47:37.934: INFO: namespace: e2e-tests-container-probe-gsxkp, resource: bindings, ignored listing per whitelist Mar 15 20:47:37.934: INFO: namespace e2e-tests-container-probe-gsxkp deletion completed in 26.136624425s • [SLOW TEST:86.361 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:47:37.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 15 20:47:38.306: INFO: Waiting up to 5m0s for pod "pod-34d98e9c-66fe-11ea-9ccf-0242ac110012" in namespace "e2e-tests-emptydir-z69hf" to be "success or failure" Mar 15 20:47:38.311: INFO: Pod "pod-34d98e9c-66fe-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 4.398827ms Mar 15 20:47:40.315: INFO: Pod "pod-34d98e9c-66fe-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0081546s Mar 15 20:47:42.318: INFO: Pod "pod-34d98e9c-66fe-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01195963s STEP: Saw pod success Mar 15 20:47:42.318: INFO: Pod "pod-34d98e9c-66fe-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 20:47:42.321: INFO: Trying to get logs from node hunter-worker2 pod pod-34d98e9c-66fe-11ea-9ccf-0242ac110012 container test-container: STEP: delete the pod Mar 15 20:47:42.457: INFO: Waiting for pod pod-34d98e9c-66fe-11ea-9ccf-0242ac110012 to disappear Mar 15 20:47:42.676: INFO: Pod pod-34d98e9c-66fe-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:47:42.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-z69hf" for this suite. 
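The emptyDir test above writes a file with mode 0666 into a memory-backed (tmpfs) emptyDir volume as root and verifies the permissions and filesystem type from inside the container. A rough equivalent of the pod it creates, using hypothetical names and an assumed busybox image in place of the e2e mounttest image:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-tmpfs           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                    # assumed image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f && mount | grep test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                  # tmpfs-backed emptyDir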
Mar 15 20:47:50.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:47:50.713: INFO: namespace: e2e-tests-emptydir-z69hf, resource: bindings, ignored listing per whitelist Mar 15 20:47:50.762: INFO: namespace e2e-tests-emptydir-z69hf deletion completed in 8.083264034s • [SLOW TEST:12.828 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:47:50.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-xjqcq STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 15 20:47:51.199: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 15 20:48:19.481: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.50:8080/dial?request=hostName&protocol=udp&host=10.244.1.46&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-xjqcq PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 20:48:19.481: INFO: >>> kubeConfig: /root/.kube/config I0315 20:48:19.510418 6 log.go:172] (0xc000a0b970) (0xc0025e86e0) Create stream I0315 20:48:19.510452 6 log.go:172] (0xc000a0b970) (0xc0025e86e0) Stream added, broadcasting: 1 I0315 20:48:19.512273 6 log.go:172] (0xc000a0b970) Reply frame received for 1 I0315 20:48:19.512302 6 log.go:172] (0xc000a0b970) (0xc0022d05a0) Create stream I0315 20:48:19.512325 6 log.go:172] (0xc000a0b970) (0xc0022d05a0) Stream added, broadcasting: 3 I0315 20:48:19.513106 6 log.go:172] (0xc000a0b970) Reply frame received for 3 I0315 20:48:19.513267 6 log.go:172] (0xc000a0b970) (0xc0022d06e0) Create stream I0315 20:48:19.513287 6 log.go:172] (0xc000a0b970) (0xc0022d06e0) Stream added, broadcasting: 5 I0315 20:48:19.514011 6 log.go:172] (0xc000a0b970) Reply frame received for 5 I0315 20:48:19.576208 6 log.go:172] (0xc000a0b970) Data frame received for 3 I0315 20:48:19.576231 6 log.go:172] (0xc0022d05a0) (3) Data frame handling I0315 20:48:19.576242 6 log.go:172] (0xc0022d05a0) (3) Data frame sent I0315 20:48:19.576777 6 log.go:172] (0xc000a0b970) Data frame received for 3 I0315 20:48:19.576800 6 log.go:172] (0xc0022d05a0) (3) Data frame handling I0315 20:48:19.576899 6 log.go:172] (0xc000a0b970) Data frame received for 5 I0315 20:48:19.576953 6 log.go:172] (0xc0022d06e0) (5) Data frame handling I0315 20:48:19.578604 6 log.go:172] (0xc000a0b970) Data frame received for 1 
I0315 20:48:19.578632 6 log.go:172] (0xc0025e86e0) (1) Data frame handling I0315 20:48:19.578649 6 log.go:172] (0xc0025e86e0) (1) Data frame sent I0315 20:48:19.578662 6 log.go:172] (0xc000a0b970) (0xc0025e86e0) Stream removed, broadcasting: 1 I0315 20:48:19.578673 6 log.go:172] (0xc000a0b970) Go away received I0315 20:48:19.578802 6 log.go:172] (0xc000a0b970) (0xc0025e86e0) Stream removed, broadcasting: 1 I0315 20:48:19.578817 6 log.go:172] (0xc000a0b970) (0xc0022d05a0) Stream removed, broadcasting: 3 I0315 20:48:19.578833 6 log.go:172] (0xc000a0b970) (0xc0022d06e0) Stream removed, broadcasting: 5 Mar 15 20:48:19.578: INFO: Waiting for endpoints: map[] Mar 15 20:48:19.581: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.50:8080/dial?request=hostName&protocol=udp&host=10.244.2.49&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-xjqcq PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 20:48:19.581: INFO: >>> kubeConfig: /root/.kube/config I0315 20:48:19.606246 6 log.go:172] (0xc001176370) (0xc00225a500) Create stream I0315 20:48:19.606283 6 log.go:172] (0xc001176370) (0xc00225a500) Stream added, broadcasting: 1 I0315 20:48:19.608669 6 log.go:172] (0xc001176370) Reply frame received for 1 I0315 20:48:19.608698 6 log.go:172] (0xc001176370) (0xc000e9c000) Create stream I0315 20:48:19.608709 6 log.go:172] (0xc001176370) (0xc000e9c000) Stream added, broadcasting: 3 I0315 20:48:19.609521 6 log.go:172] (0xc001176370) Reply frame received for 3 I0315 20:48:19.609544 6 log.go:172] (0xc001176370) (0xc00225a5a0) Create stream I0315 20:48:19.609555 6 log.go:172] (0xc001176370) (0xc00225a5a0) Stream added, broadcasting: 5 I0315 20:48:19.610263 6 log.go:172] (0xc001176370) Reply frame received for 5 I0315 20:48:19.673037 6 log.go:172] (0xc001176370) Data frame received for 3 I0315 20:48:19.673071 6 log.go:172] (0xc000e9c000) (3) Data frame handling I0315 20:48:19.673093 6 log.go:172] (0xc000e9c000) (3) Data frame sent I0315 20:48:19.673692 6 log.go:172] (0xc001176370) Data frame received for 5 I0315 20:48:19.673716 6 log.go:172] (0xc00225a5a0) (5) Data frame handling I0315 20:48:19.673874 6 log.go:172] (0xc001176370) Data frame received for 3 I0315 20:48:19.673904 6 log.go:172] (0xc000e9c000) (3) Data frame handling I0315 20:48:19.675263 6 log.go:172] (0xc001176370) Data frame received for 1 I0315 20:48:19.675284 6 log.go:172] (0xc00225a500) (1) Data frame handling I0315 20:48:19.675299 6 log.go:172] (0xc00225a500) (1) Data frame sent I0315 20:48:19.675359 6 log.go:172] (0xc001176370) (0xc00225a500) Stream removed, broadcasting: 1 I0315 20:48:19.675409 6 log.go:172] (0xc001176370) Go away received I0315 20:48:19.675469 6 log.go:172] (0xc001176370) (0xc00225a500) Stream removed, broadcasting: 1 I0315 20:48:19.675498 6 log.go:172] (0xc001176370) (0xc000e9c000) Stream removed, broadcasting: 3 I0315 20:48:19.675524 6 log.go:172] (0xc001176370) (0xc00225a5a0) Stream removed, broadcasting: 5 Mar 15 20:48:19.675: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:48:19.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-xjqcq" for this suite. 
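In the intra-pod UDP check above, the curl against /dial asks a webserver pod to send a UDP hostName query to each target pod and report which hostnames answered; the "Waiting for endpoints: map[]" lines indicate that no expected endpoint is still outstanding. A rough sketch of one target pod in that topology is shown below; the pod name, label, image tag, and arguments are assumptions for illustration and are not read from this log.

apiVersion: v1
kind: Pod
metadata:
  name: netserver-0                   # hypothetical name
  labels:
    selector-key: net-test            # assumed label matched by the test's selector
spec:
  containers:
  - name: webserver
    image: gcr.io/kubernetes-e2e-test-images/netexec:1.1   # assumed image/tag of the netexec test server
    args: ["--http-port=8080", "--udp-port=8081"]          # assumed flags: HTTP /dial on 8080, UDP echo on 8081
    ports:
    - containerPort: 8080
      protocol: TCP
    - containerPort: 8081
      protocol: UDP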
Mar 15 20:48:41.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:48:41.983: INFO: namespace: e2e-tests-pod-network-test-xjqcq, resource: bindings, ignored listing per whitelist Mar 15 20:48:42.001: INFO: namespace e2e-tests-pod-network-test-xjqcq deletion completed in 22.321809326s • [SLOW TEST:51.238 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:48:42.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 15 20:48:42.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-qhvpn' Mar 15 20:48:46.001: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 15 20:48:46.001: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Mar 15 20:48:46.048: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-6xrvt] Mar 15 20:48:46.048: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-6xrvt" in namespace "e2e-tests-kubectl-qhvpn" to be "running and ready" Mar 15 20:48:46.067: INFO: Pod "e2e-test-nginx-rc-6xrvt": Phase="Pending", Reason="", readiness=false. Elapsed: 18.987617ms Mar 15 20:48:48.071: INFO: Pod "e2e-test-nginx-rc-6xrvt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022766844s Mar 15 20:48:50.075: INFO: Pod "e2e-test-nginx-rc-6xrvt": Phase="Running", Reason="", readiness=true. Elapsed: 4.02655937s Mar 15 20:48:50.075: INFO: Pod "e2e-test-nginx-rc-6xrvt" satisfied condition "running and ready" Mar 15 20:48:50.075: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-6xrvt] Mar 15 20:48:50.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-qhvpn' Mar 15 20:48:50.190: INFO: stderr: "" Mar 15 20:48:50.190: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 Mar 15 20:48:50.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-qhvpn' Mar 15 20:48:50.303: INFO: stderr: "" Mar 15 20:48:50.303: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:48:50.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-qhvpn" for this suite. Mar 15 20:49:14.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:49:14.413: INFO: namespace: e2e-tests-kubectl-qhvpn, resource: bindings, ignored listing per whitelist Mar 15 20:49:14.444: INFO: namespace e2e-tests-kubectl-qhvpn deletion completed in 24.137600552s • [SLOW TEST:32.443 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:49:14.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-zn24m [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Mar 15 20:49:15.125: INFO: Found 0 stateful pods, waiting for 3 Mar 15 20:49:25.151: INFO: Found 1 stateful pods, waiting for 3 Mar 15 20:49:35.131: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 15 20:49:35.131: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 15 20:49:35.131: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Mar 15 20:49:45.130: INFO: Waiting for pod ss2-0 to enter Running - Ready=true,
currently Running - Ready=true Mar 15 20:49:45.130: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 15 20:49:45.130: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Mar 15 20:49:45.157: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 15 20:49:55.223: INFO: Updating stateful set ss2 Mar 15 20:49:55.371: INFO: Waiting for Pod e2e-tests-statefulset-zn24m/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Mar 15 20:50:06.065: INFO: Found 2 stateful pods, waiting for 3 Mar 15 20:50:16.070: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 15 20:50:16.070: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 15 20:50:16.070: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 15 20:50:16.095: INFO: Updating stateful set ss2 Mar 15 20:50:16.101: INFO: Waiting for Pod e2e-tests-statefulset-zn24m/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 15 20:50:26.122: INFO: Updating stateful set ss2 Mar 15 20:50:26.173: INFO: Waiting for StatefulSet e2e-tests-statefulset-zn24m/ss2 to complete update Mar 15 20:50:26.173: INFO: Waiting for Pod e2e-tests-statefulset-zn24m/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 15 20:50:36.180: INFO: Deleting all statefulset in ns e2e-tests-statefulset-zn24m Mar 15 20:50:36.182: INFO: Scaling statefulset ss2 to 0 Mar 15 20:50:56.389: INFO: Waiting for statefulset status.replicas updated to 0 Mar 15 20:50:56.392: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:50:56.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-zn24m" for this suite. 
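The canary and phased rolling updates above are driven by the partition field of the RollingUpdate strategy: only pods whose ordinal is greater than or equal to the partition are moved to the new template revision, so setting the partition to the highest ordinal updates a single canary pod, and lowering it step by step phases the roll-out across the rest. A minimal sketch of the relevant spec, with hypothetical names (the nginx tag matches the updated image mentioned in the log):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2-example                   # hypothetical name
spec:
  serviceName: test
  replicas: 3
  selector:
    matchLabels:
      app: ss2-example
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2                    # canary: only ordinal >= 2 (the ss2-2 pod) gets the new revision
  template:
    metadata:
      labels:
        app: ss2-example
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine   # updated from nginx:1.14-alpine, as in the log

Lowering the partition from 2 to 1 and then to 0 (or removing it) completes the phased roll-out in ordinal order, which is the sequence of revision waits visible in the log.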
Mar 15 20:51:04.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:51:04.516: INFO: namespace: e2e-tests-statefulset-zn24m, resource: bindings, ignored listing per whitelist Mar 15 20:51:04.526: INFO: namespace e2e-tests-statefulset-zn24m deletion completed in 8.105551802s • [SLOW TEST:110.081 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:51:04.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components Mar 15 20:51:04.716: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Mar 15 20:51:04.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nlrsj' Mar 15 20:51:05.079: INFO: stderr: "" Mar 15 20:51:05.079: INFO: stdout: "service/redis-slave created\n" Mar 15 20:51:05.079: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Mar 15 20:51:05.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nlrsj' Mar 15 20:51:05.924: INFO: stderr: "" Mar 15 20:51:05.924: INFO: stdout: "service/redis-master created\n" Mar 15 20:51:05.924: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Mar 15 20:51:05.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nlrsj' Mar 15 20:51:06.756: INFO: stderr: "" Mar 15 20:51:06.756: INFO: stdout: "service/frontend created\n" Mar 15 20:51:06.756: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Mar 15 20:51:06.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nlrsj' Mar 15 20:51:07.006: INFO: stderr: "" Mar 15 20:51:07.006: INFO: stdout: "deployment.extensions/frontend created\n" Mar 15 20:51:07.006: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 15 20:51:07.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nlrsj' Mar 15 20:51:07.755: INFO: stderr: "" Mar 15 20:51:07.755: INFO: stdout: "deployment.extensions/redis-master created\n" Mar 15 20:51:07.755: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Mar 15 20:51:07.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nlrsj' Mar 15 20:51:08.207: INFO: stderr: "" Mar 15 20:51:08.207: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app Mar 15 20:51:08.207: INFO: Waiting for all frontend pods to be Running. Mar 15 20:51:18.258: INFO: Waiting for frontend to serve content. Mar 15 20:51:18.275: INFO: Trying to add a new entry to the guestbook. Mar 15 20:51:18.290: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 15 20:51:18.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nlrsj' Mar 15 20:51:18.569: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 15 20:51:18.569: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Mar 15 20:51:18.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nlrsj' Mar 15 20:51:18.990: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 15 20:51:18.990: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Mar 15 20:51:18.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nlrsj' Mar 15 20:51:19.115: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 15 20:51:19.115: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 15 20:51:19.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nlrsj' Mar 15 20:51:19.290: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 15 20:51:19.290: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 15 20:51:19.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nlrsj' Mar 15 20:51:19.664: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 15 20:51:19.664: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Mar 15 20:51:19.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nlrsj' Mar 15 20:51:20.060: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 15 20:51:20.060: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:51:20.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-nlrsj" for this suite. 
Mar 15 20:52:02.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:52:02.360: INFO: namespace: e2e-tests-kubectl-nlrsj, resource: bindings, ignored listing per whitelist Mar 15 20:52:02.406: INFO: namespace e2e-tests-kubectl-nlrsj deletion completed in 42.319585121s • [SLOW TEST:57.880 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:52:02.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace Mar 15 20:52:08.648: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:52:32.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-q7dwm" for this suite. Mar 15 20:52:38.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:52:38.792: INFO: namespace: e2e-tests-namespaces-q7dwm, resource: bindings, ignored listing per whitelist Mar 15 20:52:38.846: INFO: namespace e2e-tests-namespaces-q7dwm deletion completed in 6.096983676s STEP: Destroying namespace "e2e-tests-nsdeletetest-m4sx6" for this suite. Mar 15 20:52:38.849: INFO: Namespace e2e-tests-nsdeletetest-m4sx6 was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-kgvpd" for this suite. 
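The namespace test above amounts to creating a pod inside a throwaway namespace, deleting the namespace, and verifying that the pod is garbage-collected along with it. A minimal sketch of that setup, with hypothetical names and an assumed pause image:

apiVersion: v1
kind: Namespace
metadata:
  name: nsdeletetest-example          # hypothetical name
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: nsdeletetest-example
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1       # assumed image; any long-running container works

Deleting the namespace (for example, kubectl delete namespace nsdeletetest-example) removes the pod with it, and a recreated namespace of the same name starts out empty.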
Mar 15 20:52:44.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:52:44.914: INFO: namespace: e2e-tests-nsdeletetest-kgvpd, resource: bindings, ignored listing per whitelist Mar 15 20:52:44.945: INFO: namespace e2e-tests-nsdeletetest-kgvpd deletion completed in 6.096012173s • [SLOW TEST:42.539 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:52:44.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-mj7tv [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-mj7tv STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-mj7tv Mar 15 20:52:45.061: INFO: Found 0 stateful pods, waiting for 1 Mar 15 20:52:55.066: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 15 20:52:55.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mj7tv ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 15 20:52:55.347: INFO: stderr: "I0315 20:52:55.207508 1412 log.go:172] (0xc0001546e0) (0xc00061f4a0) Create stream\nI0315 20:52:55.207575 1412 log.go:172] (0xc0001546e0) (0xc00061f4a0) Stream added, broadcasting: 1\nI0315 20:52:55.211081 1412 log.go:172] (0xc0001546e0) Reply frame received for 1\nI0315 20:52:55.211132 1412 log.go:172] (0xc0001546e0) (0xc0002fa000) Create stream\nI0315 20:52:55.211146 1412 log.go:172] (0xc0001546e0) (0xc0002fa000) Stream added, broadcasting: 3\nI0315 20:52:55.212176 1412 log.go:172] (0xc0001546e0) Reply frame received for 3\nI0315 20:52:55.212221 1412 log.go:172] (0xc0001546e0) (0xc000398000) Create stream\nI0315 20:52:55.212231 1412 log.go:172] (0xc0001546e0) (0xc000398000) Stream added, broadcasting: 5\nI0315 20:52:55.213364 1412 log.go:172] (0xc0001546e0) Reply frame received for 5\nI0315 20:52:55.341484 1412 log.go:172] (0xc0001546e0) Data frame received for 5\nI0315 20:52:55.341670 1412 log.go:172] (0xc000398000) (5) Data 
frame handling\nI0315 20:52:55.341811 1412 log.go:172] (0xc0001546e0) Data frame received for 3\nI0315 20:52:55.341845 1412 log.go:172] (0xc0002fa000) (3) Data frame handling\nI0315 20:52:55.341878 1412 log.go:172] (0xc0002fa000) (3) Data frame sent\nI0315 20:52:55.341895 1412 log.go:172] (0xc0001546e0) Data frame received for 3\nI0315 20:52:55.341921 1412 log.go:172] (0xc0002fa000) (3) Data frame handling\nI0315 20:52:55.344134 1412 log.go:172] (0xc0001546e0) Data frame received for 1\nI0315 20:52:55.344163 1412 log.go:172] (0xc00061f4a0) (1) Data frame handling\nI0315 20:52:55.344183 1412 log.go:172] (0xc00061f4a0) (1) Data frame sent\nI0315 20:52:55.344200 1412 log.go:172] (0xc0001546e0) (0xc00061f4a0) Stream removed, broadcasting: 1\nI0315 20:52:55.344366 1412 log.go:172] (0xc0001546e0) Go away received\nI0315 20:52:55.344391 1412 log.go:172] (0xc0001546e0) (0xc00061f4a0) Stream removed, broadcasting: 1\nI0315 20:52:55.344427 1412 log.go:172] (0xc0001546e0) (0xc0002fa000) Stream removed, broadcasting: 3\nI0315 20:52:55.344444 1412 log.go:172] (0xc0001546e0) (0xc000398000) Stream removed, broadcasting: 5\n" Mar 15 20:52:55.347: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 15 20:52:55.347: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 15 20:52:55.350: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 15 20:53:05.354: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 15 20:53:05.354: INFO: Waiting for statefulset status.replicas updated to 0 Mar 15 20:53:05.381: INFO: POD NODE PHASE GRACE CONDITIONS Mar 15 20:53:05.381: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:52:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:52:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:52:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:52:45 +0000 UTC }] Mar 15 20:53:05.382: INFO: Mar 15 20:53:05.382: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 15 20:53:06.387: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.980213746s Mar 15 20:53:07.391: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.97476541s Mar 15 20:53:08.396: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.970353796s Mar 15 20:53:09.399: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.9660742s Mar 15 20:53:10.404: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.962326753s Mar 15 20:53:11.410: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.956948718s Mar 15 20:53:12.415: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.951960812s Mar 15 20:53:13.419: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.946949295s Mar 15 20:53:14.425: INFO: Verifying statefulset ss doesn't scale past 3 for another 942.25073ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-mj7tv Mar 15 20:53:15.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mj7tv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' 
Mar 15 20:53:15.645: INFO: stderr: "I0315 20:53:15.562375 1434 log.go:172] (0xc000138630) (0xc00001d5e0) Create stream\nI0315 20:53:15.562435 1434 log.go:172] (0xc000138630) (0xc00001d5e0) Stream added, broadcasting: 1\nI0315 20:53:15.564648 1434 log.go:172] (0xc000138630) Reply frame received for 1\nI0315 20:53:15.564693 1434 log.go:172] (0xc000138630) (0xc000270000) Create stream\nI0315 20:53:15.564709 1434 log.go:172] (0xc000138630) (0xc000270000) Stream added, broadcasting: 3\nI0315 20:53:15.565590 1434 log.go:172] (0xc000138630) Reply frame received for 3\nI0315 20:53:15.565625 1434 log.go:172] (0xc000138630) (0xc00001d680) Create stream\nI0315 20:53:15.565632 1434 log.go:172] (0xc000138630) (0xc00001d680) Stream added, broadcasting: 5\nI0315 20:53:15.566343 1434 log.go:172] (0xc000138630) Reply frame received for 5\nI0315 20:53:15.640137 1434 log.go:172] (0xc000138630) Data frame received for 5\nI0315 20:53:15.640193 1434 log.go:172] (0xc000138630) Data frame received for 3\nI0315 20:53:15.640239 1434 log.go:172] (0xc000270000) (3) Data frame handling\nI0315 20:53:15.640264 1434 log.go:172] (0xc000270000) (3) Data frame sent\nI0315 20:53:15.640282 1434 log.go:172] (0xc000138630) Data frame received for 3\nI0315 20:53:15.640299 1434 log.go:172] (0xc000270000) (3) Data frame handling\nI0315 20:53:15.640320 1434 log.go:172] (0xc00001d680) (5) Data frame handling\nI0315 20:53:15.642007 1434 log.go:172] (0xc000138630) Data frame received for 1\nI0315 20:53:15.642035 1434 log.go:172] (0xc00001d5e0) (1) Data frame handling\nI0315 20:53:15.642049 1434 log.go:172] (0xc00001d5e0) (1) Data frame sent\nI0315 20:53:15.642063 1434 log.go:172] (0xc000138630) (0xc00001d5e0) Stream removed, broadcasting: 1\nI0315 20:53:15.642090 1434 log.go:172] (0xc000138630) Go away received\nI0315 20:53:15.642332 1434 log.go:172] (0xc000138630) (0xc00001d5e0) Stream removed, broadcasting: 1\nI0315 20:53:15.642356 1434 log.go:172] (0xc000138630) (0xc000270000) Stream removed, broadcasting: 3\nI0315 20:53:15.642368 1434 log.go:172] (0xc000138630) (0xc00001d680) Stream removed, broadcasting: 5\n" Mar 15 20:53:15.645: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 15 20:53:15.645: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 15 20:53:15.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mj7tv ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 15 20:53:15.845: INFO: stderr: "I0315 20:53:15.769028 1457 log.go:172] (0xc0008642c0) (0xc00063f360) Create stream\nI0315 20:53:15.769088 1457 log.go:172] (0xc0008642c0) (0xc00063f360) Stream added, broadcasting: 1\nI0315 20:53:15.771481 1457 log.go:172] (0xc0008642c0) Reply frame received for 1\nI0315 20:53:15.771630 1457 log.go:172] (0xc0008642c0) (0xc00062c000) Create stream\nI0315 20:53:15.771647 1457 log.go:172] (0xc0008642c0) (0xc00062c000) Stream added, broadcasting: 3\nI0315 20:53:15.772555 1457 log.go:172] (0xc0008642c0) Reply frame received for 3\nI0315 20:53:15.772597 1457 log.go:172] (0xc0008642c0) (0xc00063f400) Create stream\nI0315 20:53:15.772611 1457 log.go:172] (0xc0008642c0) (0xc00063f400) Stream added, broadcasting: 5\nI0315 20:53:15.773639 1457 log.go:172] (0xc0008642c0) Reply frame received for 5\nI0315 20:53:15.839388 1457 log.go:172] (0xc0008642c0) Data frame received for 5\nI0315 20:53:15.839440 1457 log.go:172] (0xc00063f400) (5) Data frame 
handling\nI0315 20:53:15.839463 1457 log.go:172] (0xc00063f400) (5) Data frame sent\nI0315 20:53:15.839481 1457 log.go:172] (0xc0008642c0) Data frame received for 5\nI0315 20:53:15.839496 1457 log.go:172] (0xc00063f400) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0315 20:53:15.839522 1457 log.go:172] (0xc0008642c0) Data frame received for 3\nI0315 20:53:15.839560 1457 log.go:172] (0xc00062c000) (3) Data frame handling\nI0315 20:53:15.839596 1457 log.go:172] (0xc00062c000) (3) Data frame sent\nI0315 20:53:15.839613 1457 log.go:172] (0xc0008642c0) Data frame received for 3\nI0315 20:53:15.839623 1457 log.go:172] (0xc00062c000) (3) Data frame handling\nI0315 20:53:15.841993 1457 log.go:172] (0xc0008642c0) Data frame received for 1\nI0315 20:53:15.842014 1457 log.go:172] (0xc00063f360) (1) Data frame handling\nI0315 20:53:15.842025 1457 log.go:172] (0xc00063f360) (1) Data frame sent\nI0315 20:53:15.842052 1457 log.go:172] (0xc0008642c0) (0xc00063f360) Stream removed, broadcasting: 1\nI0315 20:53:15.842075 1457 log.go:172] (0xc0008642c0) Go away received\nI0315 20:53:15.842362 1457 log.go:172] (0xc0008642c0) (0xc00063f360) Stream removed, broadcasting: 1\nI0315 20:53:15.842395 1457 log.go:172] (0xc0008642c0) (0xc00062c000) Stream removed, broadcasting: 3\nI0315 20:53:15.842417 1457 log.go:172] (0xc0008642c0) (0xc00063f400) Stream removed, broadcasting: 5\n" Mar 15 20:53:15.845: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 15 20:53:15.845: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 15 20:53:15.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mj7tv ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 15 20:53:16.044: INFO: stderr: "I0315 20:53:15.975034 1479 log.go:172] (0xc000138790) (0xc0008be5a0) Create stream\nI0315 20:53:15.975133 1479 log.go:172] (0xc000138790) (0xc0008be5a0) Stream added, broadcasting: 1\nI0315 20:53:15.984663 1479 log.go:172] (0xc000138790) Reply frame received for 1\nI0315 20:53:15.984724 1479 log.go:172] (0xc000138790) (0xc0005ded20) Create stream\nI0315 20:53:15.984737 1479 log.go:172] (0xc000138790) (0xc0005ded20) Stream added, broadcasting: 3\nI0315 20:53:15.986994 1479 log.go:172] (0xc000138790) Reply frame received for 3\nI0315 20:53:15.987018 1479 log.go:172] (0xc000138790) (0xc0008be640) Create stream\nI0315 20:53:15.987026 1479 log.go:172] (0xc000138790) (0xc0008be640) Stream added, broadcasting: 5\nI0315 20:53:15.988772 1479 log.go:172] (0xc000138790) Reply frame received for 5\nI0315 20:53:16.039331 1479 log.go:172] (0xc000138790) Data frame received for 5\nI0315 20:53:16.039354 1479 log.go:172] (0xc0008be640) (5) Data frame handling\nI0315 20:53:16.039378 1479 log.go:172] (0xc000138790) Data frame received for 3\nmv: can't rename '/tmp/index.html': No such file or directory\nI0315 20:53:16.039419 1479 log.go:172] (0xc0005ded20) (3) Data frame handling\nI0315 20:53:16.039440 1479 log.go:172] (0xc0005ded20) (3) Data frame sent\nI0315 20:53:16.039454 1479 log.go:172] (0xc000138790) Data frame received for 3\nI0315 20:53:16.039461 1479 log.go:172] (0xc0005ded20) (3) Data frame handling\nI0315 20:53:16.039499 1479 log.go:172] (0xc0008be640) (5) Data frame sent\nI0315 20:53:16.039531 1479 log.go:172] (0xc000138790) Data frame received for 5\nI0315 20:53:16.039546 1479 log.go:172] (0xc0008be640) (5) Data 
frame handling\nI0315 20:53:16.041294 1479 log.go:172] (0xc000138790) Data frame received for 1\nI0315 20:53:16.041322 1479 log.go:172] (0xc0008be5a0) (1) Data frame handling\nI0315 20:53:16.041332 1479 log.go:172] (0xc0008be5a0) (1) Data frame sent\nI0315 20:53:16.041341 1479 log.go:172] (0xc000138790) (0xc0008be5a0) Stream removed, broadcasting: 1\nI0315 20:53:16.041353 1479 log.go:172] (0xc000138790) Go away received\nI0315 20:53:16.041672 1479 log.go:172] (0xc000138790) (0xc0008be5a0) Stream removed, broadcasting: 1\nI0315 20:53:16.041698 1479 log.go:172] (0xc000138790) (0xc0005ded20) Stream removed, broadcasting: 3\nI0315 20:53:16.041711 1479 log.go:172] (0xc000138790) (0xc0008be640) Stream removed, broadcasting: 5\n" Mar 15 20:53:16.044: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 15 20:53:16.044: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 15 20:53:16.048: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Mar 15 20:53:26.054: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 15 20:53:26.054: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 15 20:53:26.054: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 15 20:53:26.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mj7tv ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 15 20:53:26.261: INFO: stderr: "I0315 20:53:26.194314 1503 log.go:172] (0xc000154840) (0xc00075c640) Create stream\nI0315 20:53:26.194379 1503 log.go:172] (0xc000154840) (0xc00075c640) Stream added, broadcasting: 1\nI0315 20:53:26.197093 1503 log.go:172] (0xc000154840) Reply frame received for 1\nI0315 20:53:26.197238 1503 log.go:172] (0xc000154840) (0xc000534dc0) Create stream\nI0315 20:53:26.197262 1503 log.go:172] (0xc000154840) (0xc000534dc0) Stream added, broadcasting: 3\nI0315 20:53:26.198375 1503 log.go:172] (0xc000154840) Reply frame received for 3\nI0315 20:53:26.198420 1503 log.go:172] (0xc000154840) (0xc00075c6e0) Create stream\nI0315 20:53:26.198434 1503 log.go:172] (0xc000154840) (0xc00075c6e0) Stream added, broadcasting: 5\nI0315 20:53:26.199328 1503 log.go:172] (0xc000154840) Reply frame received for 5\nI0315 20:53:26.255878 1503 log.go:172] (0xc000154840) Data frame received for 5\nI0315 20:53:26.255916 1503 log.go:172] (0xc00075c6e0) (5) Data frame handling\nI0315 20:53:26.255945 1503 log.go:172] (0xc000154840) Data frame received for 3\nI0315 20:53:26.255956 1503 log.go:172] (0xc000534dc0) (3) Data frame handling\nI0315 20:53:26.255966 1503 log.go:172] (0xc000534dc0) (3) Data frame sent\nI0315 20:53:26.255992 1503 log.go:172] (0xc000154840) Data frame received for 3\nI0315 20:53:26.256015 1503 log.go:172] (0xc000534dc0) (3) Data frame handling\nI0315 20:53:26.257890 1503 log.go:172] (0xc000154840) Data frame received for 1\nI0315 20:53:26.257910 1503 log.go:172] (0xc00075c640) (1) Data frame handling\nI0315 20:53:26.257921 1503 log.go:172] (0xc00075c640) (1) Data frame sent\nI0315 20:53:26.257933 1503 log.go:172] (0xc000154840) (0xc00075c640) Stream removed, broadcasting: 1\nI0315 20:53:26.258002 1503 log.go:172] (0xc000154840) Go away received\nI0315 20:53:26.258132 1503 log.go:172] (0xc000154840) 
(0xc00075c640) Stream removed, broadcasting: 1\nI0315 20:53:26.258150 1503 log.go:172] (0xc000154840) (0xc000534dc0) Stream removed, broadcasting: 3\nI0315 20:53:26.258164 1503 log.go:172] (0xc000154840) (0xc00075c6e0) Stream removed, broadcasting: 5\n" Mar 15 20:53:26.261: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 15 20:53:26.261: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 15 20:53:26.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mj7tv ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 15 20:53:26.485: INFO: stderr: "I0315 20:53:26.393523 1526 log.go:172] (0xc00070c370) (0xc00065b360) Create stream\nI0315 20:53:26.393579 1526 log.go:172] (0xc00070c370) (0xc00065b360) Stream added, broadcasting: 1\nI0315 20:53:26.396153 1526 log.go:172] (0xc00070c370) Reply frame received for 1\nI0315 20:53:26.396231 1526 log.go:172] (0xc00070c370) (0xc000718000) Create stream\nI0315 20:53:26.396252 1526 log.go:172] (0xc00070c370) (0xc000718000) Stream added, broadcasting: 3\nI0315 20:53:26.397465 1526 log.go:172] (0xc00070c370) Reply frame received for 3\nI0315 20:53:26.397503 1526 log.go:172] (0xc00070c370) (0xc00076a000) Create stream\nI0315 20:53:26.397519 1526 log.go:172] (0xc00070c370) (0xc00076a000) Stream added, broadcasting: 5\nI0315 20:53:26.398485 1526 log.go:172] (0xc00070c370) Reply frame received for 5\nI0315 20:53:26.479607 1526 log.go:172] (0xc00070c370) Data frame received for 3\nI0315 20:53:26.479638 1526 log.go:172] (0xc000718000) (3) Data frame handling\nI0315 20:53:26.479784 1526 log.go:172] (0xc000718000) (3) Data frame sent\nI0315 20:53:26.479877 1526 log.go:172] (0xc00070c370) Data frame received for 3\nI0315 20:53:26.479887 1526 log.go:172] (0xc000718000) (3) Data frame handling\nI0315 20:53:26.480099 1526 log.go:172] (0xc00070c370) Data frame received for 5\nI0315 20:53:26.480127 1526 log.go:172] (0xc00076a000) (5) Data frame handling\nI0315 20:53:26.481786 1526 log.go:172] (0xc00070c370) Data frame received for 1\nI0315 20:53:26.481800 1526 log.go:172] (0xc00065b360) (1) Data frame handling\nI0315 20:53:26.481808 1526 log.go:172] (0xc00065b360) (1) Data frame sent\nI0315 20:53:26.481815 1526 log.go:172] (0xc00070c370) (0xc00065b360) Stream removed, broadcasting: 1\nI0315 20:53:26.481959 1526 log.go:172] (0xc00070c370) (0xc00065b360) Stream removed, broadcasting: 1\nI0315 20:53:26.481998 1526 log.go:172] (0xc00070c370) (0xc000718000) Stream removed, broadcasting: 3\nI0315 20:53:26.482005 1526 log.go:172] (0xc00070c370) (0xc00076a000) Stream removed, broadcasting: 5\nI0315 20:53:26.482022 1526 log.go:172] (0xc00070c370) Go away received\n" Mar 15 20:53:26.485: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 15 20:53:26.485: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 15 20:53:26.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mj7tv ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 15 20:53:26.742: INFO: stderr: "I0315 20:53:26.624474 1549 log.go:172] (0xc00014c840) (0xc000734640) Create stream\nI0315 20:53:26.624539 1549 log.go:172] (0xc00014c840) (0xc000734640) Stream added, broadcasting: 1\nI0315 20:53:26.627078 1549 log.go:172] (0xc00014c840) 
Reply frame received for 1\nI0315 20:53:26.627122 1549 log.go:172] (0xc00014c840) (0xc0007346e0) Create stream\nI0315 20:53:26.627136 1549 log.go:172] (0xc00014c840) (0xc0007346e0) Stream added, broadcasting: 3\nI0315 20:53:26.628233 1549 log.go:172] (0xc00014c840) Reply frame received for 3\nI0315 20:53:26.628281 1549 log.go:172] (0xc00014c840) (0xc0005eadc0) Create stream\nI0315 20:53:26.628297 1549 log.go:172] (0xc00014c840) (0xc0005eadc0) Stream added, broadcasting: 5\nI0315 20:53:26.629560 1549 log.go:172] (0xc00014c840) Reply frame received for 5\nI0315 20:53:26.735428 1549 log.go:172] (0xc00014c840) Data frame received for 3\nI0315 20:53:26.735449 1549 log.go:172] (0xc0007346e0) (3) Data frame handling\nI0315 20:53:26.735456 1549 log.go:172] (0xc0007346e0) (3) Data frame sent\nI0315 20:53:26.735807 1549 log.go:172] (0xc00014c840) Data frame received for 3\nI0315 20:53:26.735844 1549 log.go:172] (0xc0007346e0) (3) Data frame handling\nI0315 20:53:26.735966 1549 log.go:172] (0xc00014c840) Data frame received for 5\nI0315 20:53:26.735977 1549 log.go:172] (0xc0005eadc0) (5) Data frame handling\nI0315 20:53:26.738341 1549 log.go:172] (0xc00014c840) Data frame received for 1\nI0315 20:53:26.738384 1549 log.go:172] (0xc000734640) (1) Data frame handling\nI0315 20:53:26.738420 1549 log.go:172] (0xc000734640) (1) Data frame sent\nI0315 20:53:26.738450 1549 log.go:172] (0xc00014c840) (0xc000734640) Stream removed, broadcasting: 1\nI0315 20:53:26.738486 1549 log.go:172] (0xc00014c840) Go away received\nI0315 20:53:26.738749 1549 log.go:172] (0xc00014c840) (0xc000734640) Stream removed, broadcasting: 1\nI0315 20:53:26.738783 1549 log.go:172] (0xc00014c840) (0xc0007346e0) Stream removed, broadcasting: 3\nI0315 20:53:26.738796 1549 log.go:172] (0xc00014c840) (0xc0005eadc0) Stream removed, broadcasting: 5\n" Mar 15 20:53:26.742: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 15 20:53:26.742: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 15 20:53:26.742: INFO: Waiting for statefulset status.replicas updated to 0 Mar 15 20:53:26.745: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 15 20:53:36.753: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 15 20:53:36.753: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 15 20:53:36.753: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 15 20:53:36.783: INFO: POD NODE PHASE GRACE CONDITIONS Mar 15 20:53:36.783: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:52:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:52:45 +0000 UTC }] Mar 15 20:53:36.783: INFO: ss-1 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:27 +0000 UTC ContainersNotReady containers with unready status: 
[nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:05 +0000 UTC }] Mar 15 20:53:36.783: INFO: ss-2 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:05 +0000 UTC }] Mar 15 20:53:36.783: INFO: Mar 15 20:53:36.783: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 15 20:53:37.820: INFO: POD NODE PHASE GRACE CONDITIONS Mar 15 20:53:37.820: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:52:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:52:45 +0000 UTC }] Mar 15 20:53:37.820: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:05 +0000 UTC }] Mar 15 20:53:37.820: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:05 +0000 UTC }] Mar 15 20:53:37.820: INFO: Mar 15 20:53:37.820: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 15 20:53:38.825: INFO: POD NODE PHASE GRACE CONDITIONS Mar 15 20:53:38.825: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:52:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:52:45 +0000 UTC }] Mar 15 20:53:38.826: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:05 +0000 UTC }] Mar 15 20:53:38.826: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:05 +0000 UTC } {Ready False 0001-01-01 
00:00:00 +0000 UTC 2020-03-15 20:53:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:05 +0000 UTC }] Mar 15 20:53:38.826: INFO: Mar 15 20:53:38.826: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 15 20:53:39.867: INFO: POD NODE PHASE GRACE CONDITIONS Mar 15 20:53:39.867: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:52:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:52:45 +0000 UTC }] Mar 15 20:53:39.867: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 20:53:05 +0000 UTC }] Mar 15 20:53:39.867: INFO: Mar 15 20:53:39.867: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 15 20:53:40.871: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.891416234s Mar 15 20:53:41.884: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.887234662s Mar 15 20:53:42.889: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.874362652s Mar 15 20:53:43.892: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.869974395s Mar 15 20:53:44.895: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.866612094s Mar 15 20:53:45.899: INFO: Verifying statefulset ss doesn't scale past 0 for another 863.665937ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-mj7tv Mar 15 20:53:46.903: INFO: Scaling statefulset ss to 0 Mar 15 20:53:46.913: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 15 20:53:46.916: INFO: Deleting all statefulset in ns e2e-tests-statefulset-mj7tv Mar 15 20:53:46.919: INFO: Scaling statefulset ss to 0 Mar 15 20:53:46.927: INFO: Waiting for statefulset status.replicas updated to 0 Mar 15 20:53:46.929: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:53:46.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-mj7tv" for this suite. 
Mar 15 20:53:52.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:53:53.052: INFO: namespace: e2e-tests-statefulset-mj7tv, resource: bindings, ignored listing per whitelist Mar 15 20:53:53.083: INFO: namespace e2e-tests-statefulset-mj7tv deletion completed in 6.108638439s • [SLOW TEST:68.138 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:53:53.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-njtbs [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-njtbs STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-njtbs Mar 15 20:53:53.252: INFO: Found 0 stateful pods, waiting for 1 Mar 15 20:54:03.256: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 15 20:54:03.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-njtbs ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 15 20:54:03.998: INFO: stderr: "I0315 20:54:03.369899 1571 log.go:172] (0xc0006a62c0) (0xc0007b9360) Create stream\nI0315 20:54:03.369955 1571 log.go:172] (0xc0006a62c0) (0xc0007b9360) Stream added, broadcasting: 1\nI0315 20:54:03.371798 1571 log.go:172] (0xc0006a62c0) Reply frame received for 1\nI0315 20:54:03.371829 1571 log.go:172] (0xc0006a62c0) (0xc0007b9400) Create stream\nI0315 20:54:03.371837 1571 log.go:172] (0xc0006a62c0) (0xc0007b9400) Stream added, broadcasting: 3\nI0315 20:54:03.372577 1571 log.go:172] (0xc0006a62c0) Reply frame received for 3\nI0315 20:54:03.372622 1571 log.go:172] (0xc0006a62c0) (0xc0001fc000) Create stream\nI0315 20:54:03.372636 1571 log.go:172] (0xc0006a62c0) (0xc0001fc000) Stream added, broadcasting: 
5\nI0315 20:54:03.373378 1571 log.go:172] (0xc0006a62c0) Reply frame received for 5\nI0315 20:54:03.993636 1571 log.go:172] (0xc0006a62c0) Data frame received for 3\nI0315 20:54:03.993670 1571 log.go:172] (0xc0007b9400) (3) Data frame handling\nI0315 20:54:03.993698 1571 log.go:172] (0xc0007b9400) (3) Data frame sent\nI0315 20:54:03.994048 1571 log.go:172] (0xc0006a62c0) Data frame received for 5\nI0315 20:54:03.994064 1571 log.go:172] (0xc0001fc000) (5) Data frame handling\nI0315 20:54:03.994076 1571 log.go:172] (0xc0006a62c0) Data frame received for 3\nI0315 20:54:03.994096 1571 log.go:172] (0xc0007b9400) (3) Data frame handling\nI0315 20:54:03.995835 1571 log.go:172] (0xc0006a62c0) Data frame received for 1\nI0315 20:54:03.995847 1571 log.go:172] (0xc0007b9360) (1) Data frame handling\nI0315 20:54:03.995853 1571 log.go:172] (0xc0007b9360) (1) Data frame sent\nI0315 20:54:03.995860 1571 log.go:172] (0xc0006a62c0) (0xc0007b9360) Stream removed, broadcasting: 1\nI0315 20:54:03.995964 1571 log.go:172] (0xc0006a62c0) Go away received\nI0315 20:54:03.995992 1571 log.go:172] (0xc0006a62c0) (0xc0007b9360) Stream removed, broadcasting: 1\nI0315 20:54:03.996003 1571 log.go:172] (0xc0006a62c0) (0xc0007b9400) Stream removed, broadcasting: 3\nI0315 20:54:03.996008 1571 log.go:172] (0xc0006a62c0) (0xc0001fc000) Stream removed, broadcasting: 5\n" Mar 15 20:54:03.998: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 15 20:54:03.998: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 15 20:54:04.020: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 15 20:54:14.024: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 15 20:54:14.024: INFO: Waiting for statefulset status.replicas updated to 0 Mar 15 20:54:14.106: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999583s Mar 15 20:54:15.110: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.929498972s Mar 15 20:54:16.264: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.925919183s Mar 15 20:54:17.269: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.771648839s Mar 15 20:54:18.441: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.766878148s Mar 15 20:54:19.446: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.594716261s Mar 15 20:54:20.450: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.590266754s Mar 15 20:54:21.454: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.585623539s Mar 15 20:54:22.508: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.582061722s Mar 15 20:54:23.511: INFO: Verifying statefulset ss doesn't scale past 1 for another 527.819708ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-njtbs Mar 15 20:54:24.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-njtbs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 15 20:54:24.988: INFO: stderr: "I0315 20:54:24.924269 1593 log.go:172] (0xc00015c840) (0xc00075c640) Create stream\nI0315 20:54:24.924323 1593 log.go:172] (0xc00015c840) (0xc00075c640) Stream added, broadcasting: 1\nI0315 20:54:24.926181 1593 log.go:172] (0xc00015c840) Reply frame received for 1\nI0315 
20:54:24.926222 1593 log.go:172] (0xc00015c840) (0xc0005eed20) Create stream\nI0315 20:54:24.926234 1593 log.go:172] (0xc00015c840) (0xc0005eed20) Stream added, broadcasting: 3\nI0315 20:54:24.926928 1593 log.go:172] (0xc00015c840) Reply frame received for 3\nI0315 20:54:24.926959 1593 log.go:172] (0xc00015c840) (0xc0005eee60) Create stream\nI0315 20:54:24.926967 1593 log.go:172] (0xc00015c840) (0xc0005eee60) Stream added, broadcasting: 5\nI0315 20:54:24.927714 1593 log.go:172] (0xc00015c840) Reply frame received for 5\nI0315 20:54:24.983486 1593 log.go:172] (0xc00015c840) Data frame received for 5\nI0315 20:54:24.983537 1593 log.go:172] (0xc0005eee60) (5) Data frame handling\nI0315 20:54:24.983565 1593 log.go:172] (0xc00015c840) Data frame received for 3\nI0315 20:54:24.983576 1593 log.go:172] (0xc0005eed20) (3) Data frame handling\nI0315 20:54:24.983589 1593 log.go:172] (0xc0005eed20) (3) Data frame sent\nI0315 20:54:24.983600 1593 log.go:172] (0xc00015c840) Data frame received for 3\nI0315 20:54:24.983610 1593 log.go:172] (0xc0005eed20) (3) Data frame handling\nI0315 20:54:24.984977 1593 log.go:172] (0xc00015c840) Data frame received for 1\nI0315 20:54:24.985017 1593 log.go:172] (0xc00075c640) (1) Data frame handling\nI0315 20:54:24.985044 1593 log.go:172] (0xc00075c640) (1) Data frame sent\nI0315 20:54:24.985062 1593 log.go:172] (0xc00015c840) (0xc00075c640) Stream removed, broadcasting: 1\nI0315 20:54:24.985083 1593 log.go:172] (0xc00015c840) Go away received\nI0315 20:54:24.985399 1593 log.go:172] (0xc00015c840) (0xc00075c640) Stream removed, broadcasting: 1\nI0315 20:54:24.985423 1593 log.go:172] (0xc00015c840) (0xc0005eed20) Stream removed, broadcasting: 3\nI0315 20:54:24.985434 1593 log.go:172] (0xc00015c840) (0xc0005eee60) Stream removed, broadcasting: 5\n" Mar 15 20:54:24.988: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 15 20:54:24.988: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 15 20:54:25.065: INFO: Found 1 stateful pods, waiting for 3 Mar 15 20:54:35.070: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 15 20:54:35.070: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 15 20:54:35.070: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=false Mar 15 20:54:45.070: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 15 20:54:45.070: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 15 20:54:45.070: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 15 20:54:45.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-njtbs ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 15 20:54:45.280: INFO: stderr: "I0315 20:54:45.202786 1616 log.go:172] (0xc000138580) (0xc00069b400) Create stream\nI0315 20:54:45.202845 1616 log.go:172] (0xc000138580) (0xc00069b400) Stream added, broadcasting: 1\nI0315 20:54:45.205602 1616 log.go:172] (0xc000138580) Reply frame received for 1\nI0315 20:54:45.205654 1616 log.go:172] (0xc000138580) (0xc00069b4a0) Create stream\nI0315 20:54:45.205674 1616 log.go:172] (0xc000138580) (0xc00069b4a0) 
Stream added, broadcasting: 3\nI0315 20:54:45.206851 1616 log.go:172] (0xc000138580) Reply frame received for 3\nI0315 20:54:45.206900 1616 log.go:172] (0xc000138580) (0xc0002c2000) Create stream\nI0315 20:54:45.206914 1616 log.go:172] (0xc000138580) (0xc0002c2000) Stream added, broadcasting: 5\nI0315 20:54:45.207994 1616 log.go:172] (0xc000138580) Reply frame received for 5\nI0315 20:54:45.276245 1616 log.go:172] (0xc000138580) Data frame received for 3\nI0315 20:54:45.276287 1616 log.go:172] (0xc00069b4a0) (3) Data frame handling\nI0315 20:54:45.276302 1616 log.go:172] (0xc00069b4a0) (3) Data frame sent\nI0315 20:54:45.276312 1616 log.go:172] (0xc000138580) Data frame received for 3\nI0315 20:54:45.276323 1616 log.go:172] (0xc00069b4a0) (3) Data frame handling\nI0315 20:54:45.276357 1616 log.go:172] (0xc000138580) Data frame received for 5\nI0315 20:54:45.276367 1616 log.go:172] (0xc0002c2000) (5) Data frame handling\nI0315 20:54:45.277991 1616 log.go:172] (0xc000138580) Data frame received for 1\nI0315 20:54:45.278013 1616 log.go:172] (0xc00069b400) (1) Data frame handling\nI0315 20:54:45.278043 1616 log.go:172] (0xc00069b400) (1) Data frame sent\nI0315 20:54:45.278069 1616 log.go:172] (0xc000138580) (0xc00069b400) Stream removed, broadcasting: 1\nI0315 20:54:45.278160 1616 log.go:172] (0xc000138580) Go away received\nI0315 20:54:45.278364 1616 log.go:172] (0xc000138580) (0xc00069b400) Stream removed, broadcasting: 1\nI0315 20:54:45.278393 1616 log.go:172] (0xc000138580) (0xc00069b4a0) Stream removed, broadcasting: 3\nI0315 20:54:45.278412 1616 log.go:172] (0xc000138580) (0xc0002c2000) Stream removed, broadcasting: 5\n" Mar 15 20:54:45.280: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 15 20:54:45.280: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 15 20:54:45.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-njtbs ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 15 20:54:45.500: INFO: stderr: "I0315 20:54:45.389601 1639 log.go:172] (0xc000806370) (0xc0006d8640) Create stream\nI0315 20:54:45.389662 1639 log.go:172] (0xc000806370) (0xc0006d8640) Stream added, broadcasting: 1\nI0315 20:54:45.394341 1639 log.go:172] (0xc000806370) Reply frame received for 1\nI0315 20:54:45.394375 1639 log.go:172] (0xc000806370) (0xc000674d20) Create stream\nI0315 20:54:45.394385 1639 log.go:172] (0xc000806370) (0xc000674d20) Stream added, broadcasting: 3\nI0315 20:54:45.395294 1639 log.go:172] (0xc000806370) Reply frame received for 3\nI0315 20:54:45.395327 1639 log.go:172] (0xc000806370) (0xc0006d86e0) Create stream\nI0315 20:54:45.395336 1639 log.go:172] (0xc000806370) (0xc0006d86e0) Stream added, broadcasting: 5\nI0315 20:54:45.396115 1639 log.go:172] (0xc000806370) Reply frame received for 5\nI0315 20:54:45.494175 1639 log.go:172] (0xc000806370) Data frame received for 3\nI0315 20:54:45.494209 1639 log.go:172] (0xc000674d20) (3) Data frame handling\nI0315 20:54:45.494220 1639 log.go:172] (0xc000674d20) (3) Data frame sent\nI0315 20:54:45.494243 1639 log.go:172] (0xc000806370) Data frame received for 5\nI0315 20:54:45.494251 1639 log.go:172] (0xc0006d86e0) (5) Data frame handling\nI0315 20:54:45.494301 1639 log.go:172] (0xc000806370) Data frame received for 3\nI0315 20:54:45.494314 1639 log.go:172] (0xc000674d20) (3) Data frame handling\nI0315 20:54:45.496160 1639 log.go:172] (0xc000806370) 
Data frame received for 1\nI0315 20:54:45.496179 1639 log.go:172] (0xc0006d8640) (1) Data frame handling\nI0315 20:54:45.496214 1639 log.go:172] (0xc0006d8640) (1) Data frame sent\nI0315 20:54:45.496228 1639 log.go:172] (0xc000806370) (0xc0006d8640) Stream removed, broadcasting: 1\nI0315 20:54:45.496242 1639 log.go:172] (0xc000806370) Go away received\nI0315 20:54:45.496613 1639 log.go:172] (0xc000806370) (0xc0006d8640) Stream removed, broadcasting: 1\nI0315 20:54:45.496638 1639 log.go:172] (0xc000806370) (0xc000674d20) Stream removed, broadcasting: 3\nI0315 20:54:45.496650 1639 log.go:172] (0xc000806370) (0xc0006d86e0) Stream removed, broadcasting: 5\n" Mar 15 20:54:45.500: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 15 20:54:45.500: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 15 20:54:45.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-njtbs ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 15 20:54:45.739: INFO: stderr: "I0315 20:54:45.631855 1661 log.go:172] (0xc000138840) (0xc0003e3360) Create stream\nI0315 20:54:45.631910 1661 log.go:172] (0xc000138840) (0xc0003e3360) Stream added, broadcasting: 1\nI0315 20:54:45.634429 1661 log.go:172] (0xc000138840) Reply frame received for 1\nI0315 20:54:45.634485 1661 log.go:172] (0xc000138840) (0xc0003e3400) Create stream\nI0315 20:54:45.634501 1661 log.go:172] (0xc000138840) (0xc0003e3400) Stream added, broadcasting: 3\nI0315 20:54:45.635614 1661 log.go:172] (0xc000138840) Reply frame received for 3\nI0315 20:54:45.635688 1661 log.go:172] (0xc000138840) (0xc00038a000) Create stream\nI0315 20:54:45.635718 1661 log.go:172] (0xc000138840) (0xc00038a000) Stream added, broadcasting: 5\nI0315 20:54:45.636696 1661 log.go:172] (0xc000138840) Reply frame received for 5\nI0315 20:54:45.734116 1661 log.go:172] (0xc000138840) Data frame received for 3\nI0315 20:54:45.734152 1661 log.go:172] (0xc0003e3400) (3) Data frame handling\nI0315 20:54:45.734172 1661 log.go:172] (0xc0003e3400) (3) Data frame sent\nI0315 20:54:45.734182 1661 log.go:172] (0xc000138840) Data frame received for 3\nI0315 20:54:45.734188 1661 log.go:172] (0xc0003e3400) (3) Data frame handling\nI0315 20:54:45.734391 1661 log.go:172] (0xc000138840) Data frame received for 5\nI0315 20:54:45.734413 1661 log.go:172] (0xc00038a000) (5) Data frame handling\nI0315 20:54:45.735871 1661 log.go:172] (0xc000138840) Data frame received for 1\nI0315 20:54:45.735918 1661 log.go:172] (0xc0003e3360) (1) Data frame handling\nI0315 20:54:45.735958 1661 log.go:172] (0xc0003e3360) (1) Data frame sent\nI0315 20:54:45.735974 1661 log.go:172] (0xc000138840) (0xc0003e3360) Stream removed, broadcasting: 1\nI0315 20:54:45.735991 1661 log.go:172] (0xc000138840) Go away received\nI0315 20:54:45.736193 1661 log.go:172] (0xc000138840) (0xc0003e3360) Stream removed, broadcasting: 1\nI0315 20:54:45.736207 1661 log.go:172] (0xc000138840) (0xc0003e3400) Stream removed, broadcasting: 3\nI0315 20:54:45.736212 1661 log.go:172] (0xc000138840) (0xc00038a000) Stream removed, broadcasting: 5\n" Mar 15 20:54:45.739: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 15 20:54:45.739: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 15 20:54:45.739: INFO: Waiting for statefulset status.replicas updated 
to 0 Mar 15 20:54:45.742: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 15 20:54:55.750: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 15 20:54:55.750: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 15 20:54:55.750: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 15 20:54:55.764: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999292s Mar 15 20:54:56.827: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991946899s Mar 15 20:54:57.831: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.928728081s Mar 15 20:54:58.856: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.924547032s Mar 15 20:54:59.862: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.899417366s Mar 15 20:55:00.867: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.894134618s Mar 15 20:55:01.872: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.888780117s Mar 15 20:55:02.878: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.884005687s Mar 15 20:55:03.883: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.878014035s Mar 15 20:55:04.888: INFO: Verifying statefulset ss doesn't scale past 3 for another 873.214612ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-njtbs Mar 15 20:55:05.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-njtbs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 15 20:55:06.080: INFO: stderr: "I0315 20:55:06.020895 1682 log.go:172] (0xc00014c840) (0xc00076e640) Create stream\nI0315 20:55:06.020975 1682 log.go:172] (0xc00014c840) (0xc00076e640) Stream added, broadcasting: 1\nI0315 20:55:06.023618 1682 log.go:172] (0xc00014c840) Reply frame received for 1\nI0315 20:55:06.023690 1682 log.go:172] (0xc00014c840) (0xc000600d20) Create stream\nI0315 20:55:06.023715 1682 log.go:172] (0xc00014c840) (0xc000600d20) Stream added, broadcasting: 3\nI0315 20:55:06.024756 1682 log.go:172] (0xc00014c840) Reply frame received for 3\nI0315 20:55:06.024799 1682 log.go:172] (0xc00014c840) (0xc00076e6e0) Create stream\nI0315 20:55:06.024809 1682 log.go:172] (0xc00014c840) (0xc00076e6e0) Stream added, broadcasting: 5\nI0315 20:55:06.026028 1682 log.go:172] (0xc00014c840) Reply frame received for 5\nI0315 20:55:06.075196 1682 log.go:172] (0xc00014c840) Data frame received for 5\nI0315 20:55:06.075228 1682 log.go:172] (0xc00076e6e0) (5) Data frame handling\nI0315 20:55:06.075259 1682 log.go:172] (0xc00014c840) Data frame received for 3\nI0315 20:55:06.075301 1682 log.go:172] (0xc000600d20) (3) Data frame handling\nI0315 20:55:06.075324 1682 log.go:172] (0xc000600d20) (3) Data frame sent\nI0315 20:55:06.075341 1682 log.go:172] (0xc00014c840) Data frame received for 3\nI0315 20:55:06.075357 1682 log.go:172] (0xc000600d20) (3) Data frame handling\nI0315 20:55:06.076776 1682 log.go:172] (0xc00014c840) Data frame received for 1\nI0315 20:55:06.076807 1682 log.go:172] (0xc00076e640) (1) Data frame handling\nI0315 20:55:06.076824 1682 log.go:172] (0xc00076e640) (1) Data frame sent\nI0315 20:55:06.076847 1682 log.go:172] (0xc00014c840) (0xc00076e640) Stream removed, broadcasting: 1\nI0315 20:55:06.076876 1682 log.go:172] (0xc00014c840) Go away 
received\nI0315 20:55:06.077053 1682 log.go:172] (0xc00014c840) (0xc00076e640) Stream removed, broadcasting: 1\nI0315 20:55:06.077083 1682 log.go:172] (0xc00014c840) (0xc000600d20) Stream removed, broadcasting: 3\nI0315 20:55:06.077102 1682 log.go:172] (0xc00014c840) (0xc00076e6e0) Stream removed, broadcasting: 5\n" Mar 15 20:55:06.080: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 15 20:55:06.080: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 15 20:55:06.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-njtbs ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 15 20:55:06.285: INFO: stderr: "I0315 20:55:06.210353 1704 log.go:172] (0xc000152790) (0xc000687360) Create stream\nI0315 20:55:06.210411 1704 log.go:172] (0xc000152790) (0xc000687360) Stream added, broadcasting: 1\nI0315 20:55:06.212558 1704 log.go:172] (0xc000152790) Reply frame received for 1\nI0315 20:55:06.212638 1704 log.go:172] (0xc000152790) (0xc0004d6000) Create stream\nI0315 20:55:06.212669 1704 log.go:172] (0xc000152790) (0xc0004d6000) Stream added, broadcasting: 3\nI0315 20:55:06.213996 1704 log.go:172] (0xc000152790) Reply frame received for 3\nI0315 20:55:06.214033 1704 log.go:172] (0xc000152790) (0xc000208000) Create stream\nI0315 20:55:06.214043 1704 log.go:172] (0xc000152790) (0xc000208000) Stream added, broadcasting: 5\nI0315 20:55:06.214875 1704 log.go:172] (0xc000152790) Reply frame received for 5\nI0315 20:55:06.280565 1704 log.go:172] (0xc000152790) Data frame received for 3\nI0315 20:55:06.280604 1704 log.go:172] (0xc0004d6000) (3) Data frame handling\nI0315 20:55:06.280625 1704 log.go:172] (0xc0004d6000) (3) Data frame sent\nI0315 20:55:06.280638 1704 log.go:172] (0xc000152790) Data frame received for 3\nI0315 20:55:06.280654 1704 log.go:172] (0xc0004d6000) (3) Data frame handling\nI0315 20:55:06.280677 1704 log.go:172] (0xc000152790) Data frame received for 5\nI0315 20:55:06.280801 1704 log.go:172] (0xc000208000) (5) Data frame handling\nI0315 20:55:06.282811 1704 log.go:172] (0xc000152790) Data frame received for 1\nI0315 20:55:06.282836 1704 log.go:172] (0xc000687360) (1) Data frame handling\nI0315 20:55:06.282846 1704 log.go:172] (0xc000687360) (1) Data frame sent\nI0315 20:55:06.282856 1704 log.go:172] (0xc000152790) (0xc000687360) Stream removed, broadcasting: 1\nI0315 20:55:06.282890 1704 log.go:172] (0xc000152790) Go away received\nI0315 20:55:06.283010 1704 log.go:172] (0xc000152790) (0xc000687360) Stream removed, broadcasting: 1\nI0315 20:55:06.283027 1704 log.go:172] (0xc000152790) (0xc0004d6000) Stream removed, broadcasting: 3\nI0315 20:55:06.283035 1704 log.go:172] (0xc000152790) (0xc000208000) Stream removed, broadcasting: 5\n" Mar 15 20:55:06.285: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 15 20:55:06.285: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 15 20:55:06.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-njtbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 15 20:55:06.493: INFO: stderr: "I0315 20:55:06.436421 1727 log.go:172] (0xc000138840) (0xc0000f14a0) Create stream\nI0315 20:55:06.436489 1727 log.go:172] (0xc000138840) (0xc0000f14a0) Stream added, 
broadcasting: 1\nI0315 20:55:06.439157 1727 log.go:172] (0xc000138840) Reply frame received for 1\nI0315 20:55:06.439223 1727 log.go:172] (0xc000138840) (0xc0000f1540) Create stream\nI0315 20:55:06.439253 1727 log.go:172] (0xc000138840) (0xc0000f1540) Stream added, broadcasting: 3\nI0315 20:55:06.440330 1727 log.go:172] (0xc000138840) Reply frame received for 3\nI0315 20:55:06.440392 1727 log.go:172] (0xc000138840) (0xc00030e000) Create stream\nI0315 20:55:06.440415 1727 log.go:172] (0xc000138840) (0xc00030e000) Stream added, broadcasting: 5\nI0315 20:55:06.441802 1727 log.go:172] (0xc000138840) Reply frame received for 5\nI0315 20:55:06.487673 1727 log.go:172] (0xc000138840) Data frame received for 3\nI0315 20:55:06.487698 1727 log.go:172] (0xc0000f1540) (3) Data frame handling\nI0315 20:55:06.487709 1727 log.go:172] (0xc0000f1540) (3) Data frame sent\nI0315 20:55:06.487750 1727 log.go:172] (0xc000138840) Data frame received for 5\nI0315 20:55:06.487775 1727 log.go:172] (0xc00030e000) (5) Data frame handling\nI0315 20:55:06.487916 1727 log.go:172] (0xc000138840) Data frame received for 3\nI0315 20:55:06.487951 1727 log.go:172] (0xc0000f1540) (3) Data frame handling\nI0315 20:55:06.489641 1727 log.go:172] (0xc000138840) Data frame received for 1\nI0315 20:55:06.489659 1727 log.go:172] (0xc0000f14a0) (1) Data frame handling\nI0315 20:55:06.489678 1727 log.go:172] (0xc0000f14a0) (1) Data frame sent\nI0315 20:55:06.489891 1727 log.go:172] (0xc000138840) (0xc0000f14a0) Stream removed, broadcasting: 1\nI0315 20:55:06.489970 1727 log.go:172] (0xc000138840) Go away received\nI0315 20:55:06.490099 1727 log.go:172] (0xc000138840) (0xc0000f14a0) Stream removed, broadcasting: 1\nI0315 20:55:06.490130 1727 log.go:172] (0xc000138840) (0xc0000f1540) Stream removed, broadcasting: 3\nI0315 20:55:06.490143 1727 log.go:172] (0xc000138840) (0xc00030e000) Stream removed, broadcasting: 5\n" Mar 15 20:55:06.493: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 15 20:55:06.493: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 15 20:55:06.493: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 15 20:55:36.509: INFO: Deleting all statefulset in ns e2e-tests-statefulset-njtbs Mar 15 20:55:36.512: INFO: Scaling statefulset ss to 0 Mar 15 20:55:36.520: INFO: Waiting for statefulset status.replicas updated to 0 Mar 15 20:55:36.522: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:55:36.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-njtbs" for this suite. 
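The ordered-scaling behaviour verified above can be reproduced by hand against any StatefulSet whose pods carry a readiness probe on the nginx web root; the set name ss and the label selector baz=blah,foo=bar below come from the test output, while the rest is only a rough sketch of the same steps, not the test's own code.

# Break ss-0's readiness probe by moving the page it serves, as the test does
kubectl exec ss-0 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
# With OrderedReady pod management, scale-up halts until ss-0 is Ready again
kubectl scale statefulset ss --replicas=3
kubectl get pods -l baz=blah,foo=bar -w
# Restore readiness, then scale to 0 and watch pods terminate in reverse ordinal order
kubectl exec ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
kubectl scale statefulset ss --replicas=0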
Mar 15 20:55:42.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:55:42.682: INFO: namespace: e2e-tests-statefulset-njtbs, resource: bindings, ignored listing per whitelist Mar 15 20:55:42.683: INFO: namespace e2e-tests-statefulset-njtbs deletion completed in 6.13402444s • [SLOW TEST:109.599 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:55:42.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 15 20:55:42.791: INFO: Waiting up to 5m0s for pod "pod-55a4c18f-66ff-11ea-9ccf-0242ac110012" in namespace "e2e-tests-emptydir-dsddt" to be "success or failure" Mar 15 20:55:42.794: INFO: Pod "pod-55a4c18f-66ff-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 3.717144ms Mar 15 20:55:44.798: INFO: Pod "pod-55a4c18f-66ff-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006991019s Mar 15 20:55:46.801: INFO: Pod "pod-55a4c18f-66ff-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010487765s STEP: Saw pod success Mar 15 20:55:46.801: INFO: Pod "pod-55a4c18f-66ff-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 20:55:46.803: INFO: Trying to get logs from node hunter-worker pod pod-55a4c18f-66ff-11ea-9ccf-0242ac110012 container test-container: STEP: delete the pod Mar 15 20:55:46.825: INFO: Waiting for pod pod-55a4c18f-66ff-11ea-9ccf-0242ac110012 to disappear Mar 15 20:55:46.842: INFO: Pod pod-55a4c18f-66ff-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:55:46.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-dsddt" for this suite. 
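For reference, a tmpfs-backed emptyDir like the one the test above mounts can be declared as follows; the pod name and busybox image are placeholders, and only the emptyDir.medium field mirrors what the test exercises.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /data; stat -c '%a' /data"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    emptyDir:
      medium: Memory   # tmpfs-backed emptyDir, as in the test
EOF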
Mar 15 20:55:52.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:55:52.960: INFO: namespace: e2e-tests-emptydir-dsddt, resource: bindings, ignored listing per whitelist Mar 15 20:55:52.996: INFO: namespace e2e-tests-emptydir-dsddt deletion completed in 6.150973387s • [SLOW TEST:10.313 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:55:52.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 15 20:55:53.139: INFO: Waiting up to 5m0s for pod "pod-5bd0fbb9-66ff-11ea-9ccf-0242ac110012" in namespace "e2e-tests-emptydir-2v24q" to be "success or failure" Mar 15 20:55:53.142: INFO: Pod "pod-5bd0fbb9-66ff-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 3.179081ms Mar 15 20:55:55.216: INFO: Pod "pod-5bd0fbb9-66ff-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07686234s Mar 15 20:55:57.220: INFO: Pod "pod-5bd0fbb9-66ff-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081231565s STEP: Saw pod success Mar 15 20:55:57.220: INFO: Pod "pod-5bd0fbb9-66ff-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 20:55:57.223: INFO: Trying to get logs from node hunter-worker2 pod pod-5bd0fbb9-66ff-11ea-9ccf-0242ac110012 container test-container: STEP: delete the pod Mar 15 20:55:57.239: INFO: Waiting for pod pod-5bd0fbb9-66ff-11ea-9ccf-0242ac110012 to disappear Mar 15 20:55:57.244: INFO: Pod pod-5bd0fbb9-66ff-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:55:57.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-2v24q" for this suite. 
Mar 15 20:56:03.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:56:03.329: INFO: namespace: e2e-tests-emptydir-2v24q, resource: bindings, ignored listing per whitelist Mar 15 20:56:03.362: INFO: namespace e2e-tests-emptydir-2v24q deletion completed in 6.114999472s • [SLOW TEST:10.366 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:56:03.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 15 20:56:03.475: INFO: Waiting up to 5m0s for pod "pod-61f8169d-66ff-11ea-9ccf-0242ac110012" in namespace "e2e-tests-emptydir-kbhgf" to be "success or failure" Mar 15 20:56:03.490: INFO: Pod "pod-61f8169d-66ff-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 14.927701ms Mar 15 20:56:05.494: INFO: Pod "pod-61f8169d-66ff-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018958062s Mar 15 20:56:07.499: INFO: Pod "pod-61f8169d-66ff-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024310993s STEP: Saw pod success Mar 15 20:56:07.499: INFO: Pod "pod-61f8169d-66ff-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 20:56:07.503: INFO: Trying to get logs from node hunter-worker pod pod-61f8169d-66ff-11ea-9ccf-0242ac110012 container test-container: STEP: delete the pod Mar 15 20:56:07.536: INFO: Waiting for pod pod-61f8169d-66ff-11ea-9ccf-0242ac110012 to disappear Mar 15 20:56:07.549: INFO: Pod pod-61f8169d-66ff-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:56:07.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-kbhgf" for this suite. 
Mar 15 20:56:13.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:56:13.616: INFO: namespace: e2e-tests-emptydir-kbhgf, resource: bindings, ignored listing per whitelist Mar 15 20:56:13.653: INFO: namespace e2e-tests-emptydir-kbhgf deletion completed in 6.100404828s • [SLOW TEST:10.291 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:56:13.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 20:56:13.878: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 Mar 15 20:56:13.883: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-9l7zj/daemonsets","resourceVersion":"22991"},"items":null} Mar 15 20:56:13.885: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-9l7zj/pods","resourceVersion":"22991"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:56:13.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-9l7zj" for this suite. 
Mar 15 20:56:19.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:56:19.977: INFO: namespace: e2e-tests-daemonsets-9l7zj, resource: bindings, ignored listing per whitelist Mar 15 20:56:19.979: INFO: namespace e2e-tests-daemonsets-9l7zj deletion completed in 6.084013864s S [SKIPPING] [6.326 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 20:56:13.878: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:56:19.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 15 20:56:20.092: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6bde282e-66ff-11ea-9ccf-0242ac110012" in namespace "e2e-tests-downward-api-cwnzw" to be "success or failure" Mar 15 20:56:20.103: INFO: Pod "downwardapi-volume-6bde282e-66ff-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 10.843ms Mar 15 20:56:22.107: INFO: Pod "downwardapi-volume-6bde282e-66ff-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015220843s Mar 15 20:56:24.112: INFO: Pod "downwardapi-volume-6bde282e-66ff-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019484562s STEP: Saw pod success Mar 15 20:56:24.112: INFO: Pod "downwardapi-volume-6bde282e-66ff-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 20:56:24.115: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-6bde282e-66ff-11ea-9ccf-0242ac110012 container client-container: STEP: delete the pod Mar 15 20:56:24.132: INFO: Waiting for pod downwardapi-volume-6bde282e-66ff-11ea-9ccf-0242ac110012 to disappear Mar 15 20:56:24.152: INFO: Pod downwardapi-volume-6bde282e-66ff-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:56:24.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-cwnzw" for this suite. 
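The podname-only case above needs just one downwardAPI item; a minimal hand-rolled equivalent (pod name and image are placeholders) looks like:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # the pod's own name, exposed as a file
EOF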
Mar 15 20:56:30.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:56:30.211: INFO: namespace: e2e-tests-downward-api-cwnzw, resource: bindings, ignored listing per whitelist Mar 15 20:56:30.273: INFO: namespace e2e-tests-downward-api-cwnzw deletion completed in 6.118729217s • [SLOW TEST:10.294 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:56:30.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-7202e40e-66ff-11ea-9ccf-0242ac110012 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-7202e40e-66ff-11ea-9ccf-0242ac110012 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:57:47.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-kjqp9" for this suite. 
Mar 15 20:58:11.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:58:11.348: INFO: namespace: e2e-tests-projected-kjqp9, resource: bindings, ignored listing per whitelist Mar 15 20:58:11.391: INFO: namespace e2e-tests-projected-kjqp9 deletion completed in 24.103820801s • [SLOW TEST:101.117 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:58:11.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 15 20:58:11.562: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ae527683-66ff-11ea-9ccf-0242ac110012" in namespace "e2e-tests-downward-api-ftwks" to be "success or failure" Mar 15 20:58:11.588: INFO: Pod "downwardapi-volume-ae527683-66ff-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 26.334829ms Mar 15 20:58:13.613: INFO: Pod "downwardapi-volume-ae527683-66ff-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050689615s Mar 15 20:58:15.667: INFO: Pod "downwardapi-volume-ae527683-66ff-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.104817343s STEP: Saw pod success Mar 15 20:58:15.667: INFO: Pod "downwardapi-volume-ae527683-66ff-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 20:58:15.670: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-ae527683-66ff-11ea-9ccf-0242ac110012 container client-container: STEP: delete the pod Mar 15 20:58:15.754: INFO: Waiting for pod downwardapi-volume-ae527683-66ff-11ea-9ccf-0242ac110012 to disappear Mar 15 20:58:15.761: INFO: Pod downwardapi-volume-ae527683-66ff-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:58:15.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-ftwks" for this suite. 
Mar 15 20:58:21.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:58:21.811: INFO: namespace: e2e-tests-downward-api-ftwks, resource: bindings, ignored listing per whitelist Mar 15 20:58:21.866: INFO: namespace e2e-tests-downward-api-ftwks deletion completed in 6.102256992s • [SLOW TEST:10.474 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:58:21.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 20:58:21.968: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:58:23.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-g672s" for this suite. 
Mar 15 20:58:29.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:58:29.273: INFO: namespace: e2e-tests-custom-resource-definition-g672s, resource: bindings, ignored listing per whitelist Mar 15 20:58:29.334: INFO: namespace e2e-tests-custom-resource-definition-g672s deletion completed in 6.296236337s • [SLOW TEST:7.468 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:58:29.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-lx9fn/secret-test-b92a4c66-66ff-11ea-9ccf-0242ac110012 STEP: Creating a pod to test consume secrets Mar 15 20:58:29.903: INFO: Waiting up to 5m0s for pod "pod-configmaps-b93f0107-66ff-11ea-9ccf-0242ac110012" in namespace "e2e-tests-secrets-lx9fn" to be "success or failure" Mar 15 20:58:29.931: INFO: Pod "pod-configmaps-b93f0107-66ff-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 27.789386ms Mar 15 20:58:31.935: INFO: Pod "pod-configmaps-b93f0107-66ff-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031834169s Mar 15 20:58:33.939: INFO: Pod "pod-configmaps-b93f0107-66ff-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036119101s STEP: Saw pod success Mar 15 20:58:33.939: INFO: Pod "pod-configmaps-b93f0107-66ff-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 20:58:33.943: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-b93f0107-66ff-11ea-9ccf-0242ac110012 container env-test: STEP: delete the pod Mar 15 20:58:33.961: INFO: Waiting for pod pod-configmaps-b93f0107-66ff-11ea-9ccf-0242ac110012 to disappear Mar 15 20:58:33.981: INFO: Pod pod-configmaps-b93f0107-66ff-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:58:33.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-lx9fn" for this suite. 
Mar 15 20:58:40.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:58:40.112: INFO: namespace: e2e-tests-secrets-lx9fn, resource: bindings, ignored listing per whitelist Mar 15 20:58:40.265: INFO: namespace e2e-tests-secrets-lx9fn deletion completed in 6.280101344s • [SLOW TEST:10.931 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:58:40.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 15 20:58:40.400: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-ddrrl,SelfLink:/api/v1/namespaces/e2e-tests-watch-ddrrl/configmaps/e2e-watch-test-resource-version,UID:bf7c6251-66ff-11ea-99e8-0242ac110002,ResourceVersion:23400,Generation:0,CreationTimestamp:2020-03-15 20:58:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 15 20:58:40.400: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-ddrrl,SelfLink:/api/v1/namespaces/e2e-tests-watch-ddrrl/configmaps/e2e-watch-test-resource-version,UID:bf7c6251-66ff-11ea-99e8-0242ac110002,ResourceVersion:23401,Generation:0,CreationTimestamp:2020-03-15 20:58:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:58:40.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-ddrrl" for this suite. 
Mar 15 20:58:46.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:58:46.469: INFO: namespace: e2e-tests-watch-ddrrl, resource: bindings, ignored listing per whitelist Mar 15 20:58:46.556: INFO: namespace e2e-tests-watch-ddrrl deletion completed in 6.152372788s • [SLOW TEST:6.291 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:58:46.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 15 20:58:46.710: INFO: Waiting up to 5m0s for pod "pod-c3446074-66ff-11ea-9ccf-0242ac110012" in namespace "e2e-tests-emptydir-zmppf" to be "success or failure" Mar 15 20:58:46.714: INFO: Pod "pod-c3446074-66ff-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 3.772635ms Mar 15 20:58:48.717: INFO: Pod "pod-c3446074-66ff-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007261425s Mar 15 20:58:50.721: INFO: Pod "pod-c3446074-66ff-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01078425s STEP: Saw pod success Mar 15 20:58:50.721: INFO: Pod "pod-c3446074-66ff-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 20:58:50.724: INFO: Trying to get logs from node hunter-worker2 pod pod-c3446074-66ff-11ea-9ccf-0242ac110012 container test-container: STEP: delete the pod Mar 15 20:58:50.901: INFO: Waiting for pod pod-c3446074-66ff-11ea-9ccf-0242ac110012 to disappear Mar 15 20:58:51.188: INFO: Pod pod-c3446074-66ff-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:58:51.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-zmppf" for this suite. 
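The EmptyDir "(root,0644,tmpfs)" case above runs a pod that mounts a memory-backed emptyDir, writes a file as root with 0644 permissions, and checks the result from the container output. A rough equivalent manifest, with assumed image, paths, and command:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-0644
spec:
  restartPolicy: Never
  containers:
  - name: test-container        # container name as reported in the log
    image: busybox              # assumed image
    command: ["sh", "-c", "echo content > /test-volume/file && chmod 0644 /test-volume/file && ls -ln /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory            # tmpfs-backed emptyDir, the "(tmpfs)" in the spec name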
Mar 15 20:58:57.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:58:57.618: INFO: namespace: e2e-tests-emptydir-zmppf, resource: bindings, ignored listing per whitelist Mar 15 20:58:57.639: INFO: namespace e2e-tests-emptydir-zmppf deletion completed in 6.447707043s • [SLOW TEST:11.083 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:58:57.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-hl25 STEP: Creating a pod to test atomic-volume-subpath Mar 15 20:58:57.775: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-hl25" in namespace "e2e-tests-subpath-9qf4s" to be "success or failure" Mar 15 20:58:57.802: INFO: Pod "pod-subpath-test-downwardapi-hl25": Phase="Pending", Reason="", readiness=false. Elapsed: 26.961926ms Mar 15 20:58:59.841: INFO: Pod "pod-subpath-test-downwardapi-hl25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065708338s Mar 15 20:59:01.845: INFO: Pod "pod-subpath-test-downwardapi-hl25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069913398s Mar 15 20:59:03.849: INFO: Pod "pod-subpath-test-downwardapi-hl25": Phase="Running", Reason="", readiness=false. Elapsed: 6.073982076s Mar 15 20:59:05.853: INFO: Pod "pod-subpath-test-downwardapi-hl25": Phase="Running", Reason="", readiness=false. Elapsed: 8.077536267s Mar 15 20:59:07.856: INFO: Pod "pod-subpath-test-downwardapi-hl25": Phase="Running", Reason="", readiness=false. Elapsed: 10.080588217s Mar 15 20:59:09.871: INFO: Pod "pod-subpath-test-downwardapi-hl25": Phase="Running", Reason="", readiness=false. Elapsed: 12.095524448s Mar 15 20:59:11.925: INFO: Pod "pod-subpath-test-downwardapi-hl25": Phase="Running", Reason="", readiness=false. Elapsed: 14.14932201s Mar 15 20:59:13.949: INFO: Pod "pod-subpath-test-downwardapi-hl25": Phase="Running", Reason="", readiness=false. Elapsed: 16.173745437s Mar 15 20:59:15.954: INFO: Pod "pod-subpath-test-downwardapi-hl25": Phase="Running", Reason="", readiness=false. Elapsed: 18.178300458s Mar 15 20:59:17.961: INFO: Pod "pod-subpath-test-downwardapi-hl25": Phase="Running", Reason="", readiness=false. Elapsed: 20.185940725s Mar 15 20:59:19.973: INFO: Pod "pod-subpath-test-downwardapi-hl25": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.197978071s Mar 15 20:59:21.978: INFO: Pod "pod-subpath-test-downwardapi-hl25": Phase="Running", Reason="", readiness=false. Elapsed: 24.20254903s Mar 15 20:59:23.997: INFO: Pod "pod-subpath-test-downwardapi-hl25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.221720386s STEP: Saw pod success Mar 15 20:59:23.997: INFO: Pod "pod-subpath-test-downwardapi-hl25" satisfied condition "success or failure" Mar 15 20:59:24.000: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-downwardapi-hl25 container test-container-subpath-downwardapi-hl25: STEP: delete the pod Mar 15 20:59:24.181: INFO: Waiting for pod pod-subpath-test-downwardapi-hl25 to disappear Mar 15 20:59:24.199: INFO: Pod pod-subpath-test-downwardapi-hl25 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-hl25 Mar 15 20:59:24.199: INFO: Deleting pod "pod-subpath-test-downwardapi-hl25" in namespace "e2e-tests-subpath-9qf4s" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:59:24.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-9qf4s" for this suite. Mar 15 20:59:30.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 20:59:30.425: INFO: namespace: e2e-tests-subpath-9qf4s, resource: bindings, ignored listing per whitelist Mar 15 20:59:30.470: INFO: namespace e2e-tests-subpath-9qf4s deletion completed in 6.143195314s • [SLOW TEST:32.831 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 20:59:30.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 20:59:30.542: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 20:59:34.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-jktwv" for this suite. 
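The Subpath "downward pod" spec that finished at the start of this block (pod-subpath-test-downwardapi-hl25) mounts a downward API volume into the container through a subPath and polls the pod through Running to Succeeded, as the long run of status lines above shows. A minimal sketch of that volume layout, with assumed mount path, file name, image, and command:

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-downwardapi
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-downwardapi
    image: busybox              # assumed image
    command: ["sh", "-c", "cat /mnt/result/podname"]
    volumeMounts:
    - name: downward
      mountPath: /mnt/result/podname
      subPath: podname          # mount a single file out of the volume via subPath
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name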
Mar 15 21:00:24.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:00:24.800: INFO: namespace: e2e-tests-pods-jktwv, resource: bindings, ignored listing per whitelist Mar 15 21:00:24.848: INFO: namespace e2e-tests-pods-jktwv deletion completed in 50.093829975s • [SLOW TEST:54.378 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:00:24.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Mar 15 21:00:24.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Mar 15 21:00:25.111: INFO: stderr: "" Mar 15 21:00:25.111: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:00:25.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-nxvkg" for this suite. 
Mar 15 21:00:31.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:00:31.154: INFO: namespace: e2e-tests-kubectl-nxvkg, resource: bindings, ignored listing per whitelist Mar 15 21:00:31.212: INFO: namespace e2e-tests-kubectl-nxvkg deletion completed in 6.096068193s • [SLOW TEST:6.363 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:00:31.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:00:37.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-b45nq" for this suite. 
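The Kubelet "should print the output to logs" case above schedules a busybox pod whose command writes to stdout and then verifies the text is retrievable through the logs endpoint. The exact command is not recorded in the log; a plausible sketch:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo running in a pod"]

Once the pod has run, kubectl logs busybox-logs should return the echoed line, which is the behaviour the spec asserts.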
Mar 15 21:01:17.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:01:17.560: INFO: namespace: e2e-tests-kubelet-test-b45nq, resource: bindings, ignored listing per whitelist Mar 15 21:01:17.578: INFO: namespace e2e-tests-kubelet-test-b45nq deletion completed in 40.099033587s • [SLOW TEST:46.366 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:01:17.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-1d44fee7-6700-11ea-9ccf-0242ac110012 STEP: Creating secret with name s-test-opt-upd-1d44ff62-6700-11ea-9ccf-0242ac110012 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-1d44fee7-6700-11ea-9ccf-0242ac110012 STEP: Updating secret s-test-opt-upd-1d44ff62-6700-11ea-9ccf-0242ac110012 STEP: Creating secret with name s-test-opt-create-1d44ff95-6700-11ea-9ccf-0242ac110012 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:02:49.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-tsdkp" for this suite. 
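The "optional updates should be reflected in volume" spec above creates two secrets, projects them into a pod as optional sources, then deletes the first, updates the second, and creates a third, waiting for the mounted files to follow. The projected-volume shape it relies on looks roughly like this; the secret names mirror the prefixes in the log but drop the generated suffixes, and the image, command, and mount path are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  containers:
  - name: projected-secret-volume-test
    image: busybox                # assumed image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-secrets
      mountPath: /etc/projected
  volumes:
  - name: projected-secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-del    # deleted after the pod starts
          optional: true
      - secret:
          name: s-test-opt-upd    # updated while the pod runs
          optional: true
      - secret:
          name: s-test-opt-create # only created after the pod starts
          optional: true

Because every source is optional, a missing secret does not block the mount, so the test can simply wait for the projected file contents to change rather than for the pod to restart.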
Mar 15 21:03:11.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:03:11.965: INFO: namespace: e2e-tests-projected-tsdkp, resource: bindings, ignored listing per whitelist Mar 15 21:03:11.990: INFO: namespace e2e-tests-projected-tsdkp deletion completed in 22.200667465s • [SLOW TEST:114.412 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:03:11.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 15 21:03:12.076: INFO: Waiting up to 5m0s for pod "downwardapi-volume-61716bb9-6700-11ea-9ccf-0242ac110012" in namespace "e2e-tests-downward-api-8p7mj" to be "success or failure" Mar 15 21:03:12.096: INFO: Pod "downwardapi-volume-61716bb9-6700-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 19.892514ms Mar 15 21:03:14.122: INFO: Pod "downwardapi-volume-61716bb9-6700-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046211894s Mar 15 21:03:16.127: INFO: Pod "downwardapi-volume-61716bb9-6700-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050420431s STEP: Saw pod success Mar 15 21:03:16.127: INFO: Pod "downwardapi-volume-61716bb9-6700-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:03:16.130: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-61716bb9-6700-11ea-9ccf-0242ac110012 container client-container: STEP: delete the pod Mar 15 21:03:16.171: INFO: Waiting for pod downwardapi-volume-61716bb9-6700-11ea-9ccf-0242ac110012 to disappear Mar 15 21:03:16.182: INFO: Pod downwardapi-volume-61716bb9-6700-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:03:16.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-8p7mj" for this suite. 
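The Downward API "should set mode on item file" spec above sets an explicit per-item file mode on a downward API volume and checks it from inside the container. A sketch of the relevant fields; the mode value, paths, image, and command are assumptions, while the container name client-container matches the log:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-mode
spec:
  restartPolicy: Never
  containers:
  - name: client-container       # container name as reported in the log
    image: busybox               # assumed image
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        mode: 0400               # the per-item mode the spec verifies
        fieldRef:
          fieldPath: metadata.name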
Mar 15 21:03:22.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:03:22.252: INFO: namespace: e2e-tests-downward-api-8p7mj, resource: bindings, ignored listing per whitelist Mar 15 21:03:22.313: INFO: namespace e2e-tests-downward-api-8p7mj deletion completed in 6.127146675s • [SLOW TEST:10.323 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:03:22.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:03:28.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-h4rgt" for this suite. Mar 15 21:03:34.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:03:34.787: INFO: namespace: e2e-tests-namespaces-h4rgt, resource: bindings, ignored listing per whitelist Mar 15 21:03:34.809: INFO: namespace e2e-tests-namespaces-h4rgt deletion completed in 6.151720566s STEP: Destroying namespace "e2e-tests-nsdeletetest-tq297" for this suite. Mar 15 21:03:34.812: INFO: Namespace e2e-tests-nsdeletetest-tq297 was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-mbbm6" for this suite. 
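The Namespaces spec above only needs an arbitrary Service to prove its point: create a Service in a namespace, delete the namespace, recreate it, and verify the Service is gone. Any namespaced Service behaves this way; an illustrative one (all names assumed, the test uses generated e2e-tests-nsdeletetest namespaces):

apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: nsdeletetest
spec:
  selector:
    app: test
  ports:
  - port: 80
    targetPort: 8080

Deleting the namespace cascades to the Service, so the recreated namespace comes back empty, which is the final verification step in the log.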
Mar 15 21:03:40.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:03:40.893: INFO: namespace: e2e-tests-nsdeletetest-mbbm6, resource: bindings, ignored listing per whitelist Mar 15 21:03:40.896: INFO: namespace e2e-tests-nsdeletetest-mbbm6 deletion completed in 6.084590303s • [SLOW TEST:18.583 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:03:40.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:04:14.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-sk2t5" for this suite. 
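In the Container Runtime blackbox test above, the container names terminate-cmd-rpa, terminate-cmd-rpof, and terminate-cmd-rpn appear to correspond to pods run with restartPolicy Always, OnFailure, and Never; for each, the spec checks RestartCount, Phase, the Ready condition, and State after the container exits, then deletes the pod. A sketch of the Never variant (exit code and image are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-rpn
spec:
  restartPolicy: Never
  containers:
  - name: terminate-cmd-rpn
    image: busybox               # assumed image
    command: ["sh", "-c", "exit 1"]   # exits once; with Never the pod ends up Failed and RestartCount stays 0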
Mar 15 21:04:22.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:04:23.012: INFO: namespace: e2e-tests-container-runtime-sk2t5, resource: bindings, ignored listing per whitelist Mar 15 21:04:23.062: INFO: namespace e2e-tests-container-runtime-sk2t5 deletion completed in 8.08435149s • [SLOW TEST:42.166 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:04:23.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 15 21:04:23.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-g9ng6' Mar 15 21:04:33.647: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 15 21:04:33.647: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Mar 15 21:04:33.678: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Mar 15 21:04:33.689: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Mar 15 21:04:33.716: INFO: scanned /root for discovery docs: Mar 15 21:04:33.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-g9ng6' Mar 15 21:04:50.555: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 15 21:04:50.555: INFO: stdout: "Created e2e-test-nginx-rc-48f2a44c617e8b26991fbf24b99f5cff\nScaling up e2e-test-nginx-rc-48f2a44c617e8b26991fbf24b99f5cff from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-48f2a44c617e8b26991fbf24b99f5cff up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-48f2a44c617e8b26991fbf24b99f5cff to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" Mar 15 21:04:50.555: INFO: stdout: "Created e2e-test-nginx-rc-48f2a44c617e8b26991fbf24b99f5cff\nScaling up e2e-test-nginx-rc-48f2a44c617e8b26991fbf24b99f5cff from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-48f2a44c617e8b26991fbf24b99f5cff up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-48f2a44c617e8b26991fbf24b99f5cff to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Mar 15 21:04:50.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-g9ng6' Mar 15 21:04:50.667: INFO: stderr: "" Mar 15 21:04:50.667: INFO: stdout: "e2e-test-nginx-rc-48f2a44c617e8b26991fbf24b99f5cff-wqhfc " Mar 15 21:04:50.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-48f2a44c617e8b26991fbf24b99f5cff-wqhfc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-g9ng6' Mar 15 21:04:50.754: INFO: stderr: "" Mar 15 21:04:50.754: INFO: stdout: "true" Mar 15 21:04:50.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-48f2a44c617e8b26991fbf24b99f5cff-wqhfc -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-g9ng6' Mar 15 21:04:50.846: INFO: stderr: "" Mar 15 21:04:50.846: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Mar 15 21:04:50.846: INFO: e2e-test-nginx-rc-48f2a44c617e8b26991fbf24b99f5cff-wqhfc is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Mar 15 21:04:50.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-g9ng6' Mar 15 21:04:50.948: INFO: stderr: "" Mar 15 21:04:50.948: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:04:50.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-g9ng6" for this suite. Mar 15 21:05:13.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:05:13.098: INFO: namespace: e2e-tests-kubectl-g9ng6, resource: bindings, ignored listing per whitelist Mar 15 21:05:13.127: INFO: namespace e2e-tests-kubectl-g9ng6 deletion completed in 22.088820377s • [SLOW TEST:50.065 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:05:13.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments Mar 15 21:05:13.447: INFO: Waiting up to 5m0s for pod "client-containers-a9c92994-6700-11ea-9ccf-0242ac110012" in namespace "e2e-tests-containers-pzhcl" to be "success or failure" Mar 15 21:05:13.463: INFO: Pod "client-containers-a9c92994-6700-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 16.054079ms Mar 15 21:05:15.467: INFO: Pod "client-containers-a9c92994-6700-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020056621s Mar 15 21:05:17.471: INFO: Pod "client-containers-a9c92994-6700-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023974497s STEP: Saw pod success Mar 15 21:05:17.471: INFO: Pod "client-containers-a9c92994-6700-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:05:17.474: INFO: Trying to get logs from node hunter-worker pod client-containers-a9c92994-6700-11ea-9ccf-0242ac110012 container test-container: STEP: delete the pod Mar 15 21:05:17.494: INFO: Waiting for pod client-containers-a9c92994-6700-11ea-9ccf-0242ac110012 to disappear Mar 15 21:05:17.499: INFO: Pod client-containers-a9c92994-6700-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:05:17.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-pzhcl" for this suite. Mar 15 21:05:23.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:05:23.584: INFO: namespace: e2e-tests-containers-pzhcl, resource: bindings, ignored listing per whitelist Mar 15 21:05:23.704: INFO: namespace e2e-tests-containers-pzhcl deletion completed in 6.202604099s • [SLOW TEST:10.577 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:05:23.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 15 21:05:23.792: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aff2922e-6700-11ea-9ccf-0242ac110012" in namespace "e2e-tests-projected-p8cz2" to be "success or failure" Mar 15 21:05:23.847: INFO: Pod "downwardapi-volume-aff2922e-6700-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 54.858685ms Mar 15 21:05:25.869: INFO: Pod "downwardapi-volume-aff2922e-6700-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077078633s Mar 15 21:05:27.894: INFO: Pod "downwardapi-volume-aff2922e-6700-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.102622824s STEP: Saw pod success Mar 15 21:05:27.894: INFO: Pod "downwardapi-volume-aff2922e-6700-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:05:27.897: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-aff2922e-6700-11ea-9ccf-0242ac110012 container client-container: STEP: delete the pod Mar 15 21:05:27.918: INFO: Waiting for pod downwardapi-volume-aff2922e-6700-11ea-9ccf-0242ac110012 to disappear Mar 15 21:05:27.922: INFO: Pod downwardapi-volume-aff2922e-6700-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:05:27.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-p8cz2" for this suite. Mar 15 21:05:33.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:05:34.074: INFO: namespace: e2e-tests-projected-p8cz2, resource: bindings, ignored listing per whitelist Mar 15 21:05:34.116: INFO: namespace e2e-tests-projected-p8cz2 deletion completed in 6.190673235s • [SLOW TEST:10.411 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:05:34.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-8txgn STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-8txgn to expose endpoints map[] Mar 15 21:05:35.206: INFO: Get endpoints failed (23.221695ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Mar 15 21:05:36.210: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-8txgn exposes endpoints map[] (1.026872047s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-8txgn STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-8txgn to expose endpoints map[pod1:[100]] Mar 15 21:05:40.334: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-8txgn exposes endpoints map[pod1:[100]] (4.118534287s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-8txgn STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-8txgn to expose endpoints map[pod1:[100] pod2:[101]] Mar 15 21:05:45.626: INFO: successfully 
validated that service multi-endpoint-test in namespace e2e-tests-services-8txgn exposes endpoints map[pod1:[100] pod2:[101]] (5.287001968s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-8txgn STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-8txgn to expose endpoints map[pod2:[101]] Mar 15 21:05:46.679: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-8txgn exposes endpoints map[pod2:[101]] (1.04896769s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-8txgn STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-8txgn to expose endpoints map[] Mar 15 21:05:47.716: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-8txgn exposes endpoints map[] (1.032900016s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:05:47.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-8txgn" for this suite. Mar 15 21:05:53.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:05:54.007: INFO: namespace: e2e-tests-services-8txgn, resource: bindings, ignored listing per whitelist Mar 15 21:05:54.080: INFO: namespace e2e-tests-services-8txgn deletion completed in 6.102834822s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:19.964 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:05:54.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:05:54.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-52cj8" for this suite. 
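The Services "multiport endpoints" spec, which finishes at the top of this block, creates a two-port service and two pods, each backing one of the ports, and then watches the endpoints map change as pods come and go (map[pod1:[100]], map[pod1:[100] pod2:[101]], and so on above). A rough sketch of the service side; the port names and service ports are assumptions, and the named target ports stand in for container ports 100 and 101 observed in the endpoints map:

apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    name: multi-endpoint-test    # assumed label shared by pod1 and pod2
  ports:
  - name: portname1
    port: 80
    targetPort: port1            # a named containerPort declared only by pod1 (port 100 in the log)
  - name: portname2
    port: 81
    targetPort: port2            # a named containerPort declared only by pod2 (port 101 in the log)

With named target ports, only the pod that actually declares the named containerPort appears behind that port, which is why deleting pod1 above collapses the endpoints map to map[pod2:[101]].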
Mar 15 21:06:00.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:06:00.812: INFO: namespace: e2e-tests-kubelet-test-52cj8, resource: bindings, ignored listing per whitelist Mar 15 21:06:01.039: INFO: namespace e2e-tests-kubelet-test-52cj8 deletion completed in 6.386624519s • [SLOW TEST:6.958 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:06:01.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Mar 15 21:06:01.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-sshmp' Mar 15 21:06:01.512: INFO: stderr: "" Mar 15 21:06:01.512: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 15 21:06:01.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sshmp' Mar 15 21:06:01.701: INFO: stderr: "" Mar 15 21:06:01.701: INFO: stdout: "update-demo-nautilus-5n6h9 update-demo-nautilus-hjzgh " Mar 15 21:06:01.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5n6h9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sshmp' Mar 15 21:06:01.919: INFO: stderr: "" Mar 15 21:06:01.919: INFO: stdout: "" Mar 15 21:06:01.919: INFO: update-demo-nautilus-5n6h9 is created but not running Mar 15 21:06:06.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sshmp' Mar 15 21:06:07.022: INFO: stderr: "" Mar 15 21:06:07.022: INFO: stdout: "update-demo-nautilus-5n6h9 update-demo-nautilus-hjzgh " Mar 15 21:06:07.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5n6h9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sshmp' Mar 15 21:06:07.117: INFO: stderr: "" Mar 15 21:06:07.118: INFO: stdout: "true" Mar 15 21:06:07.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5n6h9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sshmp' Mar 15 21:06:07.248: INFO: stderr: "" Mar 15 21:06:07.248: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 15 21:06:07.248: INFO: validating pod update-demo-nautilus-5n6h9 Mar 15 21:06:07.252: INFO: got data: { "image": "nautilus.jpg" } Mar 15 21:06:07.252: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 15 21:06:07.252: INFO: update-demo-nautilus-5n6h9 is verified up and running Mar 15 21:06:07.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hjzgh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sshmp' Mar 15 21:06:07.350: INFO: stderr: "" Mar 15 21:06:07.350: INFO: stdout: "true" Mar 15 21:06:07.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hjzgh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sshmp' Mar 15 21:06:07.449: INFO: stderr: "" Mar 15 21:06:07.449: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 15 21:06:07.449: INFO: validating pod update-demo-nautilus-hjzgh Mar 15 21:06:07.452: INFO: got data: { "image": "nautilus.jpg" } Mar 15 21:06:07.452: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 15 21:06:07.452: INFO: update-demo-nautilus-hjzgh is verified up and running STEP: scaling down the replication controller Mar 15 21:06:07.455: INFO: scanned /root for discovery docs: Mar 15 21:06:07.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-sshmp' Mar 15 21:06:08.581: INFO: stderr: "" Mar 15 21:06:08.581: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 15 21:06:08.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sshmp' Mar 15 21:06:08.675: INFO: stderr: "" Mar 15 21:06:08.675: INFO: stdout: "update-demo-nautilus-5n6h9 update-demo-nautilus-hjzgh " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 15 21:06:13.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sshmp' Mar 15 21:06:13.785: INFO: stderr: "" Mar 15 21:06:13.785: INFO: stdout: "update-demo-nautilus-5n6h9 update-demo-nautilus-hjzgh " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 15 21:06:18.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sshmp' Mar 15 21:06:19.128: INFO: stderr: "" Mar 15 21:06:19.128: INFO: stdout: "update-demo-nautilus-5n6h9 update-demo-nautilus-hjzgh " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 15 21:06:24.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sshmp' Mar 15 21:06:24.237: INFO: stderr: "" Mar 15 21:06:24.237: INFO: stdout: "update-demo-nautilus-hjzgh " Mar 15 21:06:24.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hjzgh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sshmp' Mar 15 21:06:24.330: INFO: stderr: "" Mar 15 21:06:24.330: INFO: stdout: "true" Mar 15 21:06:24.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hjzgh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sshmp' Mar 15 21:06:24.422: INFO: stderr: "" Mar 15 21:06:24.422: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 15 21:06:24.422: INFO: validating pod update-demo-nautilus-hjzgh Mar 15 21:06:24.425: INFO: got data: { "image": "nautilus.jpg" } Mar 15 21:06:24.425: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 15 21:06:24.425: INFO: update-demo-nautilus-hjzgh is verified up and running STEP: scaling up the replication controller Mar 15 21:06:24.427: INFO: scanned /root for discovery docs: Mar 15 21:06:24.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-sshmp' Mar 15 21:06:25.828: INFO: stderr: "" Mar 15 21:06:25.828: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 15 21:06:25.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sshmp' Mar 15 21:06:25.932: INFO: stderr: "" Mar 15 21:06:25.932: INFO: stdout: "update-demo-nautilus-hjzgh update-demo-nautilus-mm5nl " Mar 15 21:06:25.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hjzgh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sshmp' Mar 15 21:06:26.048: INFO: stderr: "" Mar 15 21:06:26.048: INFO: stdout: "true" Mar 15 21:06:26.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hjzgh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sshmp' Mar 15 21:06:26.155: INFO: stderr: "" Mar 15 21:06:26.155: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 15 21:06:26.155: INFO: validating pod update-demo-nautilus-hjzgh Mar 15 21:06:26.159: INFO: got data: { "image": "nautilus.jpg" } Mar 15 21:06:26.159: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 15 21:06:26.159: INFO: update-demo-nautilus-hjzgh is verified up and running Mar 15 21:06:26.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mm5nl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sshmp' Mar 15 21:06:26.389: INFO: stderr: "" Mar 15 21:06:26.389: INFO: stdout: "" Mar 15 21:06:26.389: INFO: update-demo-nautilus-mm5nl is created but not running Mar 15 21:06:31.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sshmp' Mar 15 21:06:31.492: INFO: stderr: "" Mar 15 21:06:31.492: INFO: stdout: "update-demo-nautilus-hjzgh update-demo-nautilus-mm5nl " Mar 15 21:06:31.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hjzgh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sshmp' Mar 15 21:06:31.591: INFO: stderr: "" Mar 15 21:06:31.591: INFO: stdout: "true" Mar 15 21:06:31.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hjzgh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sshmp' Mar 15 21:06:31.690: INFO: stderr: "" Mar 15 21:06:31.690: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 15 21:06:31.690: INFO: validating pod update-demo-nautilus-hjzgh Mar 15 21:06:31.694: INFO: got data: { "image": "nautilus.jpg" } Mar 15 21:06:31.694: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 15 21:06:31.694: INFO: update-demo-nautilus-hjzgh is verified up and running Mar 15 21:06:31.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mm5nl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sshmp' Mar 15 21:06:31.780: INFO: stderr: "" Mar 15 21:06:31.780: INFO: stdout: "true" Mar 15 21:06:31.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mm5nl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sshmp' Mar 15 21:06:31.873: INFO: stderr: "" Mar 15 21:06:31.873: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 15 21:06:31.873: INFO: validating pod update-demo-nautilus-mm5nl Mar 15 21:06:31.878: INFO: got data: { "image": "nautilus.jpg" } Mar 15 21:06:31.878: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 15 21:06:31.878: INFO: update-demo-nautilus-mm5nl is verified up and running STEP: using delete to clean up resources Mar 15 21:06:31.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-sshmp' Mar 15 21:06:31.986: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 15 21:06:31.986: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 15 21:06:31.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-sshmp' Mar 15 21:06:32.094: INFO: stderr: "No resources found.\n" Mar 15 21:06:32.094: INFO: stdout: "" Mar 15 21:06:32.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-sshmp -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 15 21:06:32.186: INFO: stderr: "" Mar 15 21:06:32.186: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:06:32.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-sshmp" for this suite. 
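The cleanup at the end of this test uses kubectl's force-delete path and then confirms nothing is left; a generic sketch of that pattern (the manifest is fed on stdin, <namespace> is a placeholder):

  # Force-delete the resources described by a manifest piped on stdin
  kubectl delete --grace-period=0 --force -f - --namespace=<namespace>

  # Verify nothing matching the label survived (ignoring pods already marked for deletion)
  kubectl get rc,svc -l name=update-demo --no-headers --namespace=<namespace>
  kubectl get pods -l name=update-demo --namespace=<namespace> \
    -o go-template='{{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'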
Mar 15 21:06:54.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:06:54.453: INFO: namespace: e2e-tests-kubectl-sshmp, resource: bindings, ignored listing per whitelist Mar 15 21:06:54.492: INFO: namespace e2e-tests-kubectl-sshmp deletion completed in 22.303171988s • [SLOW TEST:53.453 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:06:54.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 15 21:06:54.614: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:06:54.617: INFO: Number of nodes with available pods: 0 Mar 15 21:06:54.617: INFO: Node hunter-worker is running more than one daemon pod Mar 15 21:06:55.622: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:06:55.626: INFO: Number of nodes with available pods: 0 Mar 15 21:06:55.626: INFO: Node hunter-worker is running more than one daemon pod Mar 15 21:06:56.651: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:06:56.654: INFO: Number of nodes with available pods: 0 Mar 15 21:06:56.654: INFO: Node hunter-worker is running more than one daemon pod Mar 15 21:06:57.622: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:06:57.625: INFO: Number of nodes with available pods: 0 Mar 15 21:06:57.625: INFO: Node hunter-worker is running more than one daemon pod Mar 15 21:06:58.621: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:06:58.624: INFO: Number of nodes with available pods: 0 Mar 15 21:06:58.624: INFO: Node hunter-worker is running more than one daemon pod Mar 15 21:06:59.699: INFO: DaemonSet pods can't tolerate node 
hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:06:59.702: INFO: Number of nodes with available pods: 2 Mar 15 21:06:59.702: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Mar 15 21:06:59.903: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:06:59.907: INFO: Number of nodes with available pods: 1 Mar 15 21:06:59.907: INFO: Node hunter-worker is running more than one daemon pod Mar 15 21:07:00.912: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:07:00.916: INFO: Number of nodes with available pods: 1 Mar 15 21:07:00.916: INFO: Node hunter-worker is running more than one daemon pod Mar 15 21:07:01.912: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:07:01.916: INFO: Number of nodes with available pods: 1 Mar 15 21:07:01.916: INFO: Node hunter-worker is running more than one daemon pod Mar 15 21:07:02.912: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:07:02.915: INFO: Number of nodes with available pods: 1 Mar 15 21:07:02.915: INFO: Node hunter-worker is running more than one daemon pod Mar 15 21:07:03.912: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:07:03.916: INFO: Number of nodes with available pods: 1 Mar 15 21:07:03.916: INFO: Node hunter-worker is running more than one daemon pod Mar 15 21:07:04.912: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:07:04.915: INFO: Number of nodes with available pods: 1 Mar 15 21:07:04.915: INFO: Node hunter-worker is running more than one daemon pod Mar 15 21:07:05.912: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:07:05.916: INFO: Number of nodes with available pods: 1 Mar 15 21:07:05.916: INFO: Node hunter-worker is running more than one daemon pod Mar 15 21:07:06.912: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:07:06.916: INFO: Number of nodes with available pods: 1 Mar 15 21:07:06.916: INFO: Node hunter-worker is running more than one daemon pod Mar 15 21:07:07.912: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:07:07.916: INFO: Number of nodes with available pods: 1 Mar 15 21:07:07.916: INFO: Node hunter-worker is running more than one daemon pod Mar 15 21:07:08.913: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:07:08.916: INFO: Number of nodes with available pods: 1 Mar 15 21:07:08.916: INFO: Node hunter-worker is running more than one daemon pod Mar 15 21:07:09.913: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:07:09.916: INFO: Number of nodes with available pods: 1 Mar 15 21:07:09.916: INFO: Node hunter-worker is running more than one daemon pod Mar 15 21:07:10.912: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:07:10.916: INFO: Number of nodes with available pods: 1 Mar 15 21:07:10.916: INFO: Node hunter-worker is running more than one daemon pod Mar 15 21:07:11.912: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:07:11.915: INFO: Number of nodes with available pods: 1 Mar 15 21:07:11.915: INFO: Node hunter-worker is running more than one daemon pod Mar 15 21:07:12.921: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:07:12.925: INFO: Number of nodes with available pods: 1 Mar 15 21:07:12.925: INFO: Node hunter-worker is running more than one daemon pod Mar 15 21:07:13.913: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:07:13.916: INFO: Number of nodes with available pods: 1 Mar 15 21:07:13.916: INFO: Node hunter-worker is running more than one daemon pod Mar 15 21:07:14.912: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:07:14.916: INFO: Number of nodes with available pods: 1 Mar 15 21:07:14.916: INFO: Node hunter-worker is running more than one daemon pod Mar 15 21:07:15.913: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:07:15.917: INFO: Number of nodes with available pods: 1 Mar 15 21:07:15.917: INFO: Node hunter-worker is running more than one daemon pod Mar 15 21:07:16.911: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:07:16.914: INFO: Number of nodes with available pods: 2 Mar 15 21:07:16.914: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-rhhgs, will wait for the garbage collector to delete the pods Mar 15 21:07:16.976: INFO: Deleting DaemonSet.extensions daemon-set took: 6.341726ms Mar 15 21:07:17.077: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.24197ms Mar 15 21:07:21.780: INFO: Number of nodes with available pods: 0 
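The repeated "can't tolerate node hunter-control-plane" messages come from the test inspecting node taints before counting daemon pods. A rough manual equivalent (illustrative only, not part of the test; the jsonpath expressions are assumptions about how one would query the same fields):

  # Show each node's taint keys; tainted control-plane nodes are skipped by the check
  kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints[*].key}{"\n"}{end}'

  # Watch the DaemonSet's desired vs. available pod counts converge
  kubectl get daemonset daemon-set -n <namespace> \
    -o jsonpath='{.status.desiredNumberScheduled} {.status.numberAvailable}{"\n"}'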
Mar 15 21:07:21.780: INFO: Number of running nodes: 0, number of available pods: 0 Mar 15 21:07:21.783: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-rhhgs/daemonsets","resourceVersion":"25026"},"items":null} Mar 15 21:07:21.785: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-rhhgs/pods","resourceVersion":"25026"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:07:21.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-rhhgs" for this suite. Mar 15 21:07:27.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:07:27.953: INFO: namespace: e2e-tests-daemonsets-rhhgs, resource: bindings, ignored listing per whitelist Mar 15 21:07:28.015: INFO: namespace e2e-tests-daemonsets-rhhgs deletion completed in 6.219179202s • [SLOW TEST:33.523 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:07:28.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 15 21:07:28.209: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fa1af89e-6700-11ea-9ccf-0242ac110012" in namespace "e2e-tests-projected-9lq26" to be "success or failure" Mar 15 21:07:28.213: INFO: Pod "downwardapi-volume-fa1af89e-6700-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066004ms Mar 15 21:07:30.216: INFO: Pod "downwardapi-volume-fa1af89e-6700-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007350076s Mar 15 21:07:32.220: INFO: Pod "downwardapi-volume-fa1af89e-6700-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011544699s STEP: Saw pod success Mar 15 21:07:32.220: INFO: Pod "downwardapi-volume-fa1af89e-6700-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:07:32.223: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-fa1af89e-6700-11ea-9ccf-0242ac110012 container client-container: STEP: delete the pod Mar 15 21:07:32.257: INFO: Waiting for pod downwardapi-volume-fa1af89e-6700-11ea-9ccf-0242ac110012 to disappear Mar 15 21:07:32.273: INFO: Pod downwardapi-volume-fa1af89e-6700-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:07:32.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9lq26" for this suite. Mar 15 21:07:38.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:07:38.304: INFO: namespace: e2e-tests-projected-9lq26, resource: bindings, ignored listing per whitelist Mar 15 21:07:38.368: INFO: namespace e2e-tests-projected-9lq26 deletion completed in 6.091722576s • [SLOW TEST:10.352 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:07:38.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-003b25da-6701-11ea-9ccf-0242ac110012 STEP: Creating a pod to test consume configMaps Mar 15 21:07:38.562: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0047c480-6701-11ea-9ccf-0242ac110012" in namespace "e2e-tests-projected-kgz8b" to be "success or failure" Mar 15 21:07:38.759: INFO: Pod "pod-projected-configmaps-0047c480-6701-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 196.974102ms Mar 15 21:07:40.763: INFO: Pod "pod-projected-configmaps-0047c480-6701-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20110592s Mar 15 21:07:42.767: INFO: Pod "pod-projected-configmaps-0047c480-6701-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.205208637s STEP: Saw pod success Mar 15 21:07:42.767: INFO: Pod "pod-projected-configmaps-0047c480-6701-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:07:42.770: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-0047c480-6701-11ea-9ccf-0242ac110012 container projected-configmap-volume-test: STEP: delete the pod Mar 15 21:07:42.922: INFO: Waiting for pod pod-projected-configmaps-0047c480-6701-11ea-9ccf-0242ac110012 to disappear Mar 15 21:07:43.094: INFO: Pod pod-projected-configmaps-0047c480-6701-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:07:43.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-kgz8b" for this suite. Mar 15 21:07:49.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:07:49.155: INFO: namespace: e2e-tests-projected-kgz8b, resource: bindings, ignored listing per whitelist Mar 15 21:07:49.241: INFO: namespace e2e-tests-projected-kgz8b deletion completed in 6.143827936s • [SLOW TEST:10.873 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:07:49.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:07:49.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-2gxcc" for this suite. 
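The "secure master service" check that follows amounts to verifying that the built-in kubernetes Service in the default namespace exposes the API server over HTTPS; roughly the same check done by hand (illustrative, not the test's own code):

  # The API server is published as the 'kubernetes' Service in 'default'
  kubectl get service kubernetes -n default \
    -o jsonpath='{.spec.ports[0].name} {.spec.ports[0].port}{"\n"}'   # typically: https 443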
Mar 15 21:07:55.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:07:55.468: INFO: namespace: e2e-tests-services-2gxcc, resource: bindings, ignored listing per whitelist Mar 15 21:07:55.484: INFO: namespace e2e-tests-services-2gxcc deletion completed in 6.1318798s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.243 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:07:55.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:07:55.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-q9j9s" for this suite. 
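The QOS-class verification in the pod test above reads the class back from pod status; the equivalent manual check (pod name and namespace are placeholders) is roughly:

  # Guaranteed, Burstable or BestEffort, derived from the pod's resource requests/limits
  kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.status.qosClass}{"\n"}'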
Mar 15 21:08:30.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:08:30.039: INFO: namespace: e2e-tests-pods-q9j9s, resource: bindings, ignored listing per whitelist Mar 15 21:08:30.103: INFO: namespace e2e-tests-pods-q9j9s deletion completed in 34.330606915s • [SLOW TEST:34.619 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:08:30.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0315 21:08:42.043156 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
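The ownership relationships this garbage-collector test manipulates are ordinary ownerReferences on the pods; a pod listed under two owners survives deletion of one of them. They can be inspected directly (pod name and namespace are placeholders):

  # Each entry is an owning controller recorded on the pod
  kubectl get pod <pod-name> -n <namespace> \
    -o jsonpath='{range .metadata.ownerReferences[*]}{.kind}/{.name}{"\n"}{end}'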
Mar 15 21:08:42.043: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:08:42.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-wgtjn" for this suite. Mar 15 21:08:52.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:08:52.116: INFO: namespace: e2e-tests-gc-wgtjn, resource: bindings, ignored listing per whitelist Mar 15 21:08:52.132: INFO: namespace e2e-tests-gc-wgtjn deletion completed in 10.085876738s • [SLOW TEST:22.028 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:08:52.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 15 21:08:52.499: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2c56f22f-6701-11ea-9ccf-0242ac110012" in namespace "e2e-tests-downward-api-jhgvl" to be "success or failure" Mar 15 21:08:52.509: INFO: Pod "downwardapi-volume-2c56f22f-6701-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.731358ms Mar 15 21:08:54.513: INFO: Pod "downwardapi-volume-2c56f22f-6701-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014356899s Mar 15 21:08:56.742: INFO: Pod "downwardapi-volume-2c56f22f-6701-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.242683574s STEP: Saw pod success Mar 15 21:08:56.742: INFO: Pod "downwardapi-volume-2c56f22f-6701-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:08:56.745: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-2c56f22f-6701-11ea-9ccf-0242ac110012 container client-container: STEP: delete the pod Mar 15 21:08:56.792: INFO: Waiting for pod downwardapi-volume-2c56f22f-6701-11ea-9ccf-0242ac110012 to disappear Mar 15 21:08:56.963: INFO: Pod downwardapi-volume-2c56f22f-6701-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:08:56.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-jhgvl" for this suite. Mar 15 21:09:05.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:09:05.071: INFO: namespace: e2e-tests-downward-api-jhgvl, resource: bindings, ignored listing per whitelist Mar 15 21:09:05.106: INFO: namespace e2e-tests-downward-api-jhgvl deletion completed in 8.139773789s • [SLOW TEST:12.975 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:09:05.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:09:13.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-fwjcr" for this suite. 
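The DefaultMode behaviour exercised a couple of tests above is simply the file-permission field on downward-API (and projected) volume sources; its documentation can be pulled from the API schema with kubectl explain (field paths assumed from the v1 Pod spec):

  # Default file mode applied to downward API volume items
  kubectl explain pod.spec.volumes.downwardAPI.defaultMode
  # Per-item override
  kubectl explain pod.spec.volumes.downwardAPI.items.mode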
Mar 15 21:09:19.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:09:19.438: INFO: namespace: e2e-tests-emptydir-wrapper-fwjcr, resource: bindings, ignored listing per whitelist Mar 15 21:09:19.498: INFO: namespace e2e-tests-emptydir-wrapper-fwjcr deletion completed in 6.114843745s • [SLOW TEST:14.391 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:09:19.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 15 21:09:19.650: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-rdg6d,SelfLink:/api/v1/namespaces/e2e-tests-watch-rdg6d/configmaps/e2e-watch-test-watch-closed,UID:3c86d2a5-6701-11ea-99e8-0242ac110002,ResourceVersion:25599,Generation:0,CreationTimestamp:2020-03-15 21:09:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 15 21:09:19.650: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-rdg6d,SelfLink:/api/v1/namespaces/e2e-tests-watch-rdg6d/configmaps/e2e-watch-test-watch-closed,UID:3c86d2a5-6701-11ea-99e8-0242ac110002,ResourceVersion:25600,Generation:0,CreationTimestamp:2020-03-15 21:09:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 15 21:09:19.662: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-rdg6d,SelfLink:/api/v1/namespaces/e2e-tests-watch-rdg6d/configmaps/e2e-watch-test-watch-closed,UID:3c86d2a5-6701-11ea-99e8-0242ac110002,ResourceVersion:25601,Generation:0,CreationTimestamp:2020-03-15 21:09:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 15 21:09:19.662: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-rdg6d,SelfLink:/api/v1/namespaces/e2e-tests-watch-rdg6d/configmaps/e2e-watch-test-watch-closed,UID:3c86d2a5-6701-11ea-99e8-0242ac110002,ResourceVersion:25602,Generation:0,CreationTimestamp:2020-03-15 21:09:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:09:19.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-rdg6d" for this suite. Mar 15 21:09:25.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:09:26.011: INFO: namespace: e2e-tests-watch-rdg6d, resource: bindings, ignored listing per whitelist Mar 15 21:09:26.041: INFO: namespace e2e-tests-watch-rdg6d deletion completed in 6.376188429s • [SLOW TEST:6.543 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:09:26.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-406b4329-6701-11ea-9ccf-0242ac110012 STEP: Creating secret with name s-test-opt-upd-406b4395-6701-11ea-9ccf-0242ac110012 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-406b4329-6701-11ea-9ccf-0242ac110012 STEP: Updating secret s-test-opt-upd-406b4395-6701-11ea-9ccf-0242ac110012 STEP: Creating secret with name 
s-test-opt-create-406b43b4-6701-11ea-9ccf-0242ac110012 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:09:36.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-pchx9" for this suite. Mar 15 21:10:00.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:10:00.561: INFO: namespace: e2e-tests-secrets-pchx9, resource: bindings, ignored listing per whitelist Mar 15 21:10:00.574: INFO: namespace e2e-tests-secrets-pchx9 deletion completed in 24.100394176s • [SLOW TEST:34.533 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:10:00.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Mar 15 21:10:00.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-sbn7g run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Mar 15 21:10:06.356: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0315 21:10:06.281039 2517 log.go:172] (0xc0007782c0) (0xc000357900) Create stream\nI0315 21:10:06.281095 2517 log.go:172] (0xc0007782c0) (0xc000357900) Stream added, broadcasting: 1\nI0315 21:10:06.284696 2517 log.go:172] (0xc0007782c0) Reply frame received for 1\nI0315 21:10:06.284740 2517 log.go:172] (0xc0007782c0) (0xc0003579a0) Create stream\nI0315 21:10:06.284757 2517 log.go:172] (0xc0007782c0) (0xc0003579a0) Stream added, broadcasting: 3\nI0315 21:10:06.286974 2517 log.go:172] (0xc0007782c0) Reply frame received for 3\nI0315 21:10:06.287006 2517 log.go:172] (0xc0007782c0) (0xc000357a40) Create stream\nI0315 21:10:06.287022 2517 log.go:172] (0xc0007782c0) (0xc000357a40) Stream added, broadcasting: 5\nI0315 21:10:06.287804 2517 log.go:172] (0xc0007782c0) Reply frame received for 5\nI0315 21:10:06.287861 2517 log.go:172] (0xc0007782c0) (0xc000a46000) Create stream\nI0315 21:10:06.287884 2517 log.go:172] (0xc0007782c0) (0xc000a46000) Stream added, broadcasting: 7\nI0315 21:10:06.288596 2517 log.go:172] (0xc0007782c0) Reply frame received for 7\nI0315 21:10:06.289267 2517 log.go:172] (0xc0003579a0) (3) Writing data frame\nI0315 21:10:06.289430 2517 log.go:172] (0xc0003579a0) (3) Writing data frame\nI0315 21:10:06.291102 2517 log.go:172] (0xc0007782c0) Data frame received for 5\nI0315 21:10:06.291122 2517 log.go:172] (0xc000357a40) (5) Data frame handling\nI0315 21:10:06.291135 2517 log.go:172] (0xc000357a40) (5) Data frame sent\nI0315 21:10:06.291574 2517 log.go:172] (0xc0007782c0) Data frame received for 5\nI0315 21:10:06.291589 2517 log.go:172] (0xc000357a40) (5) Data frame handling\nI0315 21:10:06.291601 2517 log.go:172] (0xc000357a40) (5) Data frame sent\nI0315 21:10:06.320239 2517 log.go:172] (0xc0007782c0) Data frame received for 5\nI0315 21:10:06.320272 2517 log.go:172] (0xc000357a40) (5) Data frame handling\nI0315 21:10:06.320395 2517 log.go:172] (0xc0007782c0) Data frame received for 7\nI0315 21:10:06.320430 2517 log.go:172] (0xc000a46000) (7) Data frame handling\nI0315 21:10:06.320786 2517 log.go:172] (0xc0007782c0) Data frame received for 1\nI0315 21:10:06.320808 2517 log.go:172] (0xc000357900) (1) Data frame handling\nI0315 21:10:06.320884 2517 log.go:172] (0xc000357900) (1) Data frame sent\nI0315 21:10:06.320904 2517 log.go:172] (0xc0007782c0) (0xc000357900) Stream removed, broadcasting: 1\nI0315 21:10:06.320983 2517 log.go:172] (0xc0007782c0) (0xc000357900) Stream removed, broadcasting: 1\nI0315 21:10:06.321033 2517 log.go:172] (0xc0007782c0) (0xc0003579a0) Stream removed, broadcasting: 3\nI0315 21:10:06.321066 2517 log.go:172] (0xc0007782c0) (0xc000357a40) Stream removed, broadcasting: 5\nI0315 21:10:06.321101 2517 log.go:172] (0xc0007782c0) (0xc000a46000) Stream removed, broadcasting: 7\nI0315 21:10:06.321283 2517 log.go:172] (0xc0007782c0) Go away received\n" Mar 15 21:10:06.356: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:10:08.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-sbn7g" for this suite. 
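The command driving this test is worth calling out because of the deprecation warning in its stderr; its generic shape, with the image and arguments as captured in the log and <namespace> as a placeholder:

  # Run a one-off Job from an image, attach with stdin, and remove the Job when it exits
  # (--generator=job/v1 is deprecated on this kubectl; the warning suggests
  #  --generator=run-pod/v1 or kubectl create instead)
  kubectl run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 \
    --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin \
    --namespace=<namespace> -- sh -c 'cat && echo "stdin closed"'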
Mar 15 21:10:14.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:10:14.466: INFO: namespace: e2e-tests-kubectl-sbn7g, resource: bindings, ignored listing per whitelist Mar 15 21:10:14.481: INFO: namespace e2e-tests-kubectl-sbn7g deletion completed in 6.081048824s • [SLOW TEST:13.906 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:10:14.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-5d773d0c-6701-11ea-9ccf-0242ac110012 STEP: Creating a pod to test consume secrets Mar 15 21:10:15.478: INFO: Waiting up to 5m0s for pod "pod-secrets-5dc424bf-6701-11ea-9ccf-0242ac110012" in namespace "e2e-tests-secrets-vpmlk" to be "success or failure" Mar 15 21:10:15.587: INFO: Pod "pod-secrets-5dc424bf-6701-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 109.46049ms Mar 15 21:10:17.592: INFO: Pod "pod-secrets-5dc424bf-6701-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114135325s Mar 15 21:10:19.875: INFO: Pod "pod-secrets-5dc424bf-6701-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 4.397569279s Mar 15 21:10:21.879: INFO: Pod "pod-secrets-5dc424bf-6701-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.401712274s STEP: Saw pod success Mar 15 21:10:21.879: INFO: Pod "pod-secrets-5dc424bf-6701-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:10:21.882: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-5dc424bf-6701-11ea-9ccf-0242ac110012 container secret-volume-test: STEP: delete the pod Mar 15 21:10:21.937: INFO: Waiting for pod pod-secrets-5dc424bf-6701-11ea-9ccf-0242ac110012 to disappear Mar 15 21:10:21.942: INFO: Pod pod-secrets-5dc424bf-6701-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:10:21.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-vpmlk" for this suite. 
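The point of the cross-namespace secret test that follows is that secret names are only unique per namespace; a pod only ever mounts the secret from its own namespace. A manual illustration (the namespace names and literal data are invented for the example):

  # Same secret name in two namespaces; the contents stay independent
  kubectl create namespace ns-a
  kubectl create namespace ns-b
  kubectl create secret generic secret-test --from-literal=password=value-a -n ns-a
  kubectl create secret generic secret-test --from-literal=password=value-b -n ns-b
  kubectl get secret secret-test -n ns-a -o jsonpath='{.data.password}' | base64 -d   # value-a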
Mar 15 21:10:28.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:10:28.295: INFO: namespace: e2e-tests-secrets-vpmlk, resource: bindings, ignored listing per whitelist Mar 15 21:10:28.296: INFO: namespace e2e-tests-secrets-vpmlk deletion completed in 6.351838984s STEP: Destroying namespace "e2e-tests-secret-namespace-2t2tc" for this suite. Mar 15 21:10:34.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:10:34.390: INFO: namespace: e2e-tests-secret-namespace-2t2tc, resource: bindings, ignored listing per whitelist Mar 15 21:10:34.405: INFO: namespace e2e-tests-secret-namespace-2t2tc deletion completed in 6.108556593s • [SLOW TEST:19.924 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:10:34.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-69415da1-6701-11ea-9ccf-0242ac110012 STEP: Creating a pod to test consume configMaps Mar 15 21:10:34.795: INFO: Waiting up to 5m0s for pod "pod-configmaps-694318de-6701-11ea-9ccf-0242ac110012" in namespace "e2e-tests-configmap-fcm97" to be "success or failure" Mar 15 21:10:34.805: INFO: Pod "pod-configmaps-694318de-6701-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 9.76721ms Mar 15 21:10:36.893: INFO: Pod "pod-configmaps-694318de-6701-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097657009s Mar 15 21:10:38.897: INFO: Pod "pod-configmaps-694318de-6701-11ea-9ccf-0242ac110012": Phase="Running", Reason="", readiness=true. Elapsed: 4.101821207s Mar 15 21:10:40.901: INFO: Pod "pod-configmaps-694318de-6701-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.105921939s STEP: Saw pod success Mar 15 21:10:40.901: INFO: Pod "pod-configmaps-694318de-6701-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:10:40.904: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-694318de-6701-11ea-9ccf-0242ac110012 container configmap-volume-test: STEP: delete the pod Mar 15 21:10:41.255: INFO: Waiting for pod pod-configmaps-694318de-6701-11ea-9ccf-0242ac110012 to disappear Mar 15 21:10:41.480: INFO: Pod pod-configmaps-694318de-6701-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:10:41.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-fcm97" for this suite. Mar 15 21:10:47.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:10:47.596: INFO: namespace: e2e-tests-configmap-fcm97, resource: bindings, ignored listing per whitelist Mar 15 21:10:47.642: INFO: namespace e2e-tests-configmap-fcm97 deletion completed in 6.15341725s • [SLOW TEST:13.237 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:10:47.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 15 21:10:47.757: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 15 21:10:52.760: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:10:54.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-bt8sc" for this suite. 
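The "release" in the ReplicationController test above happens because controllers only own pods whose labels match their selector; relabelling a pod by hand shows the same behaviour (pod name, label value and namespace are placeholders):

  # Relabel the pod so it no longer matches the RC's selector; the RC releases it
  # (drops the ownerReference) and creates a replacement pod to restore the replica count
  kubectl label pod <pod-name> name=released --overwrite -n <namespace>
  kubectl get pods -l name=pod-release -n <namespace>   # replacement shows up here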
Mar 15 21:11:02.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:11:02.534: INFO: namespace: e2e-tests-replication-controller-bt8sc, resource: bindings, ignored listing per whitelist Mar 15 21:11:02.555: INFO: namespace e2e-tests-replication-controller-bt8sc deletion completed in 8.423806142s • [SLOW TEST:14.913 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:11:02.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-zcp6 STEP: Creating a pod to test atomic-volume-subpath Mar 15 21:11:02.693: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-zcp6" in namespace "e2e-tests-subpath-5486m" to be "success or failure" Mar 15 21:11:02.712: INFO: Pod "pod-subpath-test-projected-zcp6": Phase="Pending", Reason="", readiness=false. Elapsed: 18.13561ms Mar 15 21:11:04.716: INFO: Pod "pod-subpath-test-projected-zcp6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022007237s Mar 15 21:11:06.738: INFO: Pod "pod-subpath-test-projected-zcp6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0445362s Mar 15 21:11:08.810: INFO: Pod "pod-subpath-test-projected-zcp6": Phase="Running", Reason="", readiness=true. Elapsed: 6.116142201s Mar 15 21:11:10.814: INFO: Pod "pod-subpath-test-projected-zcp6": Phase="Running", Reason="", readiness=false. Elapsed: 8.120664897s Mar 15 21:11:13.319: INFO: Pod "pod-subpath-test-projected-zcp6": Phase="Running", Reason="", readiness=false. Elapsed: 10.62517206s Mar 15 21:11:15.323: INFO: Pod "pod-subpath-test-projected-zcp6": Phase="Running", Reason="", readiness=false. Elapsed: 12.629031001s Mar 15 21:11:17.327: INFO: Pod "pod-subpath-test-projected-zcp6": Phase="Running", Reason="", readiness=false. Elapsed: 14.633579434s Mar 15 21:11:19.332: INFO: Pod "pod-subpath-test-projected-zcp6": Phase="Running", Reason="", readiness=false. Elapsed: 16.638269827s Mar 15 21:11:21.335: INFO: Pod "pod-subpath-test-projected-zcp6": Phase="Running", Reason="", readiness=false. Elapsed: 18.641357334s Mar 15 21:11:23.338: INFO: Pod "pod-subpath-test-projected-zcp6": Phase="Running", Reason="", readiness=false. Elapsed: 20.644877207s Mar 15 21:11:25.342: INFO: Pod "pod-subpath-test-projected-zcp6": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.6481376s Mar 15 21:11:27.347: INFO: Pod "pod-subpath-test-projected-zcp6": Phase="Running", Reason="", readiness=false. Elapsed: 24.653145516s Mar 15 21:11:29.505: INFO: Pod "pod-subpath-test-projected-zcp6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.811087855s STEP: Saw pod success Mar 15 21:11:29.505: INFO: Pod "pod-subpath-test-projected-zcp6" satisfied condition "success or failure" Mar 15 21:11:29.508: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-projected-zcp6 container test-container-subpath-projected-zcp6: STEP: delete the pod Mar 15 21:11:30.129: INFO: Waiting for pod pod-subpath-test-projected-zcp6 to disappear Mar 15 21:11:30.173: INFO: Pod pod-subpath-test-projected-zcp6 no longer exists STEP: Deleting pod pod-subpath-test-projected-zcp6 Mar 15 21:11:30.173: INFO: Deleting pod "pod-subpath-test-projected-zcp6" in namespace "e2e-tests-subpath-5486m" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:11:30.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-5486m" for this suite. Mar 15 21:11:38.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:11:38.365: INFO: namespace: e2e-tests-subpath-5486m, resource: bindings, ignored listing per whitelist Mar 15 21:11:38.417: INFO: namespace e2e-tests-subpath-5486m deletion completed in 8.238107568s • [SLOW TEST:35.861 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:11:38.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0315 21:12:09.907242 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
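The Subpath test that finishes above mounts a projected volume into the container through a subPath, so the container sees a single entry of the volume at the mount point rather than the whole volume; the long Running phase in the polling reflects the container exercising that mounted file for a while before exiting. Below is a rough shape of the volume and mount fields involved, printed by a small Go program; the projected source and the subPath key are hypothetical, not the names the test generates.

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Illustrative shape only: a projected volume (sourcing a hypothetical
	// ConfigMap) mounted with subPath, so the container gets just that entry.
	podSpec := map[string]interface{}{
		"volumes": []map[string]interface{}{{
			"name": "projected-volume",
			"projected": map[string]interface{}{
				"sources": []map[string]interface{}{{
					"configMap": map[string]interface{}{"name": "my-configmap"},
				}},
			},
		}},
		"containers": []map[string]interface{}{{
			"name":  "test-container-subpath",
			"image": "busybox",
			"volumeMounts": []map[string]interface{}{{
				"name":      "projected-volume",
				"mountPath": "/test-volume",
				"subPath":   "my-key", // only this entry of the volume is mounted
			}},
		}},
	}
	out, _ := json.MarshalIndent(podSpec, "", "  ")
	fmt.Println(string(out))
}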
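The garbage-collector test above deletes the deployment with deleteOptions.PropagationPolicy set to Orphan, which tells the API server to clear the ReplicaSet's ownerReference instead of cascading the delete; the 30-second wait is there to confirm the garbage collector then leaves the orphaned RS alone. A hedged sketch of the one field that matters in the delete request body (the full body also carries the usual apiVersion/kind metadata):

package main

import (
	"encoding/json"
	"fmt"
)

// The field the garbage-collector test relies on: Orphan propagation removes
// the Deployment but leaves its ReplicaSet behind with ownerReferences cleared.
type deleteOptions struct {
	PropagationPolicy string `json:"propagationPolicy"`
}

func main() {
	body, _ := json.Marshal(deleteOptions{PropagationPolicy: "Orphan"})
	fmt.Println(string(body)) // {"propagationPolicy":"Orphan"}
	// Roughly what `kubectl delete deployment <name> --cascade=orphan` sends on
	// recent kubectl versions (older releases expressed this as --cascade=false).
}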
Mar 15 21:12:09.907: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:12:09.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-khjqc" for this suite. Mar 15 21:12:15.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:12:16.409: INFO: namespace: e2e-tests-gc-khjqc, resource: bindings, ignored listing per whitelist Mar 15 21:12:16.428: INFO: namespace e2e-tests-gc-khjqc deletion completed in 6.517630217s • [SLOW TEST:38.010 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:12:16.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 21:12:16.532: INFO: Creating deployment "nginx-deployment" Mar 15 21:12:16.561: INFO: Waiting for observed generation 1 Mar 15 21:12:18.699: INFO: Waiting for all required pods to come up Mar 15 21:12:18.702: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 15 21:12:28.805: INFO: Waiting for deployment "nginx-deployment" to complete Mar 15 21:12:28.811: INFO: Updating deployment "nginx-deployment" with a non-existent image Mar 15 21:12:28.816: INFO: Updating deployment nginx-deployment 
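In the deployment test that continues below, nginx-deployment is first rolled out to 10 ready pods, then updated to the non-existent nginx:404 image (so the new ReplicaSet can never become available), and then scaled from 10 to 30 mid-rollout. With the RollingUpdate strategy shown later in the dump (maxSurge=3, maxUnavailable=2), the controller spreads the extra replicas across both ReplicaSets in proportion to their current sizes, which is why the log verifies .spec.replicas of 20 and 13. The Go sketch below reproduces that arithmetic under two simplifying assumptions, that the surge budget is target+maxSurge and that the integer-rounding leftover goes to the newest ReplicaSet; the real controller tracks this through the deployment.kubernetes.io/max-replicas annotation and handles scale-down as well.

package main

import "fmt"

// proportional spreads a scale-up across existing ReplicaSets in proportion
// to their current sizes, handing the integer-rounding leftover to the last
// one. A simplified sketch of what the deployment controller does when a
// Deployment is resized mid-rollout, not the controller's exact code.
func proportional(current []int, targetReplicas, maxSurge int) []int {
	total := 0
	for _, c := range current {
		total += c
	}
	allowed := targetReplicas + maxSurge // surge budget for the whole Deployment
	add := allowed - total               // extra replicas to hand out
	out := make([]int, len(current))
	given := 0
	for i, c := range current {
		share := add * c / total // each ReplicaSet's proportional share
		out[i] = c + share
		given += share
	}
	out[len(out)-1] += add - given // leftover from integer division
	return out
}

func main() {
	// From the log: old RS at 8, new (nginx:404) RS at 5, Deployment scaled
	// 10 -> 30 with maxSurge=3. Prints [20 13], matching the verification steps.
	fmt.Println(proportional([]int{8, 5}, 30, 3))
}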
Mar 15 21:12:28.816: INFO: Waiting for observed generation 2 Mar 15 21:12:31.193: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 15 21:12:31.198: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Mar 15 21:12:31.239: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Mar 15 21:12:31.503: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 15 21:12:31.503: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 15 21:12:31.505: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Mar 15 21:12:31.508: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Mar 15 21:12:31.508: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Mar 15 21:12:31.514: INFO: Updating deployment nginx-deployment Mar 15 21:12:31.514: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Mar 15 21:12:31.649: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 15 21:12:31.661: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 15 21:12:32.129: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rvtj8/deployments/nginx-deployment,UID:a5f8069b-6701-11ea-99e8-0242ac110002,ResourceVersion:26432,Generation:3,CreationTimestamp:2020-03-15 21:12:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[{Progressing True 2020-03-15 21:12:31 +0000 UTC 2020-03-15 21:12:16 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-03-15 21:12:31 +0000 UTC 2020-03-15 21:12:31 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Mar 15 21:12:32.320: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rvtj8/replicasets/nginx-deployment-5c98f8fb5,UID:ad4a520f-6701-11ea-99e8-0242ac110002,ResourceVersion:26476,Generation:3,CreationTimestamp:2020-03-15 21:12:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment a5f8069b-6701-11ea-99e8-0242ac110002 0xc0019a9b27 0xc0019a9b28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 15 21:12:32.320: INFO: All old ReplicaSets of Deployment "nginx-deployment": Mar 15 21:12:32.320: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rvtj8/replicasets/nginx-deployment-85ddf47c5d,UID:a6113152-6701-11ea-99e8-0242ac110002,ResourceVersion:26475,Generation:3,CreationTimestamp:2020-03-15 21:12:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment a5f8069b-6701-11ea-99e8-0242ac110002 0xc0019a9c07 0xc0019a9c08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Mar 15 21:12:32.374: INFO: Pod "nginx-deployment-5c98f8fb5-26lvr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-26lvr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-5c98f8fb5-26lvr,UID:aefaaa91-6701-11ea-99e8-0242ac110002,ResourceVersion:26436,Generation:0,CreationTimestamp:2020-03-15 21:12:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ad4a520f-6701-11ea-99e8-0242ac110002 0xc000c78ed7 0xc000c78ed8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000c79030} {node.kubernetes.io/unreachable Exists NoExecute 0xc000c79050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:31 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.374: INFO: Pod "nginx-deployment-5c98f8fb5-6wrqt" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-6wrqt,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-5c98f8fb5-6wrqt,UID:af216e6b-6701-11ea-99e8-0242ac110002,ResourceVersion:26472,Generation:0,CreationTimestamp:2020-03-15 21:12:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ad4a520f-6701-11ea-99e8-0242ac110002 0xc000c791a0 0xc000c791a1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000c792a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000c792c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:31 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.374: INFO: Pod "nginx-deployment-5c98f8fb5-7g22n" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7g22n,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-5c98f8fb5-7g22n,UID:af256307-6701-11ea-99e8-0242ac110002,ResourceVersion:26478,Generation:0,CreationTimestamp:2020-03-15 21:12:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ad4a520f-6701-11ea-99e8-0242ac110002 0xc000c79490 0xc000c79491}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] 
map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000c79510} {node.kubernetes.io/unreachable Exists NoExecute 0xc000c79530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:32 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.374: INFO: Pod "nginx-deployment-5c98f8fb5-9knkr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9knkr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-5c98f8fb5-9knkr,UID:ad75fbcd-6701-11ea-99e8-0242ac110002,ResourceVersion:26393,Generation:0,CreationTimestamp:2020-03-15 21:12:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ad4a520f-6701-11ea-99e8-0242ac110002 0xc000c795a0 0xc000c795a1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000c799f0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc000c79a10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-03-15 21:12:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.374: INFO: Pod "nginx-deployment-5c98f8fb5-b2v4j" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-b2v4j,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-5c98f8fb5-b2v4j,UID:ad70d9cd-6701-11ea-99e8-0242ac110002,ResourceVersion:26386,Generation:0,CreationTimestamp:2020-03-15 21:12:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ad4a520f-6701-11ea-99e8-0242ac110002 0xc000c79ad0 0xc000c79ad1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000c79cf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000c79d10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:29 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-03-15 21:12:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.374: INFO: Pod "nginx-deployment-5c98f8fb5-csrxz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-csrxz,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-5c98f8fb5-csrxz,UID:ad75f277-6701-11ea-99e8-0242ac110002,ResourceVersion:26392,Generation:0,CreationTimestamp:2020-03-15 21:12:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ad4a520f-6701-11ea-99e8-0242ac110002 0xc000bf8810 0xc000bf8811}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000bf93e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000bf9490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-15 21:12:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.375: INFO: Pod "nginx-deployment-5c98f8fb5-ctxqx" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ctxqx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-5c98f8fb5-ctxqx,UID:af213b6c-6701-11ea-99e8-0242ac110002,ResourceVersion:26467,Generation:0,CreationTimestamp:2020-03-15 21:12:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ad4a520f-6701-11ea-99e8-0242ac110002 0xc000bf9780 0xc000bf9781}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000bf9880} {node.kubernetes.io/unreachable Exists NoExecute 0xc000bf9910}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:31 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.375: INFO: Pod "nginx-deployment-5c98f8fb5-dnfk5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dnfk5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-5c98f8fb5-dnfk5,UID:af216524-6701-11ea-99e8-0242ac110002,ResourceVersion:26473,Generation:0,CreationTimestamp:2020-03-15 21:12:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ad4a520f-6701-11ea-99e8-0242ac110002 0xc000bf9a10 0xc000bf9a11}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] 
map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000bf9ae0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000bf9b00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:31 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.375: INFO: Pod "nginx-deployment-5c98f8fb5-h7sww" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-h7sww,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-5c98f8fb5-h7sww,UID:aefc6d5f-6701-11ea-99e8-0242ac110002,ResourceVersion:26445,Generation:0,CreationTimestamp:2020-03-15 21:12:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ad4a520f-6701-11ea-99e8-0242ac110002 0xc000bf9b70 0xc000bf9b71}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b22050} {node.kubernetes.io/unreachable Exists NoExecute 
0xc000b22070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:31 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.375: INFO: Pod "nginx-deployment-5c98f8fb5-mtmlr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mtmlr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-5c98f8fb5-mtmlr,UID:ae2cd705-6701-11ea-99e8-0242ac110002,ResourceVersion:26413,Generation:0,CreationTimestamp:2020-03-15 21:12:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ad4a520f-6701-11ea-99e8-0242ac110002 0xc000b220e0 0xc000b220e1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b22570} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b22590}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:30 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-15 21:12:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.375: INFO: Pod "nginx-deployment-5c98f8fb5-wt6mz" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wt6mz,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-5c98f8fb5-wt6mz,UID:ae4fb9fb-6701-11ea-99e8-0242ac110002,ResourceVersion:26416,Generation:0,CreationTimestamp:2020-03-15 21:12:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ad4a520f-6701-11ea-99e8-0242ac110002 0xc000b22700 0xc000b22701}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b22790} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b227b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:30 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-03-15 21:12:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.375: INFO: Pod "nginx-deployment-5c98f8fb5-zrkq5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zrkq5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-5c98f8fb5-zrkq5,UID:aefc7236-6701-11ea-99e8-0242ac110002,ResourceVersion:26446,Generation:0,CreationTimestamp:2020-03-15 21:12:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ad4a520f-6701-11ea-99e8-0242ac110002 0xc000b228e0 0xc000b228e1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b22960} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b22980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:31 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.375: INFO: Pod "nginx-deployment-5c98f8fb5-zvjwb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zvjwb,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-5c98f8fb5-zvjwb,UID:af21778e-6701-11ea-99e8-0242ac110002,ResourceVersion:26470,Generation:0,CreationTimestamp:2020-03-15 21:12:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ad4a520f-6701-11ea-99e8-0242ac110002 0xc000b22a70 0xc000b22a71}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b22af0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b22b10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:31 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.375: INFO: Pod "nginx-deployment-85ddf47c5d-2lk6h" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2lk6h,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-85ddf47c5d-2lk6h,UID:a62e3923-6701-11ea-99e8-0242ac110002,ResourceVersion:26328,Generation:0,CreationTimestamp:2020-03-15 21:12:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a6113152-6701-11ea-99e8-0242ac110002 0xc000b22b80 0xc000b22b81}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b22d30} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b22d50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:17 +0000 UTC } {Ready True 
0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:17 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.92,StartTime:2020-03-15 21:12:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-15 21:12:25 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://1bb73389d9b8173879ce5ca8d52b00202a71a655e1d0f2c1bfa41db30dbb4cf3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.375: INFO: Pod "nginx-deployment-85ddf47c5d-2q77x" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2q77x,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-85ddf47c5d-2q77x,UID:aefc857b-6701-11ea-99e8-0242ac110002,ResourceVersion:26447,Generation:0,CreationTimestamp:2020-03-15 21:12:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a6113152-6701-11ea-99e8-0242ac110002 0xc000b22e10 0xc000b22e11}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b22f90} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b22fe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:31 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.376: INFO: Pod "nginx-deployment-85ddf47c5d-4f48w" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4f48w,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-85ddf47c5d-4f48w,UID:a6178a48-6701-11ea-99e8-0242ac110002,ResourceVersion:26312,Generation:0,CreationTimestamp:2020-03-15 21:12:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a6113152-6701-11ea-99e8-0242ac110002 0xc000b23080 0xc000b23081}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b23250} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b23270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:16 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:16 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.84,StartTime:2020-03-15 21:12:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-15 21:12:22 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://6b4635c314f21fad20f1e06bea950a829741eaea996530071d2d2748a060c717}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.376: INFO: Pod "nginx-deployment-85ddf47c5d-5mdqt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5mdqt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-85ddf47c5d-5mdqt,UID:aefaa2d2-6701-11ea-99e8-0242ac110002,ResourceVersion:26481,Generation:0,CreationTimestamp:2020-03-15 
21:12:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a6113152-6701-11ea-99e8-0242ac110002 0xc000b23350 0xc000b23351}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b233c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b233e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:31 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-03-15 21:12:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.376: INFO: Pod "nginx-deployment-85ddf47c5d-5t46k" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5t46k,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-85ddf47c5d-5t46k,UID:aefc9395-6701-11ea-99e8-0242ac110002,ResourceVersion:26459,Generation:0,CreationTimestamp:2020-03-15 21:12:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a6113152-6701-11ea-99e8-0242ac110002 0xc000b234c0 0xc000b234c1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b23530} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b23550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:31 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.376: INFO: Pod "nginx-deployment-85ddf47c5d-6dbls" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6dbls,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-85ddf47c5d-6dbls,UID:a64599cb-6701-11ea-99e8-0242ac110002,ResourceVersion:26350,Generation:0,CreationTimestamp:2020-03-15 21:12:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a6113152-6701-11ea-99e8-0242ac110002 0xc000b235c0 0xc000b235c1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b23630} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b23650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:17 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.93,StartTime:2020-03-15 21:12:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-15 21:12:26 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b294d3ee0752e41848e665b4bada0d0f1ce336823fc5fa1936ee5db4de4065d8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.376: INFO: Pod "nginx-deployment-85ddf47c5d-8fqj5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8fqj5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-85ddf47c5d-8fqj5,UID:af217802-6701-11ea-99e8-0242ac110002,ResourceVersion:26471,Generation:0,CreationTimestamp:2020-03-15 21:12:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a6113152-6701-11ea-99e8-0242ac110002 0xc000b23710 0xc000b23711}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b23780} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b237a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:31 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.376: INFO: Pod "nginx-deployment-85ddf47c5d-8g56v" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8g56v,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-85ddf47c5d-8g56v,UID:af218924-6701-11ea-99e8-0242ac110002,ResourceVersion:26468,Generation:0,CreationTimestamp:2020-03-15 21:12:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a6113152-6701-11ea-99e8-0242ac110002 0xc000b23810 0xc000b23811}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b23880} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b238a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:31 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.376: INFO: Pod "nginx-deployment-85ddf47c5d-956cw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-956cw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-85ddf47c5d-956cw,UID:aeeb880e-6701-11ea-99e8-0242ac110002,ResourceVersion:26483,Generation:0,CreationTimestamp:2020-03-15 21:12:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a6113152-6701-11ea-99e8-0242ac110002 0xc000b23910 0xc000b23911}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b23980} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b239a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:31 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-15 21:12:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.377: INFO: Pod "nginx-deployment-85ddf47c5d-9frfx" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9frfx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-85ddf47c5d-9frfx,UID:a626f5d1-6701-11ea-99e8-0242ac110002,ResourceVersion:26317,Generation:0,CreationTimestamp:2020-03-15 21:12:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a6113152-6701-11ea-99e8-0242ac110002 0xc000b23a50 0xc000b23a51}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b23ac0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b23ae0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:16 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.85,StartTime:2020-03-15 21:12:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-15 21:12:23 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://171fc9d75b68ddc2d2d41d4d1067d37fd77a20bbcfe05b5d76157850837aea3b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.377: INFO: Pod "nginx-deployment-85ddf47c5d-b5dz6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-b5dz6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-85ddf47c5d-b5dz6,UID:aefc8832-6701-11ea-99e8-0242ac110002,ResourceVersion:26448,Generation:0,CreationTimestamp:2020-03-15 
21:12:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a6113152-6701-11ea-99e8-0242ac110002 0xc000b23ba0 0xc000b23ba1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b23c10} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b23c30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:31 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.377: INFO: Pod "nginx-deployment-85ddf47c5d-f586z" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-f586z,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-85ddf47c5d-f586z,UID:aefaaf56-6701-11ea-99e8-0242ac110002,ResourceVersion:26442,Generation:0,CreationTimestamp:2020-03-15 21:12:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a6113152-6701-11ea-99e8-0242ac110002 0xc000b23ca0 0xc000b23ca1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b23d10} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b23d30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:31 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.377: INFO: Pod "nginx-deployment-85ddf47c5d-k2cp5" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-k2cp5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-85ddf47c5d-k2cp5,UID:a626f0d5-6701-11ea-99e8-0242ac110002,ResourceVersion:26321,Generation:0,CreationTimestamp:2020-03-15 21:12:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a6113152-6701-11ea-99e8-0242ac110002 0xc000b23da0 0xc000b23da1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b23e10} {node.kubernetes.io/unreachable Exists NoExecute 
0xc000b23e30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:16 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.90,StartTime:2020-03-15 21:12:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-15 21:12:23 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://1337c9b4754e854dd85394c86b6d4e6dff8572f45bdd9fccc9cfab3cfd5852c4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.377: INFO: Pod "nginx-deployment-85ddf47c5d-l5twc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-l5twc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-85ddf47c5d-l5twc,UID:af217e19-6701-11ea-99e8-0242ac110002,ResourceVersion:26464,Generation:0,CreationTimestamp:2020-03-15 21:12:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a6113152-6701-11ea-99e8-0242ac110002 0xc000b23ef0 0xc000b23ef1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000a8e010} {node.kubernetes.io/unreachable Exists NoExecute 0xc000a8e090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:31 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.377: INFO: Pod "nginx-deployment-85ddf47c5d-llvnk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-llvnk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-85ddf47c5d-llvnk,UID:a62e35f2-6701-11ea-99e8-0242ac110002,ResourceVersion:26343,Generation:0,CreationTimestamp:2020-03-15 21:12:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a6113152-6701-11ea-99e8-0242ac110002 0xc000a8e400 0xc000a8e401}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000a8ebc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000a8ed40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:17 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.86,StartTime:2020-03-15 21:12:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-15 21:12:25 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ce96a514fec59c043d2719257460dd0454f92d6e4e48945c29ab0fca67349cd2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.378: INFO: Pod "nginx-deployment-85ddf47c5d-nfg2n" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nfg2n,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-85ddf47c5d-nfg2n,UID:aefc8d63-6701-11ea-99e8-0242ac110002,ResourceVersion:26452,Generation:0,CreationTimestamp:2020-03-15 21:12:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a6113152-6701-11ea-99e8-0242ac110002 0xc000a8f020 0xc000a8f021}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000a8f100} {node.kubernetes.io/unreachable Exists NoExecute 0xc000a8f120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:31 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.378: INFO: Pod "nginx-deployment-85ddf47c5d-pc48q" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pc48q,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-85ddf47c5d-pc48q,UID:af217ae2-6701-11ea-99e8-0242ac110002,ResourceVersion:26469,Generation:0,CreationTimestamp:2020-03-15 21:12:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a6113152-6701-11ea-99e8-0242ac110002 0xc000a8f300 0xc000a8f301}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000a8f590} {node.kubernetes.io/unreachable Exists NoExecute 0xc000a8f800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:31 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.378: INFO: Pod "nginx-deployment-85ddf47c5d-qt6l8" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qt6l8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-85ddf47c5d-qt6l8,UID:a62e42d2-6701-11ea-99e8-0242ac110002,ResourceVersion:26346,Generation:0,CreationTimestamp:2020-03-15 21:12:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a6113152-6701-11ea-99e8-0242ac110002 0xc000a8fcc0 0xc000a8fcc1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000a8fe80} {node.kubernetes.io/unreachable Exists NoExecute 0xc000a8ff50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:17 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.91,StartTime:2020-03-15 21:12:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-15 21:12:26 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://14782a46e0fb13589f046a810ba597b659e46b1368fb5807a90b4bd7ac8bf0c5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.378: INFO: Pod "nginx-deployment-85ddf47c5d-w2ktp" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-w2ktp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-85ddf47c5d-w2ktp,UID:a62e361e-6701-11ea-99e8-0242ac110002,ResourceVersion:26347,Generation:0,CreationTimestamp:2020-03-15 21:12:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a6113152-6701-11ea-99e8-0242ac110002 0xc001034a00 0xc001034a01}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001034ae0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001034b00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:17 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.88,StartTime:2020-03-15 21:12:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-15 21:12:26 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://70fd6f6a648e9a2e2ea31240f5404b355492ca6e94a1fede9f4c51cdf1d605d5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 21:12:32.378: INFO: Pod "nginx-deployment-85ddf47c5d-wl6nb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wl6nb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rvtj8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rvtj8/pods/nginx-deployment-85ddf47c5d-wl6nb,UID:af215d15-6701-11ea-99e8-0242ac110002,ResourceVersion:26466,Generation:0,CreationTimestamp:2020-03-15 21:12:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a6113152-6701-11ea-99e8-0242ac110002 0xc001034d50 0xc001034d51}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rkgwv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgwv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rkgwv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001034e70} {node.kubernetes.io/unreachable Exists NoExecute 0xc001034e90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:12:31 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:12:32.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-rvtj8" for this suite. Mar 15 21:12:58.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:12:58.801: INFO: namespace: e2e-tests-deployment-rvtj8, resource: bindings, ignored listing per whitelist Mar 15 21:12:58.931: INFO: namespace e2e-tests-deployment-rvtj8 deletion completed in 26.379347779s • [SLOW TEST:42.503 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:12:58.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-bf8bb581-6701-11ea-9ccf-0242ac110012 STEP: Creating a pod to test consume secrets Mar 15 21:12:59.580: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bf8e50cd-6701-11ea-9ccf-0242ac110012" in namespace "e2e-tests-projected-8wkbp" to be "success or failure" Mar 15 21:12:59.672: INFO: Pod "pod-projected-secrets-bf8e50cd-6701-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. 
Elapsed: 92.243133ms Mar 15 21:13:01.676: INFO: Pod "pod-projected-secrets-bf8e50cd-6701-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096444275s Mar 15 21:13:03.680: INFO: Pod "pod-projected-secrets-bf8e50cd-6701-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10000637s Mar 15 21:13:05.745: INFO: Pod "pod-projected-secrets-bf8e50cd-6701-11ea-9ccf-0242ac110012": Phase="Running", Reason="", readiness=true. Elapsed: 6.165314125s Mar 15 21:13:07.749: INFO: Pod "pod-projected-secrets-bf8e50cd-6701-11ea-9ccf-0242ac110012": Phase="Running", Reason="", readiness=true. Elapsed: 8.169011411s Mar 15 21:13:09.752: INFO: Pod "pod-projected-secrets-bf8e50cd-6701-11ea-9ccf-0242ac110012": Phase="Running", Reason="", readiness=true. Elapsed: 10.172108199s Mar 15 21:13:11.883: INFO: Pod "pod-projected-secrets-bf8e50cd-6701-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.303402307s STEP: Saw pod success Mar 15 21:13:11.883: INFO: Pod "pod-projected-secrets-bf8e50cd-6701-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:13:11.887: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-bf8e50cd-6701-11ea-9ccf-0242ac110012 container projected-secret-volume-test: STEP: delete the pod Mar 15 21:13:12.220: INFO: Waiting for pod pod-projected-secrets-bf8e50cd-6701-11ea-9ccf-0242ac110012 to disappear Mar 15 21:13:12.234: INFO: Pod pod-projected-secrets-bf8e50cd-6701-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:13:12.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8wkbp" for this suite. 
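The steps above create a projected secret and a pod that consumes it as a non-root user with an explicit defaultMode and fsGroup. A minimal Go sketch of that pod shape follows, built on the k8s.io/api and k8s.io/apimachinery modules this suite uses; the secret name, UID/GID values, mode, mount path, image and command are illustrative assumptions, not values taken from the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }
func int32Ptr(i int32) *int32 { return &i }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			// Run as a non-root UID and give the volume a group via fsGroup.
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1000),
				FSGroup:   int64Ptr(1001),
			},
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						// Non-default file mode (octal 0440); assumed value.
						DefaultMode: int32Ptr(0440),
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-secret-test", // placeholder secret name
								},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox", // assumed test image
				Command: []string{"sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}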
Mar 15 21:13:18.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:13:18.307: INFO: namespace: e2e-tests-projected-8wkbp, resource: bindings, ignored listing per whitelist Mar 15 21:13:18.325: INFO: namespace e2e-tests-projected-8wkbp deletion completed in 6.086277914s • [SLOW TEST:19.393 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:13:18.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-r8hj9 Mar 15 21:13:22.706: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-r8hj9 STEP: checking the pod's current state and verifying that restartCount is present Mar 15 21:13:22.708: INFO: Initial restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:17:23.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-r8hj9" for this suite. 
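This probe test starts a pod whose container writes /tmp/health and whose exec liveness probe keeps succeeding, so the restart count is expected to stay at 0. A hedged sketch of a pod of that shape follows; the image, command and probe timings are assumptions, and the Probe/Handler field layout matches the 1.13-era k8s.io/api this suite reports.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "busybox", // assumed image
				// Write the health file once, then stay alive long enough for the
				// probe to run repeatedly without ever failing.
				Command: []string{"/bin/sh", "-c", "echo ok > /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15, // assumed timings
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}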
Mar 15 21:17:29.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:17:29.743: INFO: namespace: e2e-tests-container-probe-r8hj9, resource: bindings, ignored listing per whitelist Mar 15 21:17:29.797: INFO: namespace e2e-tests-container-probe-r8hj9 deletion completed in 6.099369408s • [SLOW TEST:251.472 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:17:29.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Mar 15 21:17:34.452: INFO: Successfully updated pod "labelsupdate60c107d2-6702-11ea-9ccf-0242ac110012" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:17:36.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-gfwvx" for this suite. 
Mar 15 21:17:58.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:17:58.696: INFO: namespace: e2e-tests-downward-api-gfwvx, resource: bindings, ignored listing per whitelist Mar 15 21:17:58.715: INFO: namespace e2e-tests-downward-api-gfwvx deletion completed in 22.185397562s • [SLOW TEST:28.916 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:17:58.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 15 21:17:58.833: INFO: Waiting up to 5m0s for pod "pod-71fac768-6702-11ea-9ccf-0242ac110012" in namespace "e2e-tests-emptydir-dc9z4" to be "success or failure" Mar 15 21:17:58.838: INFO: Pod "pod-71fac768-6702-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 5.122041ms Mar 15 21:18:00.842: INFO: Pod "pod-71fac768-6702-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009025554s Mar 15 21:18:02.846: INFO: Pod "pod-71fac768-6702-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013213789s Mar 15 21:18:04.851: INFO: Pod "pod-71fac768-6702-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017589277s STEP: Saw pod success Mar 15 21:18:04.851: INFO: Pod "pod-71fac768-6702-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:18:04.854: INFO: Trying to get logs from node hunter-worker pod pod-71fac768-6702-11ea-9ccf-0242ac110012 container test-container: STEP: delete the pod Mar 15 21:18:04.912: INFO: Waiting for pod pod-71fac768-6702-11ea-9ccf-0242ac110012 to disappear Mar 15 21:18:04.971: INFO: Pod pod-71fac768-6702-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:18:04.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-dc9z4" for this suite. 
Mar 15 21:18:11.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:18:11.054: INFO: namespace: e2e-tests-emptydir-dc9z4, resource: bindings, ignored listing per whitelist Mar 15 21:18:11.130: INFO: namespace e2e-tests-emptydir-dc9z4 deletion completed in 6.155623249s • [SLOW TEST:12.415 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:18:11.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Mar 15 21:18:18.654: INFO: 0 pods remaining Mar 15 21:18:18.654: INFO: 0 pods has nil DeletionTimestamp Mar 15 21:18:18.654: INFO: Mar 15 21:18:18.944: INFO: 0 pods remaining Mar 15 21:18:18.944: INFO: 0 pods has nil DeletionTimestamp Mar 15 21:18:18.944: INFO: STEP: Gathering metrics W0315 21:18:20.302116 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 15 21:18:20.302: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:18:20.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-wc7nt" for this suite. 
Mar 15 21:18:26.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:18:26.454: INFO: namespace: e2e-tests-gc-wc7nt, resource: bindings, ignored listing per whitelist Mar 15 21:18:26.478: INFO: namespace e2e-tests-gc-wc7nt deletion completed in 6.172033521s • [SLOW TEST:15.348 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:18:26.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-8286711e-6702-11ea-9ccf-0242ac110012 STEP: Creating a pod to test consume configMaps Mar 15 21:18:26.585: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-828817e3-6702-11ea-9ccf-0242ac110012" in namespace "e2e-tests-projected-bpfjt" to be "success or failure" Mar 15 21:18:26.589: INFO: Pod "pod-projected-configmaps-828817e3-6702-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 3.625056ms Mar 15 21:18:28.592: INFO: Pod "pod-projected-configmaps-828817e3-6702-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007187161s Mar 15 21:18:30.596: INFO: Pod "pod-projected-configmaps-828817e3-6702-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01114076s STEP: Saw pod success Mar 15 21:18:30.596: INFO: Pod "pod-projected-configmaps-828817e3-6702-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:18:30.599: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-828817e3-6702-11ea-9ccf-0242ac110012 container projected-configmap-volume-test: STEP: delete the pod Mar 15 21:18:30.628: INFO: Waiting for pod pod-projected-configmaps-828817e3-6702-11ea-9ccf-0242ac110012 to disappear Mar 15 21:18:30.637: INFO: Pod pod-projected-configmaps-828817e3-6702-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:18:30.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bpfjt" for this suite. 
Mar 15 21:18:36.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:18:36.857: INFO: namespace: e2e-tests-projected-bpfjt, resource: bindings, ignored listing per whitelist Mar 15 21:18:36.880: INFO: namespace e2e-tests-projected-bpfjt deletion completed in 6.230304976s • [SLOW TEST:10.401 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:18:36.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 15 21:18:41.841: INFO: Successfully updated pod "pod-update-88e42f0b-6702-11ea-9ccf-0242ac110012" STEP: verifying the updated pod is in kubernetes Mar 15 21:18:41.857: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:18:41.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-gw9bm" for this suite. 
Mar 15 21:19:03.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:19:03.912: INFO: namespace: e2e-tests-pods-gw9bm, resource: bindings, ignored listing per whitelist Mar 15 21:19:03.989: INFO: namespace e2e-tests-pods-gw9bm deletion completed in 22.128528188s • [SLOW TEST:27.109 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:19:03.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 15 21:19:04.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-z66hn' Mar 15 21:19:07.447: INFO: stderr: "" Mar 15 21:19:07.447: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Mar 15 21:19:12.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-z66hn -o json' Mar 15 21:19:12.610: INFO: stderr: "" Mar 15 21:19:12.610: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-15T21:19:07Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-z66hn\",\n \"resourceVersion\": \"27809\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-z66hn/pods/e2e-test-nginx-pod\",\n \"uid\": \"9ae1fa94-6702-11ea-99e8-0242ac110002\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-jcvvn\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n 
\"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-jcvvn\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-jcvvn\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-15T21:19:07Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-15T21:19:10Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-15T21:19:10Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-15T21:19:07Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://6c4b6e4fa9126d33166e06dabf5657f5265b967b0bd08dc93688d3833dfec57d\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-15T21:19:09Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.4\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.114\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-15T21:19:07Z\"\n }\n}\n" STEP: replace the image in the pod Mar 15 21:19:12.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-z66hn' Mar 15 21:19:12.995: INFO: stderr: "" Mar 15 21:19:12.995: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Mar 15 21:19:13.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-z66hn' Mar 15 21:19:21.738: INFO: stderr: "" Mar 15 21:19:21.738: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:19:21.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-z66hn" for this suite. 
Mar 15 21:19:27.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:19:27.862: INFO: namespace: e2e-tests-kubectl-z66hn, resource: bindings, ignored listing per whitelist Mar 15 21:19:27.867: INFO: namespace e2e-tests-kubectl-z66hn deletion completed in 6.117023864s • [SLOW TEST:23.877 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:19:27.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Mar 15 21:19:28.015: INFO: Waiting up to 5m0s for pod "downward-api-a72380bc-6702-11ea-9ccf-0242ac110012" in namespace "e2e-tests-downward-api-jdmkr" to be "success or failure" Mar 15 21:19:28.018: INFO: Pod "downward-api-a72380bc-6702-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 3.859641ms Mar 15 21:19:30.022: INFO: Pod "downward-api-a72380bc-6702-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007539681s Mar 15 21:19:32.026: INFO: Pod "downward-api-a72380bc-6702-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011849161s STEP: Saw pod success Mar 15 21:19:32.027: INFO: Pod "downward-api-a72380bc-6702-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:19:32.030: INFO: Trying to get logs from node hunter-worker pod downward-api-a72380bc-6702-11ea-9ccf-0242ac110012 container dapi-container: STEP: delete the pod Mar 15 21:19:32.063: INFO: Waiting for pod downward-api-a72380bc-6702-11ea-9ccf-0242ac110012 to disappear Mar 15 21:19:32.075: INFO: Pod downward-api-a72380bc-6702-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:19:32.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-jdmkr" for this suite. 
Mar 15 21:19:38.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:19:38.134: INFO: namespace: e2e-tests-downward-api-jdmkr, resource: bindings, ignored listing per whitelist Mar 15 21:19:38.168: INFO: namespace e2e-tests-downward-api-jdmkr deletion completed in 6.089866321s • [SLOW TEST:10.301 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:19:38.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Mar 15 21:19:38.273: INFO: PodSpec: initContainers in spec.initContainers Mar 15 21:20:26.808: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-ad441a3f-6702-11ea-9ccf-0242ac110012", GenerateName:"", Namespace:"e2e-tests-init-container-7r8qn", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-7r8qn/pods/pod-init-ad441a3f-6702-11ea-9ccf-0242ac110012", UID:"ad48713a-6702-11ea-99e8-0242ac110002", ResourceVersion:"28034", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63719903978, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"273860195"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-glwkm", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001c2d780), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), 
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-glwkm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-glwkm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-glwkm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001e62bd8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0021cb7a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001e62c60)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001e62c80)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001e62c88), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001e62c8c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719903978, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719903978, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719903978, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719903978, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.112", StartTime:(*v1.Time)(0xc000ddcae0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0018b3030)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0018b30a0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://f82bf4c4fe8906f76ca5acd2fdee6d75cbcc5eea93ca7983ed4d170f2014a13a"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000ddcc20), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000ddcc00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:20:26.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-7r8qn" for this suite. Mar 15 21:20:48.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:20:48.946: INFO: namespace: e2e-tests-init-container-7r8qn, resource: bindings, ignored listing per whitelist Mar 15 21:20:48.968: INFO: namespace e2e-tests-init-container-7r8qn deletion completed in 22.12679141s • [SLOW TEST:70.800 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:20:48.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all Mar 15 21:20:49.070: INFO: Waiting up to 5m0s for pod "client-containers-d7743315-6702-11ea-9ccf-0242ac110012" in namespace "e2e-tests-containers-c29kb" to be "success or failure" Mar 15 21:20:49.073: INFO: Pod "client-containers-d7743315-6702-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 3.774623ms Mar 15 21:20:51.086: INFO: Pod "client-containers-d7743315-6702-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016127616s Mar 15 21:20:53.090: INFO: Pod "client-containers-d7743315-6702-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.020451404s STEP: Saw pod success Mar 15 21:20:53.090: INFO: Pod "client-containers-d7743315-6702-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:20:53.093: INFO: Trying to get logs from node hunter-worker pod client-containers-d7743315-6702-11ea-9ccf-0242ac110012 container test-container: STEP: delete the pod Mar 15 21:20:53.160: INFO: Waiting for pod client-containers-d7743315-6702-11ea-9ccf-0242ac110012 to disappear Mar 15 21:20:53.173: INFO: Pod client-containers-d7743315-6702-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:20:53.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-c29kb" for this suite. Mar 15 21:20:59.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:20:59.230: INFO: namespace: e2e-tests-containers-c29kb, resource: bindings, ignored listing per whitelist Mar 15 21:20:59.268: INFO: namespace e2e-tests-containers-c29kb deletion completed in 6.092385468s • [SLOW TEST:10.300 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:20:59.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 15 21:20:59.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-mhd7b' Mar 15 21:20:59.454: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 15 21:20:59.454: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Mar 15 21:20:59.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-mhd7b' Mar 15 21:20:59.590: INFO: stderr: "" Mar 15 21:20:59.590: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:20:59.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-mhd7b" for this suite. Mar 15 21:21:21.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:21:21.667: INFO: namespace: e2e-tests-kubectl-mhd7b, resource: bindings, ignored listing per whitelist Mar 15 21:21:21.684: INFO: namespace e2e-tests-kubectl-mhd7b deletion completed in 22.090386906s • [SLOW TEST:22.415 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:21:21.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-eaf9f11f-6702-11ea-9ccf-0242ac110012 STEP: Creating a pod to test consume secrets Mar 15 21:21:21.834: INFO: Waiting up to 5m0s for pod "pod-secrets-eafa9da2-6702-11ea-9ccf-0242ac110012" in namespace "e2e-tests-secrets-bn9nh" to be "success or failure" Mar 15 21:21:21.847: INFO: Pod "pod-secrets-eafa9da2-6702-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 12.748576ms Mar 15 21:21:23.851: INFO: Pod "pod-secrets-eafa9da2-6702-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016559407s Mar 15 21:21:25.855: INFO: Pod "pod-secrets-eafa9da2-6702-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.020725838s STEP: Saw pod success Mar 15 21:21:25.855: INFO: Pod "pod-secrets-eafa9da2-6702-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:21:25.858: INFO: Trying to get logs from node hunter-worker pod pod-secrets-eafa9da2-6702-11ea-9ccf-0242ac110012 container secret-volume-test: STEP: delete the pod Mar 15 21:21:25.921: INFO: Waiting for pod pod-secrets-eafa9da2-6702-11ea-9ccf-0242ac110012 to disappear Mar 15 21:21:25.924: INFO: Pod pod-secrets-eafa9da2-6702-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:21:25.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-bn9nh" for this suite. Mar 15 21:21:31.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:21:32.008: INFO: namespace: e2e-tests-secrets-bn9nh, resource: bindings, ignored listing per whitelist Mar 15 21:21:32.031: INFO: namespace e2e-tests-secrets-bn9nh deletion completed in 6.102259674s • [SLOW TEST:10.346 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:21:32.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-f123b730-6702-11ea-9ccf-0242ac110012 STEP: Creating a pod to test consume secrets Mar 15 21:21:32.164: INFO: Waiting up to 5m0s for pod "pod-secrets-f125329a-6702-11ea-9ccf-0242ac110012" in namespace "e2e-tests-secrets-fnlzt" to be "success or failure" Mar 15 21:21:32.180: INFO: Pod "pod-secrets-f125329a-6702-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 16.428176ms Mar 15 21:21:34.184: INFO: Pod "pod-secrets-f125329a-6702-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020437047s Mar 15 21:21:36.189: INFO: Pod "pod-secrets-f125329a-6702-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025474371s STEP: Saw pod success Mar 15 21:21:36.189: INFO: Pod "pod-secrets-f125329a-6702-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:21:36.192: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-f125329a-6702-11ea-9ccf-0242ac110012 container secret-volume-test: STEP: delete the pod Mar 15 21:21:36.241: INFO: Waiting for pod pod-secrets-f125329a-6702-11ea-9ccf-0242ac110012 to disappear Mar 15 21:21:36.264: INFO: Pod pod-secrets-f125329a-6702-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:21:36.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-fnlzt" for this suite. Mar 15 21:21:42.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:21:42.305: INFO: namespace: e2e-tests-secrets-fnlzt, resource: bindings, ignored listing per whitelist Mar 15 21:21:42.362: INFO: namespace e2e-tests-secrets-fnlzt deletion completed in 6.094075172s • [SLOW TEST:10.331 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:21:42.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-cfwxc STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-cfwxc to expose endpoints map[] Mar 15 21:21:42.510: INFO: Get endpoints failed (11.183749ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Mar 15 21:21:43.514: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-cfwxc exposes endpoints map[] (1.014929216s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-cfwxc STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-cfwxc to expose endpoints map[pod1:[80]] Mar 15 21:21:46.604: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-cfwxc exposes endpoints map[pod1:[80]] (3.084134155s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-cfwxc STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-cfwxc to expose endpoints map[pod1:[80] pod2:[80]] Mar 15 21:21:49.710: INFO: successfully validated that service endpoint-test2 in namespace 
e2e-tests-services-cfwxc exposes endpoints map[pod1:[80] pod2:[80]] (3.101634082s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-cfwxc STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-cfwxc to expose endpoints map[pod2:[80]] Mar 15 21:21:49.749: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-cfwxc exposes endpoints map[pod2:[80]] (35.508739ms elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-cfwxc STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-cfwxc to expose endpoints map[] Mar 15 21:21:50.767: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-cfwxc exposes endpoints map[] (1.014171361s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:21:50.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-cfwxc" for this suite. Mar 15 21:22:12.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:22:12.884: INFO: namespace: e2e-tests-services-cfwxc, resource: bindings, ignored listing per whitelist Mar 15 21:22:12.894: INFO: namespace e2e-tests-services-cfwxc deletion completed in 22.081385611s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:30.532 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:22:12.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-s8g7p A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-s8g7p;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-s8g7p A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-s8g7p;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-s8g7p.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-s8g7p.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-s8g7p.svc A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@dns-test-service.e2e-tests-dns-s8g7p.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-s8g7p.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-s8g7p.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-s8g7p.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-s8g7p.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-s8g7p.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-s8g7p.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-s8g7p.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-s8g7p.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-s8g7p.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 56.124.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.124.56_udp@PTR;check="$$(dig +tcp +noall +answer +search 56.124.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.124.56_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-s8g7p A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-s8g7p;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-s8g7p A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-s8g7p;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-s8g7p.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-s8g7p.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-s8g7p.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-s8g7p.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-s8g7p.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-s8g7p.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-s8g7p.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-s8g7p.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-s8g7p.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-s8g7p.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-s8g7p.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-s8g7p.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-s8g7p.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 56.124.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.124.56_udp@PTR;check="$$(dig +tcp +noall +answer +search 56.124.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.124.56_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 15 21:22:29.139: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:29.145: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-s8g7p from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:29.179: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:29.181: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:29.186: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-s8g7p from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:29.189: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-s8g7p from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:29.192: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-s8g7p.svc from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:29.195: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-s8g7p.svc from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:29.198: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-s8g7p.svc from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:29.226: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-s8g7p.svc from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:29.244: INFO: Lookups using 
e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012 failed for: [wheezy_udp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-s8g7p jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-s8g7p jessie_tcp@dns-test-service.e2e-tests-dns-s8g7p jessie_udp@dns-test-service.e2e-tests-dns-s8g7p.svc jessie_tcp@dns-test-service.e2e-tests-dns-s8g7p.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-s8g7p.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-s8g7p.svc] Mar 15 21:22:34.249: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:34.256: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-s8g7p from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:34.296: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:34.299: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:34.303: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-s8g7p from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:34.306: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-s8g7p from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:34.310: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-s8g7p.svc from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:34.313: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-s8g7p.svc from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:34.316: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-s8g7p.svc from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:34.320: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-s8g7p.svc from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:34.336: INFO: Lookups using e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012 failed for: [wheezy_udp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-s8g7p jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-s8g7p 
jessie_tcp@dns-test-service.e2e-tests-dns-s8g7p jessie_udp@dns-test-service.e2e-tests-dns-s8g7p.svc jessie_tcp@dns-test-service.e2e-tests-dns-s8g7p.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-s8g7p.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-s8g7p.svc] Mar 15 21:22:39.248: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:39.255: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-s8g7p from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:39.288: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:39.291: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:39.294: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-s8g7p from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:39.296: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-s8g7p from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:39.299: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-s8g7p.svc from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:39.301: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-s8g7p.svc from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:39.303: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-s8g7p.svc from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:39.306: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-s8g7p.svc from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:39.330: INFO: Lookups using e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012 failed for: [wheezy_udp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-s8g7p jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-s8g7p jessie_tcp@dns-test-service.e2e-tests-dns-s8g7p jessie_udp@dns-test-service.e2e-tests-dns-s8g7p.svc jessie_tcp@dns-test-service.e2e-tests-dns-s8g7p.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-s8g7p.svc 
jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-s8g7p.svc] Mar 15 21:22:44.249: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:44.256: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-s8g7p from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:44.296: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:44.299: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:44.302: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-s8g7p from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:44.306: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-s8g7p from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:44.309: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-s8g7p.svc from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:44.312: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-s8g7p.svc from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:44.316: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-s8g7p.svc from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:44.319: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-s8g7p.svc from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:44.340: INFO: Lookups using e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012 failed for: [wheezy_udp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-s8g7p jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-s8g7p jessie_tcp@dns-test-service.e2e-tests-dns-s8g7p jessie_udp@dns-test-service.e2e-tests-dns-s8g7p.svc jessie_tcp@dns-test-service.e2e-tests-dns-s8g7p.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-s8g7p.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-s8g7p.svc] Mar 15 21:22:49.333: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-s8g7p.svc from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the 
requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:49.336: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-s8g7p.svc from pod e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012: the server could not find the requested resource (get pods dns-test-098693e7-6703-11ea-9ccf-0242ac110012) Mar 15 21:22:49.359: INFO: Lookups using e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012 failed for: [jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-s8g7p.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-s8g7p.svc] Mar 15 21:22:54.338: INFO: DNS probes using e2e-tests-dns-s8g7p/dns-test-098693e7-6703-11ea-9ccf-0242ac110012 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:22:54.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-s8g7p" for this suite. Mar 15 21:23:00.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:23:00.721: INFO: namespace: e2e-tests-dns-s8g7p, resource: bindings, ignored listing per whitelist Mar 15 21:23:00.769: INFO: namespace e2e-tests-dns-s8g7p deletion completed in 6.140995303s • [SLOW TEST:47.876 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:23:00.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Mar 15 21:23:00.893: INFO: Waiting up to 5m0s for pod "downward-api-26082646-6703-11ea-9ccf-0242ac110012" in namespace "e2e-tests-downward-api-dslxn" to be "success or failure" Mar 15 21:23:00.909: INFO: Pod "downward-api-26082646-6703-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 16.488394ms Mar 15 21:23:02.913: INFO: Pod "downward-api-26082646-6703-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020547201s Mar 15 21:23:04.917: INFO: Pod "downward-api-26082646-6703-11ea-9ccf-0242ac110012": Phase="Running", Reason="", readiness=true. Elapsed: 4.024303416s Mar 15 21:23:06.921: INFO: Pod "downward-api-26082646-6703-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.028544082s STEP: Saw pod success Mar 15 21:23:06.921: INFO: Pod "downward-api-26082646-6703-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:23:06.925: INFO: Trying to get logs from node hunter-worker pod downward-api-26082646-6703-11ea-9ccf-0242ac110012 container dapi-container: STEP: delete the pod Mar 15 21:23:06.947: INFO: Waiting for pod downward-api-26082646-6703-11ea-9ccf-0242ac110012 to disappear Mar 15 21:23:06.950: INFO: Pod downward-api-26082646-6703-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:23:06.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-dslxn" for this suite. Mar 15 21:23:12.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:23:13.023: INFO: namespace: e2e-tests-downward-api-dslxn, resource: bindings, ignored listing per whitelist Mar 15 21:23:13.060: INFO: namespace e2e-tests-downward-api-dslxn deletion completed in 6.107026909s • [SLOW TEST:12.291 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:23:13.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 15 21:23:13.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-r7h6z' Mar 15 21:23:13.256: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 15 21:23:13.256: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Mar 15 21:23:15.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-r7h6z' Mar 15 21:23:15.433: INFO: stderr: "" Mar 15 21:23:15.433: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:23:15.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-r7h6z" for this suite. Mar 15 21:24:37.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:24:37.506: INFO: namespace: e2e-tests-kubectl-r7h6z, resource: bindings, ignored listing per whitelist Mar 15 21:24:37.543: INFO: namespace e2e-tests-kubectl-r7h6z deletion completed in 1m22.105528063s • [SLOW TEST:84.483 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:24:37.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-5fb7096d-6703-11ea-9ccf-0242ac110012 STEP: Creating a pod to test consume configMaps Mar 15 21:24:37.685: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5fb98653-6703-11ea-9ccf-0242ac110012" in namespace "e2e-tests-projected-sb2cl" to be "success or failure" Mar 15 21:24:37.689: INFO: Pod "pod-projected-configmaps-5fb98653-6703-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 3.620321ms Mar 15 21:24:39.692: INFO: Pod "pod-projected-configmaps-5fb98653-6703-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007127269s Mar 15 21:24:41.696: INFO: Pod "pod-projected-configmaps-5fb98653-6703-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010693672s STEP: Saw pod success Mar 15 21:24:41.696: INFO: Pod "pod-projected-configmaps-5fb98653-6703-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:24:41.698: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-5fb98653-6703-11ea-9ccf-0242ac110012 container projected-configmap-volume-test: STEP: delete the pod Mar 15 21:24:41.756: INFO: Waiting for pod pod-projected-configmaps-5fb98653-6703-11ea-9ccf-0242ac110012 to disappear Mar 15 21:24:41.771: INFO: Pod pod-projected-configmaps-5fb98653-6703-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:24:41.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-sb2cl" for this suite. Mar 15 21:24:47.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:24:47.813: INFO: namespace: e2e-tests-projected-sb2cl, resource: bindings, ignored listing per whitelist Mar 15 21:24:47.865: INFO: namespace e2e-tests-projected-sb2cl deletion completed in 6.090457028s • [SLOW TEST:10.322 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:24:47.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Mar 15 21:24:48.222: INFO: namespace e2e-tests-kubectl-5x7ks Mar 15 21:24:48.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-5x7ks' Mar 15 21:24:48.635: INFO: stderr: "" Mar 15 21:24:48.635: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Mar 15 21:24:49.640: INFO: Selector matched 1 pods for map[app:redis] Mar 15 21:24:49.640: INFO: Found 0 / 1 Mar 15 21:24:50.683: INFO: Selector matched 1 pods for map[app:redis] Mar 15 21:24:50.683: INFO: Found 0 / 1 Mar 15 21:24:51.639: INFO: Selector matched 1 pods for map[app:redis] Mar 15 21:24:51.639: INFO: Found 0 / 1 Mar 15 21:24:52.647: INFO: Selector matched 1 pods for map[app:redis] Mar 15 21:24:52.647: INFO: Found 0 / 1 Mar 15 21:24:53.640: INFO: Selector matched 1 pods for map[app:redis] Mar 15 21:24:53.640: INFO: Found 0 / 1 Mar 15 21:24:54.640: INFO: Selector matched 1 pods for map[app:redis] Mar 15 21:24:54.640: INFO: Found 0 / 1 Mar 15 21:24:55.639: INFO: Selector matched 1 pods for map[app:redis] Mar 15 21:24:55.639: INFO: Found 1 / 1 Mar 15 21:24:55.639: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 15 21:24:55.642: INFO: Selector matched 1 pods for map[app:redis] Mar 15 21:24:55.642: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 15 21:24:55.642: INFO: wait on redis-master startup in e2e-tests-kubectl-5x7ks Mar 15 21:24:55.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7mchp redis-master --namespace=e2e-tests-kubectl-5x7ks' Mar 15 21:24:55.766: INFO: stderr: "" Mar 15 21:24:55.766: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 15 Mar 21:24:54.543 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 15 Mar 21:24:54.543 # Server started, Redis version 3.2.12\n1:M 15 Mar 21:24:54.543 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 15 Mar 21:24:54.543 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Mar 15 21:24:55.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-5x7ks' Mar 15 21:24:55.905: INFO: stderr: "" Mar 15 21:24:55.905: INFO: stdout: "service/rm2 exposed\n" Mar 15 21:24:55.922: INFO: Service rm2 in namespace e2e-tests-kubectl-5x7ks found. STEP: exposing service Mar 15 21:24:57.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-5x7ks' Mar 15 21:24:58.102: INFO: stderr: "" Mar 15 21:24:58.102: INFO: stdout: "service/rm3 exposed\n" Mar 15 21:24:58.132: INFO: Service rm3 in namespace e2e-tests-kubectl-5x7ks found. 
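The expose sequence recorded above can be reproduced by hand against any running replication controller; a minimal sketch, assuming a controller named redis-master serving on 6379 already exists and that kubectl points at the same cluster (the namespace is a placeholder):

    # Expose the RC as a new service on an arbitrary front-end port, targeting redis on 6379.
    kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=<namespace>
    # An existing service can itself be exposed again under a different name and port.
    kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=<namespace>
    # Both services should resolve to the same pod endpoint on 6379.
    kubectl get endpoints rm2 rm3 --namespace=<namespace>
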
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:25:00.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-5x7ks" for this suite. Mar 15 21:25:24.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:25:24.244: INFO: namespace: e2e-tests-kubectl-5x7ks, resource: bindings, ignored listing per whitelist Mar 15 21:25:24.317: INFO: namespace e2e-tests-kubectl-5x7ks deletion completed in 24.106432997s • [SLOW TEST:36.452 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:25:24.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 15 21:25:24.441: INFO: Waiting up to 5m0s for pod "pod-7b971446-6703-11ea-9ccf-0242ac110012" in namespace "e2e-tests-emptydir-bpwbc" to be "success or failure" Mar 15 21:25:24.444: INFO: Pod "pod-7b971446-6703-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 3.346718ms Mar 15 21:25:26.498: INFO: Pod "pod-7b971446-6703-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056985531s Mar 15 21:25:28.503: INFO: Pod "pod-7b971446-6703-11ea-9ccf-0242ac110012": Phase="Running", Reason="", readiness=true. Elapsed: 4.061681608s Mar 15 21:25:30.506: INFO: Pod "pod-7b971446-6703-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06509409s STEP: Saw pod success Mar 15 21:25:30.506: INFO: Pod "pod-7b971446-6703-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:25:30.508: INFO: Trying to get logs from node hunter-worker pod pod-7b971446-6703-11ea-9ccf-0242ac110012 container test-container: STEP: delete the pod Mar 15 21:25:30.638: INFO: Waiting for pod pod-7b971446-6703-11ea-9ccf-0242ac110012 to disappear Mar 15 21:25:30.668: INFO: Pod pod-7b971446-6703-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:25:30.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-bpwbc" for this suite. 
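The (non-root,0644,default) emptyDir case amounts to a pod that mounts an emptyDir volume on the default medium and writes a 0644 file as a non-root user; a rough equivalent is sketched below, with the image, UID, and path chosen purely for illustration rather than taken from the test:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-0644-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000            # non-root, illustrative UID
      containers:
      - name: test-container
        image: busybox             # illustrative image
        command: ["sh", "-c", "echo hello > /mnt/test/file && chmod 0644 /mnt/test/file && ls -l /mnt/test/file"]
        volumeMounts:
        - name: scratch
          mountPath: /mnt/test
      volumes:
      - name: scratch
        emptyDir: {}               # default medium (node disk)
    EOF
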
Mar 15 21:25:36.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:25:36.866: INFO: namespace: e2e-tests-emptydir-bpwbc, resource: bindings, ignored listing per whitelist Mar 15 21:25:36.922: INFO: namespace e2e-tests-emptydir-bpwbc deletion completed in 6.249683099s • [SLOW TEST:12.604 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:25:36.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-8315e99e-6703-11ea-9ccf-0242ac110012 STEP: Creating a pod to test consume secrets Mar 15 21:25:37.022: INFO: Waiting up to 5m0s for pod "pod-secrets-83178cca-6703-11ea-9ccf-0242ac110012" in namespace "e2e-tests-secrets-6dl8x" to be "success or failure" Mar 15 21:25:37.026: INFO: Pod "pod-secrets-83178cca-6703-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 3.929203ms Mar 15 21:25:39.030: INFO: Pod "pod-secrets-83178cca-6703-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008513051s Mar 15 21:25:41.034: INFO: Pod "pod-secrets-83178cca-6703-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012923709s Mar 15 21:25:43.039: INFO: Pod "pod-secrets-83178cca-6703-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017313794s STEP: Saw pod success Mar 15 21:25:43.039: INFO: Pod "pod-secrets-83178cca-6703-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:25:43.042: INFO: Trying to get logs from node hunter-worker pod pod-secrets-83178cca-6703-11ea-9ccf-0242ac110012 container secret-volume-test: STEP: delete the pod Mar 15 21:25:43.063: INFO: Waiting for pod pod-secrets-83178cca-6703-11ea-9ccf-0242ac110012 to disappear Mar 15 21:25:43.075: INFO: Pod pod-secrets-83178cca-6703-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:25:43.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-6dl8x" for this suite. 
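A secret volume consumed as non-root with defaultMode and fsGroup set can be reproduced roughly as follows; the secret name, UID, GID, and mode here are illustrative, not the values the test generates:

    kubectl create secret generic demo-secret --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-defaultmode-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000            # non-root, illustrative
        fsGroup: 2000              # illustrative group; volume files are owned by this GID
      containers:
      - name: secret-volume-test
        image: busybox             # illustrative image
        command: ["sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
          readOnly: true
      volumes:
      - name: secret-volume
        secret:
          secretName: demo-secret
          defaultMode: 0440        # illustrative octal file mode
    EOF
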
Mar 15 21:25:49.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:25:49.158: INFO: namespace: e2e-tests-secrets-6dl8x, resource: bindings, ignored listing per whitelist Mar 15 21:25:49.173: INFO: namespace e2e-tests-secrets-6dl8x deletion completed in 6.094334523s • [SLOW TEST:12.250 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:25:49.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-9gbh6 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 15 21:25:49.263: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 15 21:26:17.388: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.121:8080/dial?request=hostName&protocol=http&host=10.244.2.120&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-9gbh6 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 21:26:17.388: INFO: >>> kubeConfig: /root/.kube/config I0315 21:26:17.426201 6 log.go:172] (0xc00299c000) (0xc001c28aa0) Create stream I0315 21:26:17.426234 6 log.go:172] (0xc00299c000) (0xc001c28aa0) Stream added, broadcasting: 1 I0315 21:26:17.429070 6 log.go:172] (0xc00299c000) Reply frame received for 1 I0315 21:26:17.429103 6 log.go:172] (0xc00299c000) (0xc0006e5360) Create stream I0315 21:26:17.429221 6 log.go:172] (0xc00299c000) (0xc0006e5360) Stream added, broadcasting: 3 I0315 21:26:17.430284 6 log.go:172] (0xc00299c000) Reply frame received for 3 I0315 21:26:17.430334 6 log.go:172] (0xc00299c000) (0xc001670320) Create stream I0315 21:26:17.430351 6 log.go:172] (0xc00299c000) (0xc001670320) Stream added, broadcasting: 5 I0315 21:26:17.431225 6 log.go:172] (0xc00299c000) Reply frame received for 5 I0315 21:26:17.503218 6 log.go:172] (0xc00299c000) Data frame received for 3 I0315 21:26:17.503256 6 log.go:172] (0xc0006e5360) (3) Data frame handling I0315 21:26:17.503283 6 log.go:172] (0xc0006e5360) (3) Data frame sent I0315 21:26:17.503374 6 log.go:172] (0xc00299c000) Data frame received for 5 I0315 21:26:17.503426 6 log.go:172] (0xc001670320) (5) Data frame handling I0315 21:26:17.503768 6 log.go:172] (0xc00299c000) Data frame received for 3 I0315 21:26:17.503780 6 log.go:172] (0xc0006e5360) (3) Data frame handling I0315 21:26:17.505847 6 log.go:172] (0xc00299c000) Data frame received for 
1 I0315 21:26:17.505885 6 log.go:172] (0xc001c28aa0) (1) Data frame handling I0315 21:26:17.505921 6 log.go:172] (0xc001c28aa0) (1) Data frame sent I0315 21:26:17.505960 6 log.go:172] (0xc00299c000) (0xc001c28aa0) Stream removed, broadcasting: 1 I0315 21:26:17.505997 6 log.go:172] (0xc00299c000) Go away received I0315 21:26:17.506121 6 log.go:172] (0xc00299c000) (0xc001c28aa0) Stream removed, broadcasting: 1 I0315 21:26:17.506153 6 log.go:172] (0xc00299c000) (0xc0006e5360) Stream removed, broadcasting: 3 I0315 21:26:17.506174 6 log.go:172] (0xc00299c000) (0xc001670320) Stream removed, broadcasting: 5 Mar 15 21:26:17.506: INFO: Waiting for endpoints: map[] Mar 15 21:26:17.510: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.121:8080/dial?request=hostName&protocol=http&host=10.244.1.121&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-9gbh6 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 21:26:17.510: INFO: >>> kubeConfig: /root/.kube/config I0315 21:26:17.543562 6 log.go:172] (0xc0022302c0) (0xc001670a00) Create stream I0315 21:26:17.543594 6 log.go:172] (0xc0022302c0) (0xc001670a00) Stream added, broadcasting: 1 I0315 21:26:17.545360 6 log.go:172] (0xc0022302c0) Reply frame received for 1 I0315 21:26:17.545392 6 log.go:172] (0xc0022302c0) (0xc001670aa0) Create stream I0315 21:26:17.545400 6 log.go:172] (0xc0022302c0) (0xc001670aa0) Stream added, broadcasting: 3 I0315 21:26:17.546301 6 log.go:172] (0xc0022302c0) Reply frame received for 3 I0315 21:26:17.546341 6 log.go:172] (0xc0022302c0) (0xc001670b40) Create stream I0315 21:26:17.546355 6 log.go:172] (0xc0022302c0) (0xc001670b40) Stream added, broadcasting: 5 I0315 21:26:17.547084 6 log.go:172] (0xc0022302c0) Reply frame received for 5 I0315 21:26:17.598450 6 log.go:172] (0xc0022302c0) Data frame received for 3 I0315 21:26:17.598474 6 log.go:172] (0xc001670aa0) (3) Data frame handling I0315 21:26:17.598490 6 log.go:172] (0xc001670aa0) (3) Data frame sent I0315 21:26:17.598929 6 log.go:172] (0xc0022302c0) Data frame received for 5 I0315 21:26:17.598946 6 log.go:172] (0xc001670b40) (5) Data frame handling I0315 21:26:17.598992 6 log.go:172] (0xc0022302c0) Data frame received for 3 I0315 21:26:17.599029 6 log.go:172] (0xc001670aa0) (3) Data frame handling I0315 21:26:17.600881 6 log.go:172] (0xc0022302c0) Data frame received for 1 I0315 21:26:17.600906 6 log.go:172] (0xc001670a00) (1) Data frame handling I0315 21:26:17.600919 6 log.go:172] (0xc001670a00) (1) Data frame sent I0315 21:26:17.600946 6 log.go:172] (0xc0022302c0) (0xc001670a00) Stream removed, broadcasting: 1 I0315 21:26:17.600964 6 log.go:172] (0xc0022302c0) Go away received I0315 21:26:17.601275 6 log.go:172] (0xc0022302c0) (0xc001670a00) Stream removed, broadcasting: 1 I0315 21:26:17.601384 6 log.go:172] (0xc0022302c0) (0xc001670aa0) Stream removed, broadcasting: 3 I0315 21:26:17.601411 6 log.go:172] (0xc0022302c0) (0xc001670b40) Stream removed, broadcasting: 5 Mar 15 21:26:17.601: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:26:17.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-9gbh6" for this suite. 
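The intra-pod HTTP check above is driven by curl from a host-exec pod against the test webserver's /dial helper, which in turn probes the target pod; the same probe can be issued manually (the pod IPs are the ones logged for this run and change every run):

    # Run from a pod that can reach the cluster pod network:
    curl -g -q -s 'http://10.244.2.121:8080/dial?request=hostName&protocol=http&host=10.244.2.120&port=8080&tries=1'
    # A successful check should return a JSON body listing the responding pod's host name.
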
Mar 15 21:26:41.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:26:41.699: INFO: namespace: e2e-tests-pod-network-test-9gbh6, resource: bindings, ignored listing per whitelist Mar 15 21:26:41.702: INFO: namespace e2e-tests-pod-network-test-9gbh6 deletion completed in 24.096122301s • [SLOW TEST:52.529 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:26:41.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:26:45.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-cszfl" for this suite. 
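The Kubelet case above starts a busybox pod whose command always fails and then asserts on the container's terminated state; the same field can be read by hand (pod name and namespace are placeholders):

    kubectl get pod <failing-pod> --namespace=<namespace> \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
    # For a command that exits non-zero this is typically "Error".
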
Mar 15 21:26:51.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:26:51.844: INFO: namespace: e2e-tests-kubelet-test-cszfl, resource: bindings, ignored listing per whitelist Mar 15 21:26:51.912: INFO: namespace e2e-tests-kubelet-test-cszfl deletion completed in 6.105399032s • [SLOW TEST:10.210 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:26:51.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-afc94ae0-6703-11ea-9ccf-0242ac110012 STEP: Creating a pod to test consume secrets Mar 15 21:26:52.058: INFO: Waiting up to 5m0s for pod "pod-secrets-afcdad76-6703-11ea-9ccf-0242ac110012" in namespace "e2e-tests-secrets-9gzgn" to be "success or failure" Mar 15 21:26:52.063: INFO: Pod "pod-secrets-afcdad76-6703-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 5.421329ms Mar 15 21:26:54.067: INFO: Pod "pod-secrets-afcdad76-6703-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009312767s Mar 15 21:26:56.071: INFO: Pod "pod-secrets-afcdad76-6703-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013083658s STEP: Saw pod success Mar 15 21:26:56.071: INFO: Pod "pod-secrets-afcdad76-6703-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:26:56.073: INFO: Trying to get logs from node hunter-worker pod pod-secrets-afcdad76-6703-11ea-9ccf-0242ac110012 container secret-volume-test: STEP: delete the pod Mar 15 21:26:56.128: INFO: Waiting for pod pod-secrets-afcdad76-6703-11ea-9ccf-0242ac110012 to disappear Mar 15 21:26:56.147: INFO: Pod pod-secrets-afcdad76-6703-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:26:56.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-9gzgn" for this suite. 
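The "with mappings" variant differs from the earlier secret-volume case only in that a key is projected under a different file name via items; a minimal sketch, reusing the illustrative secret created above:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-mapping-demo
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox                 # illustrative image
        command: ["cat", "/etc/secret-volume/new-path-data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
          readOnly: true
      volumes:
      - name: secret-volume
        secret:
          secretName: demo-secret      # assumes the secret from the sketch above
          items:
          - key: data-1                # secret key
            path: new-path-data-1      # file name it is projected as
    EOF
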
Mar 15 21:27:02.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:27:02.200: INFO: namespace: e2e-tests-secrets-9gzgn, resource: bindings, ignored listing per whitelist Mar 15 21:27:02.334: INFO: namespace e2e-tests-secrets-9gzgn deletion completed in 6.183469426s • [SLOW TEST:10.422 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:27:02.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 21:27:02.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Mar 15 21:27:02.659: INFO: stderr: "" Mar 15 21:27:02.659: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Mar 15 21:27:02.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-k2dm9' Mar 15 21:27:02.960: INFO: stderr: "" Mar 15 21:27:02.961: INFO: stdout: "replicationcontroller/redis-master created\n" Mar 15 21:27:02.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-k2dm9' Mar 15 21:27:03.346: INFO: stderr: "" Mar 15 21:27:03.346: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Mar 15 21:27:04.350: INFO: Selector matched 1 pods for map[app:redis] Mar 15 21:27:04.350: INFO: Found 0 / 1 Mar 15 21:27:05.350: INFO: Selector matched 1 pods for map[app:redis] Mar 15 21:27:05.350: INFO: Found 0 / 1 Mar 15 21:27:06.351: INFO: Selector matched 1 pods for map[app:redis] Mar 15 21:27:06.351: INFO: Found 0 / 1 Mar 15 21:27:07.351: INFO: Selector matched 1 pods for map[app:redis] Mar 15 21:27:07.351: INFO: Found 1 / 1 Mar 15 21:27:07.351: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 15 21:27:07.355: INFO: Selector matched 1 pods for map[app:redis] Mar 15 21:27:07.355: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Mar 15 21:27:07.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-5rxg5 --namespace=e2e-tests-kubectl-k2dm9' Mar 15 21:27:07.470: INFO: stderr: "" Mar 15 21:27:07.470: INFO: stdout: "Name: redis-master-5rxg5\nNamespace: e2e-tests-kubectl-k2dm9\nPriority: 0\nPriorityClassName: \nNode: hunter-worker2/172.17.0.4\nStart Time: Sun, 15 Mar 2020 21:27:03 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.122\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://851de1f2f07ba74c253858bc67e0fb2c3cac5f42b8a09905cf94f3d95052d0df\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sun, 15 Mar 2020 21:27:05 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-bskc9 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-bskc9:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-bskc9\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned e2e-tests-kubectl-k2dm9/redis-master-5rxg5 to hunter-worker2\n Normal Pulled 3s kubelet, hunter-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, hunter-worker2 Created container\n Normal Started 2s kubelet, hunter-worker2 Started container\n" Mar 15 21:27:07.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-k2dm9' Mar 15 21:27:07.589: INFO: stderr: "" Mar 15 21:27:07.589: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-k2dm9\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: redis-master-5rxg5\n" Mar 15 21:27:07.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-k2dm9' Mar 15 21:27:07.700: INFO: stderr: "" Mar 15 21:27:07.700: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-k2dm9\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.101.59.157\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.122:6379\nSession Affinity: None\nEvents: \n" Mar 15 21:27:07.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' Mar 15 21:27:07.823: INFO: stderr: "" Mar 15 21:27:07.823: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n 
beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:22:50 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sun, 15 Mar 2020 21:26:59 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sun, 15 Mar 2020 21:26:59 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sun, 15 Mar 2020 21:26:59 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sun, 15 Mar 2020 21:26:59 +0000 Sun, 15 Mar 2020 18:23:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3c4716968dac483293a23c2100ad64a5\n System UUID: 683417f7-64ca-431d-b8ac-22e73b26255e\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3h3m\n kube-system kindnet-l2xm6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 3h3m\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 3h3m\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 3h3m\n kube-system kube-proxy-mmppc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3h3m\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 3h3m\n local-path-storage local-path-provisioner-77cfdd744c-q47vg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3h3m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Mar 15 21:27:07.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-k2dm9' Mar 15 21:27:07.934: INFO: stderr: "" Mar 15 21:27:07.934: INFO: stdout: "Name: e2e-tests-kubectl-k2dm9\nLabels: e2e-framework=kubectl\n e2e-run=dfc500a4-66f9-11ea-9ccf-0242ac110012\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:27:07.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-k2dm9" for this suite. 
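The describe assertions can be approximated outside the suite by grepping kubectl describe output for the fields the test looks for; the resource names below are the ones from this run and are gone once the namespace is deleted:

    ns=e2e-tests-kubectl-k2dm9   # namespace from the run above; differs per run
    kubectl describe pod redis-master-5rxg5 --namespace="$ns" | grep -E 'Name:|Namespace:|Node:|Status:|IP:|Controlled By:'
    kubectl describe rc redis-master --namespace="$ns"        | grep -E 'Name:|Replicas:|Pods Status:'
    kubectl describe service redis-master --namespace="$ns"   | grep -E 'Name:|Type:|IP:|Port:|Endpoints:'
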
Mar 15 21:27:30.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:27:30.198: INFO: namespace: e2e-tests-kubectl-k2dm9, resource: bindings, ignored listing per whitelist Mar 15 21:27:30.350: INFO: namespace e2e-tests-kubectl-k2dm9 deletion completed in 22.411908533s • [SLOW TEST:28.016 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:27:30.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 21:27:30.519: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/ pods/
(200; 5.388803ms) Mar 15 21:27:30.523: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/
(200; 3.690607ms) Mar 15 21:27:30.527: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/
(200; 3.674043ms) Mar 15 21:27:30.530: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/
(200; 3.543058ms) Mar 15 21:27:30.534: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/
(200; 3.754609ms) Mar 15 21:27:30.537: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/
(200; 3.145209ms) Mar 15 21:27:30.540: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/
(200; 3.012517ms) Mar 15 21:27:30.544: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/
(200; 3.162844ms) Mar 15 21:27:30.547: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/
(200; 3.178421ms) Mar 15 21:27:30.550: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/
(200; 2.929234ms) Mar 15 21:27:30.553: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/
(200; 3.287561ms) Mar 15 21:27:30.556: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/
(200; 3.345147ms) Mar 15 21:27:30.560: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/
(200; 3.390787ms) Mar 15 21:27:30.563: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/
(200; 3.539993ms) Mar 15 21:27:30.567: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/
(200; 3.468278ms) Mar 15 21:27:30.668: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/
(200; 101.339262ms) Mar 15 21:27:30.673: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/
(200; 5.080703ms) Mar 15 21:27:30.677: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/
(200; 3.474271ms) Mar 15 21:27:30.680: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/
(200; 3.288255ms) Mar 15 21:27:30.683: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/
(200; 2.709405ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:27:30.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-l22km" for this suite. Mar 15 21:27:36.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:27:36.764: INFO: namespace: e2e-tests-proxy-l22km, resource: bindings, ignored listing per whitelist Mar 15 21:27:36.795: INFO: namespace e2e-tests-proxy-l22km deletion completed in 6.109091941s • [SLOW TEST:6.445 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:27:36.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Mar 15 21:27:41.423: INFO: Successfully updated pod "annotationupdateca8885e7-6703-11ea-9ccf-0242ac110012" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:27:43.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hbvd9" for this suite. 
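The projected downwardAPI update above ("Successfully updated pod annotationupdate...") relies on a volume that re-renders metadata.annotations after the pod is annotated. A minimal sketch of that setup, assuming a hypothetical pod name, image and mount path (none of these come from this run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    build: one
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
# Changing an annotation should eventually show up in the mounted file:
kubectl annotate pod annotationupdate-demo build=two --overwrite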
Mar 15 21:28:07.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:28:07.798: INFO: namespace: e2e-tests-projected-hbvd9, resource: bindings, ignored listing per whitelist Mar 15 21:28:07.872: INFO: namespace e2e-tests-projected-hbvd9 deletion completed in 24.103981554s • [SLOW TEST:31.077 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:28:07.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-dd0fcfdc-6703-11ea-9ccf-0242ac110012 STEP: Creating a pod to test consume configMaps Mar 15 21:28:08.001: INFO: Waiting up to 5m0s for pod "pod-configmaps-dd13da04-6703-11ea-9ccf-0242ac110012" in namespace "e2e-tests-configmap-95v9d" to be "success or failure" Mar 15 21:28:08.023: INFO: Pod "pod-configmaps-dd13da04-6703-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 22.301626ms Mar 15 21:28:10.027: INFO: Pod "pod-configmaps-dd13da04-6703-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026025315s Mar 15 21:28:12.186: INFO: Pod "pod-configmaps-dd13da04-6703-11ea-9ccf-0242ac110012": Phase="Running", Reason="", readiness=true. Elapsed: 4.185525192s Mar 15 21:28:14.190: INFO: Pod "pod-configmaps-dd13da04-6703-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.189250408s STEP: Saw pod success Mar 15 21:28:14.190: INFO: Pod "pod-configmaps-dd13da04-6703-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:28:14.192: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-dd13da04-6703-11ea-9ccf-0242ac110012 container configmap-volume-test: STEP: delete the pod Mar 15 21:28:14.226: INFO: Waiting for pod pod-configmaps-dd13da04-6703-11ea-9ccf-0242ac110012 to disappear Mar 15 21:28:14.238: INFO: Pod pod-configmaps-dd13da04-6703-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:28:14.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-95v9d" for this suite. 
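The ConfigMap volume consumption tested above amounts to mounting a ConfigMap as files and reading a key back from the container. An illustrative sketch, with the ConfigMap name, key and image assumed rather than taken from this run:

kubectl create configmap configmap-volume-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-volume-demo
EOF
# The pod should run to completion ("success or failure" above) with the key's value in its logs.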
Mar 15 21:28:20.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:28:20.390: INFO: namespace: e2e-tests-configmap-95v9d, resource: bindings, ignored listing per whitelist Mar 15 21:28:20.406: INFO: namespace e2e-tests-configmap-95v9d deletion completed in 6.164772203s • [SLOW TEST:12.533 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:28:20.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 15 21:28:32.005: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 15 21:28:32.154: INFO: Pod pod-with-prestop-http-hook still exists Mar 15 21:28:34.154: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 15 21:28:34.158: INFO: Pod pod-with-prestop-http-hook still exists Mar 15 21:28:36.154: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 15 21:28:36.158: INFO: Pod pod-with-prestop-http-hook still exists Mar 15 21:28:38.154: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 15 21:28:38.159: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:28:38.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-gdqnq" for this suite. 
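The preStop HTTP hook checked above fires an HTTP GET against a handler before the container is stopped; in the test the handler is a separate pod created earlier in the same namespace. A rough sketch, where the image, port, path and handler address are all assumptions:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: nginx
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop
          port: 8080
          host: 10.244.1.10   # address of the hook-handler pod (assumed)
EOF
# Deleting the pod triggers the preStop HTTP GET before the container is terminated:
kubectl delete pod pod-with-prestop-http-hook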
Mar 15 21:29:00.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:29:00.230: INFO: namespace: e2e-tests-container-lifecycle-hook-gdqnq, resource: bindings, ignored listing per whitelist Mar 15 21:29:00.288: INFO: namespace e2e-tests-container-lifecycle-hook-gdqnq deletion completed in 22.117426108s • [SLOW TEST:39.882 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:29:00.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 15 21:29:01.162: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fcc50d44-6703-11ea-9ccf-0242ac110012" in namespace "e2e-tests-projected-hks5s" to be "success or failure" Mar 15 21:29:01.185: INFO: Pod "downwardapi-volume-fcc50d44-6703-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 23.25078ms Mar 15 21:29:03.190: INFO: Pod "downwardapi-volume-fcc50d44-6703-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027739471s Mar 15 21:29:05.194: INFO: Pod "downwardapi-volume-fcc50d44-6703-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031948352s STEP: Saw pod success Mar 15 21:29:05.194: INFO: Pod "downwardapi-volume-fcc50d44-6703-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:29:05.198: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-fcc50d44-6703-11ea-9ccf-0242ac110012 container client-container: STEP: delete the pod Mar 15 21:29:05.231: INFO: Waiting for pod downwardapi-volume-fcc50d44-6703-11ea-9ccf-0242ac110012 to disappear Mar 15 21:29:05.251: INFO: Pod downwardapi-volume-fcc50d44-6703-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:29:05.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hks5s" for this suite. 
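The downward API volume in this test surfaces the container's CPU limit as a file. A minimal sketch, assuming a hypothetical pod name, image and limit values:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpulimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
        memory: 128Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
EOF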
Mar 15 21:29:13.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:29:13.328: INFO: namespace: e2e-tests-projected-hks5s, resource: bindings, ignored listing per whitelist Mar 15 21:29:13.344: INFO: namespace e2e-tests-projected-hks5s deletion completed in 8.08968353s • [SLOW TEST:13.055 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:29:13.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-vk6l9.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-vk6l9.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-vk6l9.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-vk6l9.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-vk6l9.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-vk6l9.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 15 21:29:19.577: INFO: DNS probes using e2e-tests-dns-vk6l9/dns-test-04186d74-6704-11ea-9ccf-0242ac110012 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:29:19.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-vk6l9" for this suite. 
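The dig loops above cover UDP and TCP lookups of kubernetes.default at several search-suffix lengths plus the pod A record. A quicker manual spot check of cluster DNS can be done with a throwaway pod (the image tag is an assumption, not taken from this run):

kubectl run dns-check --image=busybox:1.28 --restart=Never --rm -it -- nslookup kubernetes.default
# A healthy cluster resolves the name to the kubernetes service ClusterIP via kube-dns/CoreDNS.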
Mar 15 21:29:25.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:29:25.698: INFO: namespace: e2e-tests-dns-vk6l9, resource: bindings, ignored listing per whitelist Mar 15 21:29:25.700: INFO: namespace e2e-tests-dns-vk6l9 deletion completed in 6.088971301s • [SLOW TEST:12.355 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:29:25.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Mar 15 21:29:25.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-q4lgj' Mar 15 21:29:28.174: INFO: stderr: "" Mar 15 21:29:28.174: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Mar 15 21:29:29.178: INFO: Selector matched 1 pods for map[app:redis] Mar 15 21:29:29.178: INFO: Found 0 / 1 Mar 15 21:29:30.178: INFO: Selector matched 1 pods for map[app:redis] Mar 15 21:29:30.178: INFO: Found 0 / 1 Mar 15 21:29:31.382: INFO: Selector matched 1 pods for map[app:redis] Mar 15 21:29:31.383: INFO: Found 0 / 1 Mar 15 21:29:32.178: INFO: Selector matched 1 pods for map[app:redis] Mar 15 21:29:32.178: INFO: Found 0 / 1 Mar 15 21:29:33.178: INFO: Selector matched 1 pods for map[app:redis] Mar 15 21:29:33.178: INFO: Found 1 / 1 Mar 15 21:29:33.178: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 15 21:29:33.181: INFO: Selector matched 1 pods for map[app:redis] Mar 15 21:29:33.181: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 15 21:29:33.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-czvjm --namespace=e2e-tests-kubectl-q4lgj -p {"metadata":{"annotations":{"x":"y"}}}' Mar 15 21:29:33.290: INFO: stderr: "" Mar 15 21:29:33.290: INFO: stdout: "pod/redis-master-czvjm patched\n" STEP: checking annotations Mar 15 21:29:33.307: INFO: Selector matched 1 pods for map[app:redis] Mar 15 21:29:33.307: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:29:33.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-q4lgj" for this suite. 
Mar 15 21:29:57.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:29:57.344: INFO: namespace: e2e-tests-kubectl-q4lgj, resource: bindings, ignored listing per whitelist Mar 15 21:29:57.400: INFO: namespace e2e-tests-kubectl-q4lgj deletion completed in 24.08889336s • [SLOW TEST:31.700 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:29:57.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 15 21:29:57.598: INFO: Waiting up to 5m0s for pod "pod-1e68cc1f-6704-11ea-9ccf-0242ac110012" in namespace "e2e-tests-emptydir-wbqmc" to be "success or failure" Mar 15 21:29:57.613: INFO: Pod "pod-1e68cc1f-6704-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 14.514456ms Mar 15 21:29:59.730: INFO: Pod "pod-1e68cc1f-6704-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131151941s Mar 15 21:30:01.734: INFO: Pod "pod-1e68cc1f-6704-11ea-9ccf-0242ac110012": Phase="Running", Reason="", readiness=true. Elapsed: 4.135105338s Mar 15 21:30:03.738: INFO: Pod "pod-1e68cc1f-6704-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.139242776s STEP: Saw pod success Mar 15 21:30:03.738: INFO: Pod "pod-1e68cc1f-6704-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:30:03.741: INFO: Trying to get logs from node hunter-worker pod pod-1e68cc1f-6704-11ea-9ccf-0242ac110012 container test-container: STEP: delete the pod Mar 15 21:30:03.809: INFO: Waiting for pod pod-1e68cc1f-6704-11ea-9ccf-0242ac110012 to disappear Mar 15 21:30:03.815: INFO: Pod pod-1e68cc1f-6704-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:30:03.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wbqmc" for this suite. 
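The (non-root,0666,default) case above boils down to a non-root container writing a file with mode 0666 into an emptyDir on the node's default medium and reading the permissions back. A sketch with an assumed UID, image and paths:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001          # non-root UID (assumed value)
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /ephemeral/testfile && chmod 0666 /ephemeral/testfile && ls -l /ephemeral"]
    volumeMounts:
    - name: scratch
      mountPath: /ephemeral
  volumes:
  - name: scratch
    emptyDir: {}             # default medium (node disk)
EOF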
Mar 15 21:30:09.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:30:09.906: INFO: namespace: e2e-tests-emptydir-wbqmc, resource: bindings, ignored listing per whitelist Mar 15 21:30:09.935: INFO: namespace e2e-tests-emptydir-wbqmc deletion completed in 6.116431109s • [SLOW TEST:12.534 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:30:09.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod Mar 15 21:30:14.076: INFO: Pod pod-hostip-25d144f8-6704-11ea-9ccf-0242ac110012 has hostIP: 172.17.0.3 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:30:14.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-sxpbg" for this suite. 
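The hostIP reported above (172.17.0.3) comes from pod status; it can also be injected into a container through the downward API. A minimal sketch, with pod name and image assumed:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-hostip-demo
spec:
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "echo host IP is $HOST_IP && sleep 3600"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
EOF
# The same value is visible from outside the pod:
kubectl get pod pod-hostip-demo -o jsonpath='{.status.hostIP}'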
Mar 15 21:30:36.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:30:36.109: INFO: namespace: e2e-tests-pods-sxpbg, resource: bindings, ignored listing per whitelist Mar 15 21:30:36.165: INFO: namespace e2e-tests-pods-sxpbg deletion completed in 22.084702177s • [SLOW TEST:26.230 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:30:36.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 21:31:02.294: INFO: Container started at 2020-03-15 21:30:38 +0000 UTC, pod became ready at 2020-03-15 21:31:00 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:31:02.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-bf2nn" for this suite. 
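The readiness-probe timing above (container started 21:30:38, ready 21:31:00) is the expected effect of an initial delay on the probe. A minimal sketch of a pod whose readiness is gated by such a delay, with the image and timings assumed:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-delay-demo
spec:
  containers:
  - name: test-webserver
    image: nginx
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20   # pod should not report Ready before this delay
      periodSeconds: 5
EOF
# Watch the READY column flip only after the initial delay has elapsed:
kubectl get pod readiness-delay-demo -w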
Mar 15 21:31:24.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:31:24.337: INFO: namespace: e2e-tests-container-probe-bf2nn, resource: bindings, ignored listing per whitelist Mar 15 21:31:24.393: INFO: namespace e2e-tests-container-probe-bf2nn deletion completed in 22.094383649s • [SLOW TEST:48.228 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:31:24.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-5235c79f-6704-11ea-9ccf-0242ac110012 STEP: Creating a pod to test consume secrets Mar 15 21:31:24.519: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-52375ddf-6704-11ea-9ccf-0242ac110012" in namespace "e2e-tests-projected-bcsxg" to be "success or failure" Mar 15 21:31:24.523: INFO: Pod "pod-projected-secrets-52375ddf-6704-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 3.921822ms Mar 15 21:31:26.527: INFO: Pod "pod-projected-secrets-52375ddf-6704-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00821311s Mar 15 21:31:28.531: INFO: Pod "pod-projected-secrets-52375ddf-6704-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012391694s STEP: Saw pod success Mar 15 21:31:28.531: INFO: Pod "pod-projected-secrets-52375ddf-6704-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:31:28.534: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-52375ddf-6704-11ea-9ccf-0242ac110012 container projected-secret-volume-test: STEP: delete the pod Mar 15 21:31:28.582: INFO: Waiting for pod pod-projected-secrets-52375ddf-6704-11ea-9ccf-0242ac110012 to disappear Mar 15 21:31:28.589: INFO: Pod pod-projected-secrets-52375ddf-6704-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:31:28.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bcsxg" for this suite. 
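The defaultMode variant above controls the file mode under which the projected secret's keys are materialized. Illustrative sketch, with secret name, key, mode and image assumed:

kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400      # files are created with this mode
      sources:
      - secret:
          name: projected-secret-demo
EOF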
Mar 15 21:31:34.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:31:34.619: INFO: namespace: e2e-tests-projected-bcsxg, resource: bindings, ignored listing per whitelist Mar 15 21:31:34.688: INFO: namespace e2e-tests-projected-bcsxg deletion completed in 6.095751044s • [SLOW TEST:10.294 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:31:34.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 15 21:31:39.332: INFO: Successfully updated pod "pod-update-activedeadlineseconds-58577dfe-6704-11ea-9ccf-0242ac110012" Mar 15 21:31:39.332: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-58577dfe-6704-11ea-9ccf-0242ac110012" in namespace "e2e-tests-pods-lqpz9" to be "terminated due to deadline exceeded" Mar 15 21:31:39.339: INFO: Pod "pod-update-activedeadlineseconds-58577dfe-6704-11ea-9ccf-0242ac110012": Phase="Running", Reason="", readiness=true. Elapsed: 7.123918ms Mar 15 21:31:41.343: INFO: Pod "pod-update-activedeadlineseconds-58577dfe-6704-11ea-9ccf-0242ac110012": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.010712826s Mar 15 21:31:41.343: INFO: Pod "pod-update-activedeadlineseconds-58577dfe-6704-11ea-9ccf-0242ac110012" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:31:41.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-lqpz9" for this suite. 
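The activeDeadlineSeconds update above is one of the few pod-spec mutations allowed on a running pod; once the deadline passes, the pod moves to Failed with reason DeadlineExceeded, as the logged phases show. A sketch of doing the same by hand, with the pod name and deadline assumed:

kubectl patch pod pod-update-activedeadlineseconds-demo -p '{"spec":{"activeDeadlineSeconds":5}}'
# After roughly the deadline, phase/reason should read "Failed DeadlineExceeded":
kubectl get pod pod-update-activedeadlineseconds-demo -o jsonpath='{.status.phase}{" "}{.status.reason}'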
Mar 15 21:31:47.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:31:47.410: INFO: namespace: e2e-tests-pods-lqpz9, resource: bindings, ignored listing per whitelist Mar 15 21:31:47.422: INFO: namespace e2e-tests-pods-lqpz9 deletion completed in 6.075177952s • [SLOW TEST:12.734 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:31:47.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Mar 15 21:31:48.044: INFO: created pod pod-service-account-defaultsa Mar 15 21:31:48.044: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 15 21:31:48.065: INFO: created pod pod-service-account-mountsa Mar 15 21:31:48.065: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 15 21:31:48.107: INFO: created pod pod-service-account-nomountsa Mar 15 21:31:48.107: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 15 21:31:48.118: INFO: created pod pod-service-account-defaultsa-mountspec Mar 15 21:31:48.118: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 15 21:31:48.187: INFO: created pod pod-service-account-mountsa-mountspec Mar 15 21:31:48.187: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 15 21:31:48.196: INFO: created pod pod-service-account-nomountsa-mountspec Mar 15 21:31:48.196: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 15 21:31:48.242: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 15 21:31:48.242: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 15 21:31:48.277: INFO: created pod pod-service-account-mountsa-nomountspec Mar 15 21:31:48.277: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 15 21:31:48.336: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 15 21:31:48.336: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:31:48.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-tmgfw" for this suite. 
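Opting out of token automount, as exercised above, can be expressed on the ServiceAccount or on the pod spec, with the pod-level field taking precedence when both are set. A sketch with assumed names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-nomount-demo
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false   # pod-level override
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
EOF
# With automount disabled, no service-account token volume should appear in the pod spec:
kubectl get pod pod-service-account-nomount-demo -o jsonpath='{.spec.volumes}'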
Mar 15 21:32:14.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:32:14.481: INFO: namespace: e2e-tests-svcaccounts-tmgfw, resource: bindings, ignored listing per whitelist Mar 15 21:32:14.503: INFO: namespace e2e-tests-svcaccounts-tmgfw deletion completed in 26.128713171s • [SLOW TEST:27.081 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:32:14.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-700ffeb0-6704-11ea-9ccf-0242ac110012 STEP: Creating a pod to test consume configMaps Mar 15 21:32:14.606: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-70120cdd-6704-11ea-9ccf-0242ac110012" in namespace "e2e-tests-projected-p58dp" to be "success or failure" Mar 15 21:32:14.619: INFO: Pod "pod-projected-configmaps-70120cdd-6704-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 12.714757ms Mar 15 21:32:16.629: INFO: Pod "pod-projected-configmaps-70120cdd-6704-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02322834s Mar 15 21:32:18.634: INFO: Pod "pod-projected-configmaps-70120cdd-6704-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027429631s STEP: Saw pod success Mar 15 21:32:18.634: INFO: Pod "pod-projected-configmaps-70120cdd-6704-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:32:18.637: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-70120cdd-6704-11ea-9ccf-0242ac110012 container projected-configmap-volume-test: STEP: delete the pod Mar 15 21:32:18.657: INFO: Waiting for pod pod-projected-configmaps-70120cdd-6704-11ea-9ccf-0242ac110012 to disappear Mar 15 21:32:18.672: INFO: Pod pod-projected-configmaps-70120cdd-6704-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:32:18.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-p58dp" for this suite. 
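Consuming one ConfigMap through two volumes in the same pod, as in the test above, looks roughly like the following (names, key and image are assumptions):

kubectl create configmap projected-configmap-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    projected:
      sources:
      - configMap:
          name: projected-configmap-demo
  - name: configmap-volume-2
    projected:
      sources:
      - configMap:
          name: projected-configmap-demo
EOF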
Mar 15 21:32:24.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:32:24.765: INFO: namespace: e2e-tests-projected-p58dp, resource: bindings, ignored listing per whitelist Mar 15 21:32:24.778: INFO: namespace e2e-tests-projected-p58dp deletion completed in 6.102177471s • [SLOW TEST:10.275 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:32:24.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 21:32:28.987: INFO: Waiting up to 5m0s for pod "client-envvars-78a5041e-6704-11ea-9ccf-0242ac110012" in namespace "e2e-tests-pods-wr5sq" to be "success or failure" Mar 15 21:32:29.103: INFO: Pod "client-envvars-78a5041e-6704-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 115.701867ms Mar 15 21:32:31.106: INFO: Pod "client-envvars-78a5041e-6704-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119222135s Mar 15 21:32:33.126: INFO: Pod "client-envvars-78a5041e-6704-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.138749643s STEP: Saw pod success Mar 15 21:32:33.126: INFO: Pod "client-envvars-78a5041e-6704-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:32:33.128: INFO: Trying to get logs from node hunter-worker pod client-envvars-78a5041e-6704-11ea-9ccf-0242ac110012 container env3cont: STEP: delete the pod Mar 15 21:32:33.170: INFO: Waiting for pod client-envvars-78a5041e-6704-11ea-9ccf-0242ac110012 to disappear Mar 15 21:32:33.173: INFO: Pod client-envvars-78a5041e-6704-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:32:33.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-wr5sq" for this suite. 
Mar 15 21:33:23.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:33:23.274: INFO: namespace: e2e-tests-pods-wr5sq, resource: bindings, ignored listing per whitelist Mar 15 21:33:23.295: INFO: namespace e2e-tests-pods-wr5sq deletion completed in 50.118978499s • [SLOW TEST:58.517 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:33:23.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 15 21:33:23.401: INFO: Waiting up to 5m0s for pod "pod-99122c6a-6704-11ea-9ccf-0242ac110012" in namespace "e2e-tests-emptydir-dm5v9" to be "success or failure" Mar 15 21:33:23.420: INFO: Pod "pod-99122c6a-6704-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 18.736847ms Mar 15 21:33:25.423: INFO: Pod "pod-99122c6a-6704-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022240553s Mar 15 21:33:27.427: INFO: Pod "pod-99122c6a-6704-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026009608s STEP: Saw pod success Mar 15 21:33:27.427: INFO: Pod "pod-99122c6a-6704-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:33:27.429: INFO: Trying to get logs from node hunter-worker pod pod-99122c6a-6704-11ea-9ccf-0242ac110012 container test-container: STEP: delete the pod Mar 15 21:33:27.458: INFO: Waiting for pod pod-99122c6a-6704-11ea-9ccf-0242ac110012 to disappear Mar 15 21:33:27.474: INFO: Pod pod-99122c6a-6704-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:33:27.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-dm5v9" for this suite. 
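The (non-root,0777,tmpfs) case differs from the default-medium one only in that the emptyDir is memory-backed. A sketch with an assumed UID and image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001           # non-root UID (assumed value)
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /ephemeral && echo hi > /ephemeral/f && chmod 0777 /ephemeral/f && ls -l /ephemeral"]
    volumeMounts:
    - name: scratch
      mountPath: /ephemeral
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory          # backs the volume with tmpfs instead of node disk
EOF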
Mar 15 21:33:33.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:33:33.498: INFO: namespace: e2e-tests-emptydir-dm5v9, resource: bindings, ignored listing per whitelist Mar 15 21:33:33.582: INFO: namespace e2e-tests-emptydir-dm5v9 deletion completed in 6.105109087s • [SLOW TEST:10.286 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:33:33.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 15 21:33:33.689: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9f344afe-6704-11ea-9ccf-0242ac110012" in namespace "e2e-tests-projected-clg8n" to be "success or failure" Mar 15 21:33:33.693: INFO: Pod "downwardapi-volume-9f344afe-6704-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 3.760119ms Mar 15 21:33:35.697: INFO: Pod "downwardapi-volume-9f344afe-6704-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00801226s Mar 15 21:33:37.702: INFO: Pod "downwardapi-volume-9f344afe-6704-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012217229s STEP: Saw pod success Mar 15 21:33:37.702: INFO: Pod "downwardapi-volume-9f344afe-6704-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:33:37.706: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-9f344afe-6704-11ea-9ccf-0242ac110012 container client-container: STEP: delete the pod Mar 15 21:33:37.724: INFO: Waiting for pod downwardapi-volume-9f344afe-6704-11ea-9ccf-0242ac110012 to disappear Mar 15 21:33:37.729: INFO: Pod downwardapi-volume-9f344afe-6704-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:33:37.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-clg8n" for this suite. 
Mar 15 21:33:45.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:33:45.806: INFO: namespace: e2e-tests-projected-clg8n, resource: bindings, ignored listing per whitelist Mar 15 21:33:45.837: INFO: namespace e2e-tests-projected-clg8n deletion completed in 8.105416796s • [SLOW TEST:12.255 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:33:45.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Mar 15 21:33:46.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Mar 15 21:33:46.767: INFO: stderr: "" Mar 15 21:33:46.767: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:33:46.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8vtwl" for this suite. 
Mar 15 21:33:52.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:33:53.057: INFO: namespace: e2e-tests-kubectl-8vtwl, resource: bindings, ignored listing per whitelist Mar 15 21:33:53.060: INFO: namespace e2e-tests-kubectl-8vtwl deletion completed in 6.269082435s • [SLOW TEST:7.222 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:33:53.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Mar 15 21:33:57.690: INFO: Successfully updated pod "annotationupdateaacfc6ba-6704-11ea-9ccf-0242ac110012" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:33:59.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-bhlw6" for this suite. 
Mar 15 21:34:23.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:34:23.868: INFO: namespace: e2e-tests-downward-api-bhlw6, resource: bindings, ignored listing per whitelist Mar 15 21:34:23.885: INFO: namespace e2e-tests-downward-api-bhlw6 deletion completed in 24.169245805s • [SLOW TEST:30.826 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:34:23.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Mar 15 21:34:23.997: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 15 21:34:24.016: INFO: Waiting for terminating namespaces to be deleted... Mar 15 21:34:24.019: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Mar 15 21:34:24.024: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) Mar 15 21:34:24.024: INFO: Container kube-proxy ready: true, restart count 0 Mar 15 21:34:24.024: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Mar 15 21:34:24.024: INFO: Container kindnet-cni ready: true, restart count 0 Mar 15 21:34:24.024: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Mar 15 21:34:24.024: INFO: Container coredns ready: true, restart count 0 Mar 15 21:34:24.024: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Mar 15 21:34:24.031: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Mar 15 21:34:24.031: INFO: Container kindnet-cni ready: true, restart count 0 Mar 15 21:34:24.031: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Mar 15 21:34:24.031: INFO: Container coredns ready: true, restart count 0 Mar 15 21:34:24.031: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Mar 15 21:34:24.031: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-worker STEP: verifying the node has the label node hunter-worker2 Mar 15 21:34:24.137: INFO: Pod coredns-54ff9cd656-4h7lb requesting resource cpu=100m on Node hunter-worker Mar 15 21:34:24.138: INFO: Pod 
coredns-54ff9cd656-8vrkk requesting resource cpu=100m on Node hunter-worker2 Mar 15 21:34:24.138: INFO: Pod kindnet-54h7m requesting resource cpu=100m on Node hunter-worker Mar 15 21:34:24.138: INFO: Pod kindnet-mtqrs requesting resource cpu=100m on Node hunter-worker2 Mar 15 21:34:24.138: INFO: Pod kube-proxy-s52ll requesting resource cpu=0m on Node hunter-worker2 Mar 15 21:34:24.138: INFO: Pod kube-proxy-szbng requesting resource cpu=0m on Node hunter-worker STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-bd484085-6704-11ea-9ccf-0242ac110012.15fc97ca52779bde], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-nhkjw/filler-pod-bd484085-6704-11ea-9ccf-0242ac110012 to hunter-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-bd484085-6704-11ea-9ccf-0242ac110012.15fc97ca95323f06], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-bd484085-6704-11ea-9ccf-0242ac110012.15fc97cb249141d8], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-bd484085-6704-11ea-9ccf-0242ac110012.15fc97cb3313e46f], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Normal], Name = [filler-pod-bd491592-6704-11ea-9ccf-0242ac110012.15fc97ca596a0d4f], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-nhkjw/filler-pod-bd491592-6704-11ea-9ccf-0242ac110012 to hunter-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-bd491592-6704-11ea-9ccf-0242ac110012.15fc97cabd6334b5], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-bd491592-6704-11ea-9ccf-0242ac110012.15fc97cb27139f23], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-bd491592-6704-11ea-9ccf-0242ac110012.15fc97cb35d0ebfe], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.15fc97cbbfeebf6a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node hunter-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node hunter-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:34:31.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-nhkjw" for this suite. 
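[Editor's note] The scheduler-predicates test above works by summing the CPU requests already on each node, filling the remaining allocatable CPU with pause pods, and then asserting that one more pod fails with "Insufficient cpu". The sketch below shows how such a "filler" pod with an explicit CPU request is expressed with the corev1 types; the name and the 500m figure are illustrative, not the values the test computed for this cluster.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A pause pod that requests a fixed slice of node CPU. The scheduler adds
	// such requests per node; once a new pod's request no longer fits, it is
	// rejected with the FailedScheduling / Insufficient cpu event seen above.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "filler-pod-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("500m"), // illustrative request
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}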
Mar 15 21:34:37.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:34:37.473: INFO: namespace: e2e-tests-sched-pred-nhkjw, resource: bindings, ignored listing per whitelist Mar 15 21:34:37.519: INFO: namespace e2e-tests-sched-pred-nhkjw deletion completed in 6.0791409s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:13.634 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:34:37.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 15 21:34:37.916: INFO: Waiting up to 5m0s for pod "pod-c56a9061-6704-11ea-9ccf-0242ac110012" in namespace "e2e-tests-emptydir-jmvzg" to be "success or failure" Mar 15 21:34:37.929: INFO: Pod "pod-c56a9061-6704-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 12.64001ms Mar 15 21:34:39.932: INFO: Pod "pod-c56a9061-6704-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015888784s Mar 15 21:34:41.936: INFO: Pod "pod-c56a9061-6704-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020010766s STEP: Saw pod success Mar 15 21:34:41.936: INFO: Pod "pod-c56a9061-6704-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:34:41.939: INFO: Trying to get logs from node hunter-worker2 pod pod-c56a9061-6704-11ea-9ccf-0242ac110012 container test-container: STEP: delete the pod Mar 15 21:34:42.013: INFO: Waiting for pod pod-c56a9061-6704-11ea-9ccf-0242ac110012 to disappear Mar 15 21:34:42.018: INFO: Pod pod-c56a9061-6704-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:34:42.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-jmvzg" for this suite. 
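[Editor's note] The emptyDir (root,0644,default) case above runs a short-lived pod that writes a 0644 file into an emptyDir mount on the default medium and verifies the mode before exiting. The real test uses a dedicated mount-test image; the sketch below is only a rough busybox stand-in with an invented pod name.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Write a file into an emptyDir mount, force mode 0644, print it back, and
	// exit 0 so the pod reaches "Succeeded" the way the log's pod does.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "echo hi > /mnt/file && chmod 0644 /mnt/file && stat -c '%a' /mnt/file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt"}},
			}},
			Volumes: []corev1.Volume{{
				Name:         "scratch",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}, // default medium
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}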
Mar 15 21:34:48.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:34:48.103: INFO: namespace: e2e-tests-emptydir-jmvzg, resource: bindings, ignored listing per whitelist Mar 15 21:34:48.108: INFO: namespace e2e-tests-emptydir-jmvzg deletion completed in 6.087054429s • [SLOW TEST:10.589 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:34:48.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-cb9e9b93-6704-11ea-9ccf-0242ac110012 STEP: Creating a pod to test consume secrets Mar 15 21:34:48.223: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cba240d9-6704-11ea-9ccf-0242ac110012" in namespace "e2e-tests-projected-xvkff" to be "success or failure" Mar 15 21:34:48.278: INFO: Pod "pod-projected-secrets-cba240d9-6704-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 55.063269ms Mar 15 21:34:50.284: INFO: Pod "pod-projected-secrets-cba240d9-6704-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0601335s Mar 15 21:34:52.287: INFO: Pod "pod-projected-secrets-cba240d9-6704-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063629774s STEP: Saw pod success Mar 15 21:34:52.287: INFO: Pod "pod-projected-secrets-cba240d9-6704-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:34:52.290: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-cba240d9-6704-11ea-9ccf-0242ac110012 container secret-volume-test: STEP: delete the pod Mar 15 21:34:52.351: INFO: Waiting for pod pod-projected-secrets-cba240d9-6704-11ea-9ccf-0242ac110012 to disappear Mar 15 21:34:52.362: INFO: Pod pod-projected-secrets-cba240d9-6704-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:34:52.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xvkff" for this suite. 
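[Editor's note] The projected-secret case above mounts the same secret through two separate projected volumes in one pod. A minimal sketch of that shape follows, assuming a secret name of "projected-secret-demo" (the real test generates a unique name, as the log shows); it only prints the object, so no cluster is needed.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// One secret projected through two volumes in the same pod, the shape the
	// "consumable in multiple volumes in a pod" test exercises.
	secretVol := func(name string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						Secret: &corev1.SecretProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-demo"}, // hypothetical secret
						},
					}},
				},
			},
		}
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
			Volumes: []corev1.Volume{secretVol("secret-volume-1"), secretVol("secret-volume-2")},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}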
Mar 15 21:34:58.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:34:58.455: INFO: namespace: e2e-tests-projected-xvkff, resource: bindings, ignored listing per whitelist Mar 15 21:34:58.464: INFO: namespace e2e-tests-projected-xvkff deletion completed in 6.098917812s • [SLOW TEST:10.356 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:34:58.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Mar 15 21:35:02.856: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-d1de5fe3-6704-11ea-9ccf-0242ac110012", GenerateName:"", Namespace:"e2e-tests-pods-whlx2", SelfLink:"/api/v1/namespaces/e2e-tests-pods-whlx2/pods/pod-submit-remove-d1de5fe3-6704-11ea-9ccf-0242ac110012", UID:"d1e54903-6704-11ea-99e8-0242ac110002", ResourceVersion:"30954", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63719904898, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"676332756"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-wchzc", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00127bec0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), 
Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-wchzc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001855b38), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0021de720), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001855bc0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001855be0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001855be8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001855bec)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719904898, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719904902, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719904902, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719904898, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", 
NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"10.244.2.137", StartTime:(*v1.Time)(0xc000d4c280), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000d4c2a0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://631dc94ff36115e2b232610d7b2380f712e80fa3d29a79edb0138a4545777287"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:35:11.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-whlx2" for this suite. Mar 15 21:35:17.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:35:17.829: INFO: namespace: e2e-tests-pods-whlx2, resource: bindings, ignored listing per whitelist Mar 15 21:35:17.861: INFO: namespace e2e-tests-pods-whlx2 deletion completed in 6.0759727s • [SLOW TEST:19.397 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:35:17.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-dd7a3d7a-6704-11ea-9ccf-0242ac110012 STEP: Creating a pod to test consume configMaps Mar 15 21:35:18.171: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dd7bd5a5-6704-11ea-9ccf-0242ac110012" in namespace "e2e-tests-projected-2gzg7" to be "success or failure" Mar 15 21:35:18.297: INFO: Pod "pod-projected-configmaps-dd7bd5a5-6704-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 126.107908ms Mar 15 21:35:20.374: INFO: Pod "pod-projected-configmaps-dd7bd5a5-6704-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.203638951s Mar 15 21:35:22.378: INFO: Pod "pod-projected-configmaps-dd7bd5a5-6704-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.207073938s STEP: Saw pod success Mar 15 21:35:22.378: INFO: Pod "pod-projected-configmaps-dd7bd5a5-6704-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:35:22.380: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-dd7bd5a5-6704-11ea-9ccf-0242ac110012 container projected-configmap-volume-test: STEP: delete the pod Mar 15 21:35:22.525: INFO: Waiting for pod pod-projected-configmaps-dd7bd5a5-6704-11ea-9ccf-0242ac110012 to disappear Mar 15 21:35:22.557: INFO: Pod pod-projected-configmaps-dd7bd5a5-6704-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:35:22.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2gzg7" for this suite. Mar 15 21:35:28.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:35:28.588: INFO: namespace: e2e-tests-projected-2gzg7, resource: bindings, ignored listing per whitelist Mar 15 21:35:28.639: INFO: namespace e2e-tests-projected-2gzg7 deletion completed in 6.078669716s • [SLOW TEST:10.777 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:35:28.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 15 21:35:28.819: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e3d17485-6704-11ea-9ccf-0242ac110012" in namespace "e2e-tests-downward-api-sbcck" to be "success or failure" Mar 15 21:35:28.877: INFO: Pod "downwardapi-volume-e3d17485-6704-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 57.85917ms Mar 15 21:35:30.882: INFO: Pod "downwardapi-volume-e3d17485-6704-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06218921s Mar 15 21:35:32.885: INFO: Pod "downwardapi-volume-e3d17485-6704-11ea-9ccf-0242ac110012": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.065695907s Mar 15 21:35:34.888: INFO: Pod "downwardapi-volume-e3d17485-6704-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.068497467s STEP: Saw pod success Mar 15 21:35:34.888: INFO: Pod "downwardapi-volume-e3d17485-6704-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:35:34.890: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-e3d17485-6704-11ea-9ccf-0242ac110012 container client-container: STEP: delete the pod Mar 15 21:35:35.023: INFO: Waiting for pod downwardapi-volume-e3d17485-6704-11ea-9ccf-0242ac110012 to disappear Mar 15 21:35:35.052: INFO: Pod downwardapi-volume-e3d17485-6704-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:35:35.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-sbcck" for this suite. Mar 15 21:35:41.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:35:41.202: INFO: namespace: e2e-tests-downward-api-sbcck, resource: bindings, ignored listing per whitelist Mar 15 21:35:41.202: INFO: namespace e2e-tests-downward-api-sbcck deletion completed in 6.145236641s • [SLOW TEST:12.563 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:35:41.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-eb529519-6704-11ea-9ccf-0242ac110012 STEP: Creating a pod to test consume configMaps Mar 15 21:35:41.410: INFO: Waiting up to 5m0s for pod "pod-configmaps-eb550620-6704-11ea-9ccf-0242ac110012" in namespace "e2e-tests-configmap-z4dgb" to be "success or failure" Mar 15 21:35:41.426: INFO: Pod "pod-configmaps-eb550620-6704-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 16.024228ms Mar 15 21:35:43.489: INFO: Pod "pod-configmaps-eb550620-6704-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078661948s Mar 15 21:35:45.498: INFO: Pod "pod-configmaps-eb550620-6704-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087919189s Mar 15 21:35:47.501: INFO: Pod "pod-configmaps-eb550620-6704-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.091079098s STEP: Saw pod success Mar 15 21:35:47.501: INFO: Pod "pod-configmaps-eb550620-6704-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:35:47.503: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-eb550620-6704-11ea-9ccf-0242ac110012 container configmap-volume-test: STEP: delete the pod Mar 15 21:35:47.607: INFO: Waiting for pod pod-configmaps-eb550620-6704-11ea-9ccf-0242ac110012 to disappear Mar 15 21:35:47.654: INFO: Pod pod-configmaps-eb550620-6704-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:35:47.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-z4dgb" for this suite. Mar 15 21:35:53.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:35:53.718: INFO: namespace: e2e-tests-configmap-z4dgb, resource: bindings, ignored listing per whitelist Mar 15 21:35:53.754: INFO: namespace e2e-tests-configmap-z4dgb deletion completed in 6.096104688s • [SLOW TEST:12.552 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:35:53.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-lz62 STEP: Creating a pod to test atomic-volume-subpath Mar 15 21:35:53.860: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-lz62" in namespace "e2e-tests-subpath-s8psj" to be "success or failure" Mar 15 21:35:53.863: INFO: Pod "pod-subpath-test-secret-lz62": Phase="Pending", Reason="", readiness=false. Elapsed: 3.692144ms Mar 15 21:35:55.944: INFO: Pod "pod-subpath-test-secret-lz62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084097057s Mar 15 21:35:58.046: INFO: Pod "pod-subpath-test-secret-lz62": Phase="Pending", Reason="", readiness=false. Elapsed: 4.186327012s Mar 15 21:36:00.049: INFO: Pod "pod-subpath-test-secret-lz62": Phase="Pending", Reason="", readiness=false. Elapsed: 6.189886951s Mar 15 21:36:02.053: INFO: Pod "pod-subpath-test-secret-lz62": Phase="Running", Reason="", readiness=true. Elapsed: 8.19360346s Mar 15 21:36:04.057: INFO: Pod "pod-subpath-test-secret-lz62": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.197356064s Mar 15 21:36:06.064: INFO: Pod "pod-subpath-test-secret-lz62": Phase="Running", Reason="", readiness=false. Elapsed: 12.204280617s Mar 15 21:36:08.274: INFO: Pod "pod-subpath-test-secret-lz62": Phase="Running", Reason="", readiness=false. Elapsed: 14.414125631s Mar 15 21:36:10.277: INFO: Pod "pod-subpath-test-secret-lz62": Phase="Running", Reason="", readiness=false. Elapsed: 16.417891275s Mar 15 21:36:12.281: INFO: Pod "pod-subpath-test-secret-lz62": Phase="Running", Reason="", readiness=false. Elapsed: 18.421888672s Mar 15 21:36:14.286: INFO: Pod "pod-subpath-test-secret-lz62": Phase="Running", Reason="", readiness=false. Elapsed: 20.425988967s Mar 15 21:36:16.304: INFO: Pod "pod-subpath-test-secret-lz62": Phase="Running", Reason="", readiness=false. Elapsed: 22.444027091s Mar 15 21:36:18.308: INFO: Pod "pod-subpath-test-secret-lz62": Phase="Running", Reason="", readiness=false. Elapsed: 24.448174354s Mar 15 21:36:20.312: INFO: Pod "pod-subpath-test-secret-lz62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.452221807s STEP: Saw pod success Mar 15 21:36:20.312: INFO: Pod "pod-subpath-test-secret-lz62" satisfied condition "success or failure" Mar 15 21:36:20.315: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-secret-lz62 container test-container-subpath-secret-lz62: STEP: delete the pod Mar 15 21:36:20.347: INFO: Waiting for pod pod-subpath-test-secret-lz62 to disappear Mar 15 21:36:20.355: INFO: Pod pod-subpath-test-secret-lz62 no longer exists STEP: Deleting pod pod-subpath-test-secret-lz62 Mar 15 21:36:20.355: INFO: Deleting pod "pod-subpath-test-secret-lz62" in namespace "e2e-tests-subpath-s8psj" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:36:20.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-s8psj" for this suite. 
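[Editor's note] The subpath test above consumes a secret volume through a subPath, i.e. a single key of the atomically-written volume is mounted at a fixed file path. Below is a minimal sketch of that mount shape; the pod name, secret name, and key are assumptions, not the generated names in the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A secret volume consumed through a subPath: only the "data-1" key is
	// mounted at /probe/data-1, which is the atomic-writer case the subpath
	// conformance test covers for secrets.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-secret-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /probe/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-vol",
					MountPath: "/probe/data-1",
					SubPath:   "data-1", // mount just this key of the secret
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-vol",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"}, // hypothetical secret
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}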
Mar 15 21:36:26.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:36:26.379: INFO: namespace: e2e-tests-subpath-s8psj, resource: bindings, ignored listing per whitelist Mar 15 21:36:26.477: INFO: namespace e2e-tests-subpath-s8psj deletion completed in 6.11582143s • [SLOW TEST:32.723 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:36:26.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Mar 15 21:36:33.093: INFO: Successfully updated pod "labelsupdate063dc405-6705-11ea-9ccf-0242ac110012" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:36:35.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2ftlw" for this suite. 
Mar 15 21:36:57.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:36:57.494: INFO: namespace: e2e-tests-projected-2ftlw, resource: bindings, ignored listing per whitelist Mar 15 21:36:57.517: INFO: namespace e2e-tests-projected-2ftlw deletion completed in 22.133956904s • [SLOW TEST:31.040 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:36:57.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-fdb8m Mar 15 21:37:01.724: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-fdb8m STEP: checking the pod's current state and verifying that restartCount is present Mar 15 21:37:01.727: INFO: Initial restart count of pod liveness-http is 0 Mar 15 21:37:21.767: INFO: Restart count of pod e2e-tests-container-probe-fdb8m/liveness-http is now 1 (20.040681791s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:37:21.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-fdb8m" for this suite. 
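[Editor's note] The liveness-probe test above starts a pod whose /healthz endpoint begins failing after a short time, so the kubelet restarts the container and restartCount goes from 0 to 1, as logged roughly 20s in. The sketch below shows an HTTP liveness probe against the v1.13-era API used in this run (corev1.Handler; newer releases call it ProbeHandler). The image shown is the long-standing upstream docs example that deliberately fails /healthz after ~10s; it is not necessarily the exact image this e2e run used.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// HTTP liveness probe on /healthz: once the endpoint starts returning
	// errors, the kubelet kills and restarts the container.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "k8s.gcr.io/liveness", // docs example image that fails /healthz after ~10s
				Args:  []string{"/server"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}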
Mar 15 21:37:27.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:37:27.846: INFO: namespace: e2e-tests-container-probe-fdb8m, resource: bindings, ignored listing per whitelist Mar 15 21:37:27.866: INFO: namespace e2e-tests-container-probe-fdb8m deletion completed in 6.080021296s • [SLOW TEST:30.349 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:37:27.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-vmfw4/configmap-test-2ae18abc-6705-11ea-9ccf-0242ac110012 STEP: Creating a pod to test consume configMaps Mar 15 21:37:28.045: INFO: Waiting up to 5m0s for pod "pod-configmaps-2ae32e1d-6705-11ea-9ccf-0242ac110012" in namespace "e2e-tests-configmap-vmfw4" to be "success or failure" Mar 15 21:37:28.068: INFO: Pod "pod-configmaps-2ae32e1d-6705-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 22.203307ms Mar 15 21:37:30.072: INFO: Pod "pod-configmaps-2ae32e1d-6705-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026308968s Mar 15 21:37:32.075: INFO: Pod "pod-configmaps-2ae32e1d-6705-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029898963s STEP: Saw pod success Mar 15 21:37:32.075: INFO: Pod "pod-configmaps-2ae32e1d-6705-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:37:32.078: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-2ae32e1d-6705-11ea-9ccf-0242ac110012 container env-test: STEP: delete the pod Mar 15 21:37:32.110: INFO: Waiting for pod pod-configmaps-2ae32e1d-6705-11ea-9ccf-0242ac110012 to disappear Mar 15 21:37:32.114: INFO: Pod pod-configmaps-2ae32e1d-6705-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:37:32.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-vmfw4" for this suite. 
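[Editor's note] The ConfigMap environment test above injects a ConfigMap key into a container as an environment variable and checks the container's output. A minimal sketch of that env wiring follows; the ConfigMap name, key, and variable name are assumptions standing in for the generated names in the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// One environment variable sourced from a ConfigMap key; the container just
	// echoes it and exits, mirroring the env-test container's role above.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-env-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo $CONFIG_DATA_1"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-demo"}, // hypothetical ConfigMap
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}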
Mar 15 21:37:38.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:37:38.142: INFO: namespace: e2e-tests-configmap-vmfw4, resource: bindings, ignored listing per whitelist Mar 15 21:37:38.206: INFO: namespace e2e-tests-configmap-vmfw4 deletion completed in 6.088576175s • [SLOW TEST:10.339 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:37:38.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy Mar 15 21:37:38.375: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix965785027/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:37:38.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-bzjz6" for this suite. 
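[Editor's note] The proxy test above starts kubectl proxy on a unix socket and then fetches /api/ through it. The sketch below shows the client side only: an HTTP client that sends every request down a unix socket. It assumes a proxy is already running with --unix-socket at the hypothetical path /tmp/kubectl-proxy.sock; the host in the URL is ignored because the custom dialer always connects to the socket.

package main

import (
	"context"
	"fmt"
	"io/ioutil"
	"log"
	"net"
	"net/http"
)

func main() {
	// Route all HTTP requests over a unix domain socket instead of TCP.
	sock := "/tmp/kubectl-proxy.sock" // hypothetical --unix-socket path
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
				return net.Dial("unix", sock)
			},
		},
	}
	resp, err := client.Get("http://unix/api/") // host is a placeholder
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Printf("%s\n%s\n", resp.Status, body)
}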
Mar 15 21:37:44.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:37:44.543: INFO: namespace: e2e-tests-kubectl-bzjz6, resource: bindings, ignored listing per whitelist Mar 15 21:37:44.585: INFO: namespace e2e-tests-kubectl-bzjz6 deletion completed in 6.088233901s • [SLOW TEST:6.380 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:37:44.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-bcs7r Mar 15 21:37:48.743: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-bcs7r STEP: checking the pod's current state and verifying that restartCount is present Mar 15 21:37:48.746: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:41:50.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-bcs7r" for this suite. 
Mar 15 21:41:56.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:41:56.340: INFO: namespace: e2e-tests-container-probe-bcs7r, resource: bindings, ignored listing per whitelist Mar 15 21:41:56.353: INFO: namespace e2e-tests-container-probe-bcs7r deletion completed in 6.101159225s • [SLOW TEST:251.768 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:41:56.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-4hk6l STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 15 21:41:56.475: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 15 21:42:27.193: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.145:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-4hk6l PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 21:42:27.193: INFO: >>> kubeConfig: /root/.kube/config I0315 21:42:27.228676 6 log.go:172] (0xc000a0bad0) (0xc0020e7ea0) Create stream I0315 21:42:27.228724 6 log.go:172] (0xc000a0bad0) (0xc0020e7ea0) Stream added, broadcasting: 1 I0315 21:42:27.231401 6 log.go:172] (0xc000a0bad0) Reply frame received for 1 I0315 21:42:27.231453 6 log.go:172] (0xc000a0bad0) (0xc001a75ea0) Create stream I0315 21:42:27.231478 6 log.go:172] (0xc000a0bad0) (0xc001a75ea0) Stream added, broadcasting: 3 I0315 21:42:27.235418 6 log.go:172] (0xc000a0bad0) Reply frame received for 3 I0315 21:42:27.235463 6 log.go:172] (0xc000a0bad0) (0xc002a7e0a0) Create stream I0315 21:42:27.235480 6 log.go:172] (0xc000a0bad0) (0xc002a7e0a0) Stream added, broadcasting: 5 I0315 21:42:27.236355 6 log.go:172] (0xc000a0bad0) Reply frame received for 5 I0315 21:42:27.296062 6 log.go:172] (0xc000a0bad0) Data frame received for 3 I0315 21:42:27.296098 6 log.go:172] (0xc001a75ea0) (3) Data frame handling I0315 21:42:27.296127 6 log.go:172] (0xc001a75ea0) (3) Data frame sent I0315 21:42:27.296557 6 log.go:172] (0xc000a0bad0) Data frame received for 3 I0315 21:42:27.296589 6 log.go:172] (0xc001a75ea0) (3) Data frame handling I0315 21:42:27.298407 6 log.go:172] (0xc000a0bad0) Data frame received for 5 I0315 21:42:27.298427 6 log.go:172] (0xc002a7e0a0) (5) Data frame handling I0315 21:42:27.299538 6 log.go:172] (0xc000a0bad0) Data 
frame received for 1 I0315 21:42:27.299554 6 log.go:172] (0xc0020e7ea0) (1) Data frame handling I0315 21:42:27.299562 6 log.go:172] (0xc0020e7ea0) (1) Data frame sent I0315 21:42:27.299576 6 log.go:172] (0xc000a0bad0) (0xc0020e7ea0) Stream removed, broadcasting: 1 I0315 21:42:27.299658 6 log.go:172] (0xc000a0bad0) Go away received I0315 21:42:27.299709 6 log.go:172] (0xc000a0bad0) (0xc0020e7ea0) Stream removed, broadcasting: 1 I0315 21:42:27.299755 6 log.go:172] (0xc000a0bad0) (0xc001a75ea0) Stream removed, broadcasting: 3 I0315 21:42:27.299783 6 log.go:172] (0xc000a0bad0) (0xc002a7e0a0) Stream removed, broadcasting: 5 Mar 15 21:42:27.299: INFO: Found all expected endpoints: [netserver-0] Mar 15 21:42:27.303: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.142:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-4hk6l PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 21:42:27.303: INFO: >>> kubeConfig: /root/.kube/config I0315 21:42:27.331255 6 log.go:172] (0xc000331c30) (0xc00099c0a0) Create stream I0315 21:42:27.331283 6 log.go:172] (0xc000331c30) (0xc00099c0a0) Stream added, broadcasting: 1 I0315 21:42:27.333060 6 log.go:172] (0xc000331c30) Reply frame received for 1 I0315 21:42:27.333105 6 log.go:172] (0xc000331c30) (0xc00099c140) Create stream I0315 21:42:27.333322 6 log.go:172] (0xc000331c30) (0xc00099c140) Stream added, broadcasting: 3 I0315 21:42:27.334367 6 log.go:172] (0xc000331c30) Reply frame received for 3 I0315 21:42:27.334413 6 log.go:172] (0xc000331c30) (0xc001f17400) Create stream I0315 21:42:27.334428 6 log.go:172] (0xc000331c30) (0xc001f17400) Stream added, broadcasting: 5 I0315 21:42:27.335526 6 log.go:172] (0xc000331c30) Reply frame received for 5 I0315 21:42:27.393669 6 log.go:172] (0xc000331c30) Data frame received for 5 I0315 21:42:27.393703 6 log.go:172] (0xc001f17400) (5) Data frame handling I0315 21:42:27.393724 6 log.go:172] (0xc000331c30) Data frame received for 3 I0315 21:42:27.393734 6 log.go:172] (0xc00099c140) (3) Data frame handling I0315 21:42:27.393745 6 log.go:172] (0xc00099c140) (3) Data frame sent I0315 21:42:27.393767 6 log.go:172] (0xc000331c30) Data frame received for 3 I0315 21:42:27.393783 6 log.go:172] (0xc00099c140) (3) Data frame handling I0315 21:42:27.395216 6 log.go:172] (0xc000331c30) Data frame received for 1 I0315 21:42:27.395243 6 log.go:172] (0xc00099c0a0) (1) Data frame handling I0315 21:42:27.395262 6 log.go:172] (0xc00099c0a0) (1) Data frame sent I0315 21:42:27.395279 6 log.go:172] (0xc000331c30) (0xc00099c0a0) Stream removed, broadcasting: 1 I0315 21:42:27.395297 6 log.go:172] (0xc000331c30) Go away received I0315 21:42:27.395385 6 log.go:172] (0xc000331c30) (0xc00099c0a0) Stream removed, broadcasting: 1 I0315 21:42:27.395406 6 log.go:172] (0xc000331c30) (0xc00099c140) Stream removed, broadcasting: 3 I0315 21:42:27.395421 6 log.go:172] (0xc000331c30) (0xc001f17400) Stream removed, broadcasting: 5 Mar 15 21:42:27.395: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:42:27.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-4hk6l" for this suite. 
Mar 15 21:42:51.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:42:51.492: INFO: namespace: e2e-tests-pod-network-test-4hk6l, resource: bindings, ignored listing per whitelist Mar 15 21:42:51.569: INFO: namespace e2e-tests-pod-network-test-4hk6l deletion completed in 24.17106245s • [SLOW TEST:55.216 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:42:51.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-ebcbf40d-6705-11ea-9ccf-0242ac110012 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-ebcbf40d-6705-11ea-9ccf-0242ac110012 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:44:14.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-6c5lm" for this suite. 
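[Editor's note] The ConfigMap volume-update test above edits a ConfigMap that a running pod mounts and then waits for the kubelet to rewrite the projected file ("waiting to observe update in volume"). The sketch below shows the update half of that flow with client-go, using the older (pre-1.18) call style without context arguments to match the v1.13 cluster in this run; the namespace, ConfigMap name, and key are assumptions.

package main

import (
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig path the e2e run uses and build a clientset.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	// Fetch, mutate, and update the ConfigMap; the kubelet syncs the mounted
	// files on its next volume sync, which is the delay the test waits out.
	cm, err := clientset.CoreV1().ConfigMaps("default").Get("configmap-test-upd-demo", metav1.GetOptions{}) // hypothetical name
	if err != nil {
		log.Fatal(err)
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["data-1"] = "value-2" // flip the value the pod's volume file should pick up
	if _, err := clientset.CoreV1().ConfigMaps("default").Update(cm); err != nil {
		log.Fatal(err)
	}
	log.Println("configmap updated; kubelet will refresh the volume within its sync period")
}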
Mar 15 21:44:36.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:44:36.584: INFO: namespace: e2e-tests-configmap-6c5lm, resource: bindings, ignored listing per whitelist Mar 15 21:44:36.633: INFO: namespace e2e-tests-configmap-6c5lm deletion completed in 22.194704479s • [SLOW TEST:105.063 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:44:36.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 15 21:44:37.199: INFO: Pod name wrapped-volume-race-2aab02e5-6706-11ea-9ccf-0242ac110012: Found 0 pods out of 5 Mar 15 21:44:42.207: INFO: Pod name wrapped-volume-race-2aab02e5-6706-11ea-9ccf-0242ac110012: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-2aab02e5-6706-11ea-9ccf-0242ac110012 in namespace e2e-tests-emptydir-wrapper-tcscr, will wait for the garbage collector to delete the pods Mar 15 21:46:34.871: INFO: Deleting ReplicationController wrapped-volume-race-2aab02e5-6706-11ea-9ccf-0242ac110012 took: 14.728658ms Mar 15 21:46:35.271: INFO: Terminating ReplicationController wrapped-volume-race-2aab02e5-6706-11ea-9ccf-0242ac110012 pods took: 400.26686ms STEP: Creating RC which spawns configmap-volume pods Mar 15 21:47:21.502: INFO: Pod name wrapped-volume-race-8c9bfed6-6706-11ea-9ccf-0242ac110012: Found 0 pods out of 5 Mar 15 21:47:26.510: INFO: Pod name wrapped-volume-race-8c9bfed6-6706-11ea-9ccf-0242ac110012: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-8c9bfed6-6706-11ea-9ccf-0242ac110012 in namespace e2e-tests-emptydir-wrapper-tcscr, will wait for the garbage collector to delete the pods Mar 15 21:49:32.591: INFO: Deleting ReplicationController wrapped-volume-race-8c9bfed6-6706-11ea-9ccf-0242ac110012 took: 7.730685ms Mar 15 21:49:32.691: INFO: Terminating ReplicationController wrapped-volume-race-8c9bfed6-6706-11ea-9ccf-0242ac110012 pods took: 100.235439ms STEP: Creating RC which spawns configmap-volume pods Mar 15 21:50:11.425: INFO: Pod name wrapped-volume-race-f1e3b61a-6706-11ea-9ccf-0242ac110012: Found 0 pods out of 5 Mar 15 21:50:16.433: INFO: Pod name wrapped-volume-race-f1e3b61a-6706-11ea-9ccf-0242ac110012: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f1e3b61a-6706-11ea-9ccf-0242ac110012 in 
namespace e2e-tests-emptydir-wrapper-tcscr, will wait for the garbage collector to delete the pods Mar 15 21:52:10.516: INFO: Deleting ReplicationController wrapped-volume-race-f1e3b61a-6706-11ea-9ccf-0242ac110012 took: 7.337423ms Mar 15 21:52:10.616: INFO: Terminating ReplicationController wrapped-volume-race-f1e3b61a-6706-11ea-9ccf-0242ac110012 pods took: 100.257068ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:52:55.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-tcscr" for this suite. Mar 15 21:53:05.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:53:05.363: INFO: namespace: e2e-tests-emptydir-wrapper-tcscr, resource: bindings, ignored listing per whitelist Mar 15 21:53:05.455: INFO: namespace e2e-tests-emptydir-wrapper-tcscr deletion completed in 10.131032386s • [SLOW TEST:508.822 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:53:05.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0315 21:53:45.728159 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 15 21:53:45.728: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:53:45.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-bkdgb" for this suite. Mar 15 21:53:57.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:53:57.920: INFO: namespace: e2e-tests-gc-bkdgb, resource: bindings, ignored listing per whitelist Mar 15 21:53:58.005: INFO: namespace e2e-tests-gc-bkdgb deletion completed in 12.274864632s • [SLOW TEST:52.550 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:53:58.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-790645fb-6707-11ea-9ccf-0242ac110012 STEP: Creating a pod to test consume configMaps Mar 15 21:53:58.169: INFO: Waiting up to 5m0s for pod "pod-configmaps-7907fd08-6707-11ea-9ccf-0242ac110012" in namespace "e2e-tests-configmap-79vvc" to be "success or failure" Mar 15 21:53:58.173: INFO: Pod "pod-configmaps-7907fd08-6707-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 3.961068ms Mar 15 21:54:00.177: INFO: Pod "pod-configmaps-7907fd08-6707-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007923584s Mar 15 21:54:02.183: INFO: Pod "pod-configmaps-7907fd08-6707-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014575471s STEP: Saw pod success Mar 15 21:54:02.184: INFO: Pod "pod-configmaps-7907fd08-6707-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:54:02.185: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-7907fd08-6707-11ea-9ccf-0242ac110012 container configmap-volume-test: STEP: delete the pod Mar 15 21:54:02.220: INFO: Waiting for pod pod-configmaps-7907fd08-6707-11ea-9ccf-0242ac110012 to disappear Mar 15 21:54:02.354: INFO: Pod pod-configmaps-7907fd08-6707-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:54:02.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-79vvc" for this suite. Mar 15 21:54:08.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:54:08.427: INFO: namespace: e2e-tests-configmap-79vvc, resource: bindings, ignored listing per whitelist Mar 15 21:54:08.498: INFO: namespace e2e-tests-configmap-79vvc deletion completed in 6.141159149s • [SLOW TEST:10.492 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:54:08.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 15 21:54:08.719: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7f52a124-6707-11ea-9ccf-0242ac110012" in namespace "e2e-tests-downward-api-v94kj" to be "success or failure" Mar 15 21:54:08.729: INFO: Pod "downwardapi-volume-7f52a124-6707-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 9.736528ms Mar 15 21:54:10.732: INFO: Pod "downwardapi-volume-7f52a124-6707-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013035472s Mar 15 21:54:12.736: INFO: Pod "downwardapi-volume-7f52a124-6707-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.016728068s Mar 15 21:54:14.888: INFO: Pod "downwardapi-volume-7f52a124-6707-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.168787007s STEP: Saw pod success Mar 15 21:54:14.888: INFO: Pod "downwardapi-volume-7f52a124-6707-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:54:14.891: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-7f52a124-6707-11ea-9ccf-0242ac110012 container client-container: STEP: delete the pod Mar 15 21:54:15.079: INFO: Waiting for pod downwardapi-volume-7f52a124-6707-11ea-9ccf-0242ac110012 to disappear Mar 15 21:54:15.091: INFO: Pod downwardapi-volume-7f52a124-6707-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:54:15.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-v94kj" for this suite. Mar 15 21:54:21.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:54:21.166: INFO: namespace: e2e-tests-downward-api-v94kj, resource: bindings, ignored listing per whitelist Mar 15 21:54:21.184: INFO: namespace e2e-tests-downward-api-v94kj deletion completed in 6.090023099s • [SLOW TEST:12.686 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:54:21.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 15 21:54:21.272: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-npbs5,SelfLink:/api/v1/namespaces/e2e-tests-watch-npbs5/configmaps/e2e-watch-test-configmap-a,UID:86d35f9e-6707-11ea-99e8-0242ac110002,ResourceVersion:34169,Generation:0,CreationTimestamp:2020-03-15 21:54:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 15 21:54:21.272: INFO: 
Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-npbs5,SelfLink:/api/v1/namespaces/e2e-tests-watch-npbs5/configmaps/e2e-watch-test-configmap-a,UID:86d35f9e-6707-11ea-99e8-0242ac110002,ResourceVersion:34169,Generation:0,CreationTimestamp:2020-03-15 21:54:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 15 21:54:31.279: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-npbs5,SelfLink:/api/v1/namespaces/e2e-tests-watch-npbs5/configmaps/e2e-watch-test-configmap-a,UID:86d35f9e-6707-11ea-99e8-0242ac110002,ResourceVersion:34189,Generation:0,CreationTimestamp:2020-03-15 21:54:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 15 21:54:31.280: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-npbs5,SelfLink:/api/v1/namespaces/e2e-tests-watch-npbs5/configmaps/e2e-watch-test-configmap-a,UID:86d35f9e-6707-11ea-99e8-0242ac110002,ResourceVersion:34189,Generation:0,CreationTimestamp:2020-03-15 21:54:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 15 21:54:41.287: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-npbs5,SelfLink:/api/v1/namespaces/e2e-tests-watch-npbs5/configmaps/e2e-watch-test-configmap-a,UID:86d35f9e-6707-11ea-99e8-0242ac110002,ResourceVersion:34209,Generation:0,CreationTimestamp:2020-03-15 21:54:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 15 21:54:41.287: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-npbs5,SelfLink:/api/v1/namespaces/e2e-tests-watch-npbs5/configmaps/e2e-watch-test-configmap-a,UID:86d35f9e-6707-11ea-99e8-0242ac110002,ResourceVersion:34209,Generation:0,CreationTimestamp:2020-03-15 21:54:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 15 21:54:51.294: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-npbs5,SelfLink:/api/v1/namespaces/e2e-tests-watch-npbs5/configmaps/e2e-watch-test-configmap-a,UID:86d35f9e-6707-11ea-99e8-0242ac110002,ResourceVersion:34229,Generation:0,CreationTimestamp:2020-03-15 21:54:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 15 21:54:51.294: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-npbs5,SelfLink:/api/v1/namespaces/e2e-tests-watch-npbs5/configmaps/e2e-watch-test-configmap-a,UID:86d35f9e-6707-11ea-99e8-0242ac110002,ResourceVersion:34229,Generation:0,CreationTimestamp:2020-03-15 21:54:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 15 21:55:01.301: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-npbs5,SelfLink:/api/v1/namespaces/e2e-tests-watch-npbs5/configmaps/e2e-watch-test-configmap-b,UID:9eaf9f16-6707-11ea-99e8-0242ac110002,ResourceVersion:34249,Generation:0,CreationTimestamp:2020-03-15 21:55:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 15 21:55:01.302: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-npbs5,SelfLink:/api/v1/namespaces/e2e-tests-watch-npbs5/configmaps/e2e-watch-test-configmap-b,UID:9eaf9f16-6707-11ea-99e8-0242ac110002,ResourceVersion:34249,Generation:0,CreationTimestamp:2020-03-15 21:55:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 15 21:55:11.308: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-npbs5,SelfLink:/api/v1/namespaces/e2e-tests-watch-npbs5/configmaps/e2e-watch-test-configmap-b,UID:9eaf9f16-6707-11ea-99e8-0242ac110002,ResourceVersion:34269,Generation:0,CreationTimestamp:2020-03-15 21:55:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 15 21:55:11.308: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-npbs5,SelfLink:/api/v1/namespaces/e2e-tests-watch-npbs5/configmaps/e2e-watch-test-configmap-b,UID:9eaf9f16-6707-11ea-99e8-0242ac110002,ResourceVersion:34269,Generation:0,CreationTimestamp:2020-03-15 21:55:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:55:21.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-npbs5" for this suite. Mar 15 21:55:27.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:55:27.371: INFO: namespace: e2e-tests-watch-npbs5, resource: bindings, ignored listing per whitelist Mar 15 21:55:27.399: INFO: namespace e2e-tests-watch-npbs5 deletion completed in 6.085540625s • [SLOW TEST:66.214 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:55:27.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 21:55:27.495: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 15 21:55:32.500: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 15 21:55:32.500: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 15 21:55:32.531: INFO: Deployment "test-cleanup-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-68vvc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-68vvc/deployments/test-cleanup-deployment,UID:b149f0ef-6707-11ea-99e8-0242ac110002,ResourceVersion:34333,Generation:1,CreationTimestamp:2020-03-15 21:55:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Mar 15 21:55:32.580: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. 
Mar 15 21:55:32.580: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 15 21:55:32.581: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-68vvc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-68vvc/replicasets/test-cleanup-controller,UID:ae49671a-6707-11ea-99e8-0242ac110002,ResourceVersion:34334,Generation:1,CreationTimestamp:2020-03-15 21:55:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment b149f0ef-6707-11ea-99e8-0242ac110002 0xc0028337e7 0xc0028337e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 15 21:55:32.638: INFO: Pod "test-cleanup-controller-bq48j" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-bq48j,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-68vvc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-68vvc/pods/test-cleanup-controller-bq48j,UID:ae4e44b4-6707-11ea-99e8-0242ac110002,ResourceVersion:34329,Generation:0,CreationTimestamp:2020-03-15 21:55:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller ae49671a-6707-11ea-99e8-0242ac110002 0xc0018be187 0xc0018be188}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kx7pz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kx7pz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} 
[{default-token-kx7pz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018be230} {node.kubernetes.io/unreachable Exists NoExecute 0xc0018be250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:55:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:55:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:55:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:55:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.169,StartTime:2020-03-15 21:55:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-15 21:55:30 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b79fca37a45b9323f72de2d7301cbd93bf5bbba69839b82fdd3c31792272f30e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:55:32.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-68vvc" for this suite. 
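The deployment dump above shows the knob this cleanup test exercises: test-cleanup-deployment is created with RevisionHistoryLimit set to 0, so the deployment controller is expected to delete superseded ReplicaSets instead of retaining them for rollback. A minimal sketch of building such a spec with the Kubernetes API types follows; the name, labels, and image mirror the log but the snippet is illustrative, not the test's generator.

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "cleanup-pod"}
	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			// 0 keeps no old ReplicaSets once a rollout supersedes them.
			RevisionHistoryLimit: int32Ptr(0),
			Selector:             &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{
						{Name: "redis", Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0"},
					},
				},
			},
		},
	}
	fmt.Printf("revisionHistoryLimit=%d\n", *d.Spec.RevisionHistoryLimit)
}
```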
Mar 15 21:55:40.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:55:40.975: INFO: namespace: e2e-tests-deployment-68vvc, resource: bindings, ignored listing per whitelist Mar 15 21:55:41.184: INFO: namespace e2e-tests-deployment-68vvc deletion completed in 8.524638299s • [SLOW TEST:13.784 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:55:41.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command Mar 15 21:55:41.342: INFO: Waiting up to 5m0s for pod "var-expansion-b68b821c-6707-11ea-9ccf-0242ac110012" in namespace "e2e-tests-var-expansion-b56gr" to be "success or failure" Mar 15 21:55:41.360: INFO: Pod "var-expansion-b68b821c-6707-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 17.840125ms Mar 15 21:55:43.482: INFO: Pod "var-expansion-b68b821c-6707-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139760165s Mar 15 21:55:45.883: INFO: Pod "var-expansion-b68b821c-6707-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.540844275s STEP: Saw pod success Mar 15 21:55:45.883: INFO: Pod "var-expansion-b68b821c-6707-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:55:45.944: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-b68b821c-6707-11ea-9ccf-0242ac110012 container dapi-container: STEP: delete the pod Mar 15 21:55:46.130: INFO: Waiting for pod var-expansion-b68b821c-6707-11ea-9ccf-0242ac110012 to disappear Mar 15 21:55:46.147: INFO: Pod var-expansion-b68b821c-6707-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:55:46.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-b56gr" for this suite. 
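The substitution being verified above is the kubelet's $(VAR) expansion in a container's command and args, resolved from the container's own environment before the process starts. Below is a hedged sketch of a pod spec that exercises the same mechanism; the names, image, and message are illustrative and not the test's actual manifest.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox",
				Env: []corev1.EnvVar{
					{Name: "MESSAGE", Value: "hello from the environment"},
				},
				// The kubelet substitutes $(MESSAGE) from Env before starting
				// the process; no shell is involved in the expansion.
				Command: []string{"/bin/echo", "$(MESSAGE)"},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Command)
}
```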
Mar 15 21:55:52.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:55:52.276: INFO: namespace: e2e-tests-var-expansion-b56gr, resource: bindings, ignored listing per whitelist Mar 15 21:55:52.283: INFO: namespace e2e-tests-var-expansion-b56gr deletion completed in 6.131648838s • [SLOW TEST:11.099 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:55:52.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-bd25d40c-6707-11ea-9ccf-0242ac110012 STEP: Creating a pod to test consume configMaps Mar 15 21:55:52.446: INFO: Waiting up to 5m0s for pod "pod-configmaps-bd2b8d0e-6707-11ea-9ccf-0242ac110012" in namespace "e2e-tests-configmap-gpbxl" to be "success or failure" Mar 15 21:55:52.454: INFO: Pod "pod-configmaps-bd2b8d0e-6707-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 7.595503ms Mar 15 21:55:54.511: INFO: Pod "pod-configmaps-bd2b8d0e-6707-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065550396s Mar 15 21:55:56.515: INFO: Pod "pod-configmaps-bd2b8d0e-6707-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069223168s STEP: Saw pod success Mar 15 21:55:56.515: INFO: Pod "pod-configmaps-bd2b8d0e-6707-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:55:56.518: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-bd2b8d0e-6707-11ea-9ccf-0242ac110012 container configmap-volume-test: STEP: delete the pod Mar 15 21:55:56.660: INFO: Waiting for pod pod-configmaps-bd2b8d0e-6707-11ea-9ccf-0242ac110012 to disappear Mar 15 21:55:56.787: INFO: Pod pod-configmaps-bd2b8d0e-6707-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:55:56.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-gpbxl" for this suite. 
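Two ingredients are combined in the test above: a key-to-path mapping in the ConfigMap volume (the Items field) and a non-root security context. The sketch below shows just those two settings using the core API types; the ConfigMap name, key, path, and UID are illustrative rather than the generated test values.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	uid := int64(1000) // any non-root UID
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
				// Map the key "data-1" to a different file name inside the mount.
				Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
			},
		},
	}
	sc := &corev1.PodSecurityContext{RunAsUser: &uid}
	fmt.Println(vol.Name, *sc.RunAsUser)
}
```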
Mar 15 21:56:02.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:56:02.888: INFO: namespace: e2e-tests-configmap-gpbxl, resource: bindings, ignored listing per whitelist Mar 15 21:56:02.906: INFO: namespace e2e-tests-configmap-gpbxl deletion completed in 6.115232081s • [SLOW TEST:10.622 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:56:02.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 21:56:03.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Mar 15 21:56:03.249: INFO: stderr: "" Mar 15 21:56:03.249: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:56:03.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-br8mc" for this suite. 
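The check above simply runs `kubectl version` and asserts that both the client and server stanzas are present in the output. A small sketch of the same idea using JSON output is below; it assumes `kubectl version --output=json` is available on the PATH, and the field names match the version.Info struct printed in the stdout above.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type versionInfo struct {
	GitVersion string `json:"gitVersion"`
	Platform   string `json:"platform"`
}

type versionOutput struct {
	ClientVersion *versionInfo `json:"clientVersion"`
	ServerVersion *versionInfo `json:"serverVersion"`
}

func main() {
	out, err := exec.Command("kubectl", "version", "--output=json").Output()
	if err != nil {
		panic(err)
	}
	var v versionOutput
	if err := json.Unmarshal(out, &v); err != nil {
		panic(err)
	}
	if v.ClientVersion == nil || v.ServerVersion == nil {
		panic("kubectl version output is missing client or server data")
	}
	fmt.Printf("client %s, server %s\n", v.ClientVersion.GitVersion, v.ServerVersion.GitVersion)
}
```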
Mar 15 21:56:09.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:56:09.330: INFO: namespace: e2e-tests-kubectl-br8mc, resource: bindings, ignored listing per whitelist Mar 15 21:56:09.386: INFO: namespace e2e-tests-kubectl-br8mc deletion completed in 6.131995224s • [SLOW TEST:6.480 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:56:09.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-t4kmf STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 15 21:56:09.486: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 15 21:56:37.640: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.172 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-t4kmf PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 21:56:37.640: INFO: >>> kubeConfig: /root/.kube/config I0315 21:56:37.680532 6 log.go:172] (0xc000a0b600) (0xc0020e6b40) Create stream I0315 21:56:37.680561 6 log.go:172] (0xc000a0b600) (0xc0020e6b40) Stream added, broadcasting: 1 I0315 21:56:37.684960 6 log.go:172] (0xc000a0b600) Reply frame received for 1 I0315 21:56:37.685098 6 log.go:172] (0xc000a0b600) (0xc0020e6be0) Create stream I0315 21:56:37.685220 6 log.go:172] (0xc000a0b600) (0xc0020e6be0) Stream added, broadcasting: 3 I0315 21:56:37.686371 6 log.go:172] (0xc000a0b600) Reply frame received for 3 I0315 21:56:37.686400 6 log.go:172] (0xc000a0b600) (0xc000c39680) Create stream I0315 21:56:37.686410 6 log.go:172] (0xc000a0b600) (0xc000c39680) Stream added, broadcasting: 5 I0315 21:56:37.687535 6 log.go:172] (0xc000a0b600) Reply frame received for 5 I0315 21:56:38.748824 6 log.go:172] (0xc000a0b600) Data frame received for 3 I0315 21:56:38.748875 6 log.go:172] (0xc0020e6be0) (3) Data frame handling I0315 21:56:38.748969 6 log.go:172] (0xc0020e6be0) (3) Data frame sent I0315 21:56:38.749860 6 log.go:172] (0xc000a0b600) Data frame received for 3 I0315 21:56:38.749906 6 log.go:172] (0xc0020e6be0) (3) Data frame handling I0315 21:56:38.749933 6 log.go:172] (0xc000a0b600) Data frame received for 5 I0315 21:56:38.749943 6 log.go:172] (0xc000c39680) (5) Data frame handling I0315 21:56:38.751695 6 log.go:172] 
(0xc000a0b600) Data frame received for 1 I0315 21:56:38.751717 6 log.go:172] (0xc0020e6b40) (1) Data frame handling I0315 21:56:38.751724 6 log.go:172] (0xc0020e6b40) (1) Data frame sent I0315 21:56:38.751732 6 log.go:172] (0xc000a0b600) (0xc0020e6b40) Stream removed, broadcasting: 1 I0315 21:56:38.751740 6 log.go:172] (0xc000a0b600) Go away received I0315 21:56:38.751905 6 log.go:172] (0xc000a0b600) (0xc0020e6b40) Stream removed, broadcasting: 1 I0315 21:56:38.751945 6 log.go:172] (0xc000a0b600) (0xc0020e6be0) Stream removed, broadcasting: 3 I0315 21:56:38.751981 6 log.go:172] (0xc000a0b600) (0xc000c39680) Stream removed, broadcasting: 5 Mar 15 21:56:38.752: INFO: Found all expected endpoints: [netserver-0] Mar 15 21:56:38.758: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.150 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-t4kmf PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 21:56:38.758: INFO: >>> kubeConfig: /root/.kube/config I0315 21:56:38.791240 6 log.go:172] (0xc001222160) (0xc00099dea0) Create stream I0315 21:56:38.791265 6 log.go:172] (0xc001222160) (0xc00099dea0) Stream added, broadcasting: 1 I0315 21:56:38.794039 6 log.go:172] (0xc001222160) Reply frame received for 1 I0315 21:56:38.794092 6 log.go:172] (0xc001222160) (0xc0020e6c80) Create stream I0315 21:56:38.794110 6 log.go:172] (0xc001222160) (0xc0020e6c80) Stream added, broadcasting: 3 I0315 21:56:38.795070 6 log.go:172] (0xc001222160) Reply frame received for 3 I0315 21:56:38.795112 6 log.go:172] (0xc001222160) (0xc0020e6d20) Create stream I0315 21:56:38.795126 6 log.go:172] (0xc001222160) (0xc0020e6d20) Stream added, broadcasting: 5 I0315 21:56:38.796106 6 log.go:172] (0xc001222160) Reply frame received for 5 I0315 21:56:39.865348 6 log.go:172] (0xc001222160) Data frame received for 3 I0315 21:56:39.865395 6 log.go:172] (0xc0020e6c80) (3) Data frame handling I0315 21:56:39.865419 6 log.go:172] (0xc0020e6c80) (3) Data frame sent I0315 21:56:39.865441 6 log.go:172] (0xc001222160) Data frame received for 3 I0315 21:56:39.865461 6 log.go:172] (0xc0020e6c80) (3) Data frame handling I0315 21:56:39.865743 6 log.go:172] (0xc001222160) Data frame received for 5 I0315 21:56:39.865779 6 log.go:172] (0xc0020e6d20) (5) Data frame handling I0315 21:56:39.867436 6 log.go:172] (0xc001222160) Data frame received for 1 I0315 21:56:39.867468 6 log.go:172] (0xc00099dea0) (1) Data frame handling I0315 21:56:39.867511 6 log.go:172] (0xc00099dea0) (1) Data frame sent I0315 21:56:39.867539 6 log.go:172] (0xc001222160) (0xc00099dea0) Stream removed, broadcasting: 1 I0315 21:56:39.867673 6 log.go:172] (0xc001222160) (0xc00099dea0) Stream removed, broadcasting: 1 I0315 21:56:39.867692 6 log.go:172] (0xc001222160) (0xc0020e6c80) Stream removed, broadcasting: 3 I0315 21:56:39.867875 6 log.go:172] (0xc001222160) Go away received I0315 21:56:39.867938 6 log.go:172] (0xc001222160) (0xc0020e6d20) Stream removed, broadcasting: 5 Mar 15 21:56:39.867: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:56:39.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-t4kmf" for this suite. 
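Here the probe is UDP rather than HTTP: the test pipes the literal string hostName through `nc -u` to each netserver pod on port 8081 and expects the pod's name back. A plain-Go equivalent of that probe is sketched below; the pod IP is a placeholder, not a value from this run.

```go
package main

import (
	"fmt"
	"net"
	"strings"
	"time"
)

// udpHostName sends "hostName" to a netserver pod on 8081 and returns the reply,
// mirroring the `echo 'hostName' | nc -w 1 -u <podIP> 8081` exec in the log above.
func udpHostName(podIP string) (string, error) {
	conn, err := net.DialTimeout("udp", net.JoinHostPort(podIP, "8081"), time.Second)
	if err != nil {
		return "", err
	}
	defer conn.Close()
	if err := conn.SetDeadline(time.Now().Add(2 * time.Second)); err != nil {
		return "", err
	}
	if _, err := conn.Write([]byte("hostName\n")); err != nil {
		return "", err
	}
	buf := make([]byte, 256)
	n, err := conn.Read(buf)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(buf[:n])), nil
}

func main() {
	name, err := udpHostName("10.244.1.172") // placeholder pod IP
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("endpoint:", name)
}
```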
Mar 15 21:57:01.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:57:02.023: INFO: namespace: e2e-tests-pod-network-test-t4kmf, resource: bindings, ignored listing per whitelist Mar 15 21:57:02.025: INFO: namespace e2e-tests-pod-network-test-t4kmf deletion completed in 22.152730511s • [SLOW TEST:52.639 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:57:02.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 21:57:02.228: INFO: Creating deployment "test-recreate-deployment" Mar 15 21:57:02.291: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 15 21:57:02.296: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Mar 15 21:57:04.302: INFO: Waiting deployment "test-recreate-deployment" to complete Mar 15 21:57:04.304: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719906222, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719906222, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719906222, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719906222, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 15 21:57:06.307: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 15 21:57:06.313: INFO: Updating deployment test-recreate-deployment Mar 15 21:57:06.313: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 15 21:57:06.836: INFO: Deployment "test-recreate-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-52nmq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-52nmq/deployments/test-recreate-deployment,UID:e6c4b192-6707-11ea-99e8-0242ac110002,ResourceVersion:34725,Generation:2,CreationTimestamp:2020-03-15 21:57:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-03-15 21:57:06 +0000 UTC 2020-03-15 21:57:06 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-03-15 21:57:06 +0000 UTC 2020-03-15 21:57:02 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Mar 15 21:57:06.906: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-52nmq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-52nmq/replicasets/test-recreate-deployment-589c4bfd,UID:e949aa20-6707-11ea-99e8-0242ac110002,ResourceVersion:34723,Generation:1,CreationTimestamp:2020-03-15 21:57:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e6c4b192-6707-11ea-99e8-0242ac110002 0xc0018be0ff 0xc0018be110}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 15 21:57:06.907: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 15 21:57:06.907: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-52nmq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-52nmq/replicasets/test-recreate-deployment-5bf7f65dc,UID:e6ceefc2-6707-11ea-99e8-0242ac110002,ResourceVersion:34714,Generation:2,CreationTimestamp:2020-03-15 21:57:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e6c4b192-6707-11ea-99e8-0242ac110002 0xc0018be200 0xc0018be201}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 15 21:57:06.911: INFO: Pod "test-recreate-deployment-589c4bfd-scqvk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-scqvk,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-52nmq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-52nmq/pods/test-recreate-deployment-589c4bfd-scqvk,UID:e94d4b29-6707-11ea-99e8-0242ac110002,ResourceVersion:34726,Generation:0,CreationTimestamp:2020-03-15 21:57:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd e949aa20-6707-11ea-99e8-0242ac110002 0xc0018bf16f 0xc0018bf180}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gr7d7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gr7d7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gr7d7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018bf1f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0018bf210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:57:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:57:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:57:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 21:57:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-03-15 21:57:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:57:06.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-52nmq" for this suite. 
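For context on what this test just drove: the Deployment it creates differs from a default one only in its update strategy. A minimal sketch of an equivalent object follows (Go, using the same k8s.io/api types whose dumps appear above); the name, labels, and images are taken from this run's output, while the int32Ptr helper and the bare object construction are illustrative rather than the framework's own code.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "sample-pod-3"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Recreate: every pod of the old ReplicaSet is terminated before the
			// new ReplicaSet creates any pod.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						// Revision 1 runs redis; the test then patches the template to
						// docker.io/library/nginx:1.14-alpine to trigger the second rollout.
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
	fmt.Println(d.Name, d.Spec.Strategy.Type)
}

Because the strategy is Recreate, the old ReplicaSet (test-recreate-deployment-5bf7f65dc) is scaled to zero before the new one (test-recreate-deployment-589c4bfd) starts its pod, which is why the dump above shows the new pod still Pending and the Deployment reporting MinimumReplicasUnavailable at the instant it is captured.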
Mar 15 21:57:13.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:57:13.065: INFO: namespace: e2e-tests-deployment-52nmq, resource: bindings, ignored listing per whitelist Mar 15 21:57:13.121: INFO: namespace e2e-tests-deployment-52nmq deletion completed in 6.147117261s • [SLOW TEST:11.097 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:57:13.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 15 21:57:13.276: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-l7lsr,SelfLink:/api/v1/namespaces/e2e-tests-watch-l7lsr/configmaps/e2e-watch-test-label-changed,UID:ed519bfa-6707-11ea-99e8-0242ac110002,ResourceVersion:34772,Generation:0,CreationTimestamp:2020-03-15 21:57:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 15 21:57:13.277: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-l7lsr,SelfLink:/api/v1/namespaces/e2e-tests-watch-l7lsr/configmaps/e2e-watch-test-label-changed,UID:ed519bfa-6707-11ea-99e8-0242ac110002,ResourceVersion:34773,Generation:0,CreationTimestamp:2020-03-15 21:57:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 15 21:57:13.277: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-l7lsr,SelfLink:/api/v1/namespaces/e2e-tests-watch-l7lsr/configmaps/e2e-watch-test-label-changed,UID:ed519bfa-6707-11ea-99e8-0242ac110002,ResourceVersion:34774,Generation:0,CreationTimestamp:2020-03-15 21:57:13 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 15 21:57:23.307: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-l7lsr,SelfLink:/api/v1/namespaces/e2e-tests-watch-l7lsr/configmaps/e2e-watch-test-label-changed,UID:ed519bfa-6707-11ea-99e8-0242ac110002,ResourceVersion:34795,Generation:0,CreationTimestamp:2020-03-15 21:57:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 15 21:57:23.307: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-l7lsr,SelfLink:/api/v1/namespaces/e2e-tests-watch-l7lsr/configmaps/e2e-watch-test-label-changed,UID:ed519bfa-6707-11ea-99e8-0242ac110002,ResourceVersion:34796,Generation:0,CreationTimestamp:2020-03-15 21:57:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Mar 15 21:57:23.307: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-l7lsr,SelfLink:/api/v1/namespaces/e2e-tests-watch-l7lsr/configmaps/e2e-watch-test-label-changed,UID:ed519bfa-6707-11ea-99e8-0242ac110002,ResourceVersion:34797,Generation:0,CreationTimestamp:2020-03-15 21:57:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:57:23.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-l7lsr" for this suite. 
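The selector-driven watch this test exercises can be sketched with client-go roughly as below. This assumes a pre-1.17 client-go matching the v1.13 server in this run (Watch takes metav1.ListOptions directly, with no context argument), and the "default" namespace stands in for the generated e2e-tests-watch-l7lsr namespace. Changes made while the configmap's label no longer matches the selector are not delivered; they only surface once the label is restored and the ADDED event arrives already carrying mutation: 2, exactly as in the output above.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the suite itself uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Only ConfigMaps carrying the label the test flips back and forth are watched.
	w, err := client.CoreV1().ConfigMaps("default").Watch(metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Each event corresponds to one of the "Got : ADDED/MODIFIED/DELETED" lines above.
	for ev := range w.ResultChan() {
		fmt.Println("Got :", ev.Type)
	}
}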
Mar 15 21:57:29.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:57:29.414: INFO: namespace: e2e-tests-watch-l7lsr, resource: bindings, ignored listing per whitelist Mar 15 21:57:29.441: INFO: namespace e2e-tests-watch-l7lsr deletion completed in 6.129522641s • [SLOW TEST:16.320 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:57:29.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-f70b15d3-6707-11ea-9ccf-0242ac110012 STEP: Creating a pod to test consume configMaps Mar 15 21:57:29.547: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f70bd4f6-6707-11ea-9ccf-0242ac110012" in namespace "e2e-tests-projected-vsrqh" to be "success or failure" Mar 15 21:57:29.563: INFO: Pod "pod-projected-configmaps-f70bd4f6-6707-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 15.601738ms Mar 15 21:57:31.570: INFO: Pod "pod-projected-configmaps-f70bd4f6-6707-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022846454s Mar 15 21:57:33.576: INFO: Pod "pod-projected-configmaps-f70bd4f6-6707-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028341234s STEP: Saw pod success Mar 15 21:57:33.576: INFO: Pod "pod-projected-configmaps-f70bd4f6-6707-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:57:33.579: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-f70bd4f6-6707-11ea-9ccf-0242ac110012 container projected-configmap-volume-test: STEP: delete the pod Mar 15 21:57:33.616: INFO: Waiting for pod pod-projected-configmaps-f70bd4f6-6707-11ea-9ccf-0242ac110012 to disappear Mar 15 21:57:33.618: INFO: Pod pod-projected-configmaps-f70bd4f6-6707-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:57:33.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vsrqh" for this suite. 
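The pod behind this test is, in outline, a projected volume that remaps a ConfigMap key onto a chosen path plus a container that reads the file back and exits, which is why the pod goes straight from Pending to Succeeded. A rough sketch of such a pod object follows; the key, path, mount point, and busybox image are illustrative assumptions, since the log records only the generated ConfigMap and pod names.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithProjectedConfigMap builds a pod whose projected volume maps one
// ConfigMap key onto a nested path inside the mount.
func podWithProjectedConfigMap(cmName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
								// Remap the key "data-1" to a custom path inside the volume.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-1"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"cat", "/etc/projected-configmap-volume/path/to/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
		},
	}
}

func main() { _ = podWithProjectedConfigMap("projected-configmap-test-volume-map") }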
Mar 15 21:57:39.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:57:39.697: INFO: namespace: e2e-tests-projected-vsrqh, resource: bindings, ignored listing per whitelist Mar 15 21:57:39.712: INFO: namespace e2e-tests-projected-vsrqh deletion completed in 6.090334712s • [SLOW TEST:10.270 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:57:39.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 21:57:39.855: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Mar 15 21:57:39.864: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:57:39.866: INFO: Number of nodes with available pods: 0 Mar 15 21:57:39.866: INFO: Node hunter-worker is running more than one daemon pod Mar 15 21:57:40.871: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:57:40.875: INFO: Number of nodes with available pods: 0 Mar 15 21:57:40.875: INFO: Node hunter-worker is running more than one daemon pod Mar 15 21:57:42.129: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:57:42.334: INFO: Number of nodes with available pods: 0 Mar 15 21:57:42.334: INFO: Node hunter-worker is running more than one daemon pod Mar 15 21:57:42.870: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:57:42.874: INFO: Number of nodes with available pods: 0 Mar 15 21:57:42.874: INFO: Node hunter-worker is running more than one daemon pod Mar 15 21:57:43.871: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:57:43.873: INFO: Number of nodes with available pods: 0 Mar 15 21:57:43.873: INFO: Node hunter-worker is running more than one daemon pod Mar 15 21:57:44.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:57:44.870: INFO: Number of nodes with available pods: 2 Mar 15 21:57:44.870: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 15 21:57:44.919: INFO: Wrong image for pod: daemon-set-6wnmc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 21:57:44.919: INFO: Wrong image for pod: daemon-set-gx74n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 21:57:45.017: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:57:46.022: INFO: Wrong image for pod: daemon-set-6wnmc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 21:57:46.022: INFO: Wrong image for pod: daemon-set-gx74n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 21:57:46.026: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:57:47.022: INFO: Wrong image for pod: daemon-set-6wnmc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 21:57:47.022: INFO: Wrong image for pod: daemon-set-gx74n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 15 21:57:47.025: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:57:48.022: INFO: Wrong image for pod: daemon-set-6wnmc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 21:57:48.022: INFO: Wrong image for pod: daemon-set-gx74n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 21:57:48.026: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:57:49.022: INFO: Wrong image for pod: daemon-set-6wnmc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 21:57:49.022: INFO: Wrong image for pod: daemon-set-gx74n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 21:57:49.022: INFO: Pod daemon-set-gx74n is not available Mar 15 21:57:49.025: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:57:50.021: INFO: Wrong image for pod: daemon-set-6wnmc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 21:57:50.021: INFO: Wrong image for pod: daemon-set-gx74n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 21:57:50.021: INFO: Pod daemon-set-gx74n is not available Mar 15 21:57:50.024: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:57:51.021: INFO: Wrong image for pod: daemon-set-6wnmc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 21:57:51.021: INFO: Wrong image for pod: daemon-set-gx74n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 21:57:51.021: INFO: Pod daemon-set-gx74n is not available Mar 15 21:57:51.024: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:57:52.422: INFO: Wrong image for pod: daemon-set-6wnmc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 21:57:52.422: INFO: Wrong image for pod: daemon-set-gx74n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 21:57:52.422: INFO: Pod daemon-set-gx74n is not available Mar 15 21:57:52.425: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:57:53.055: INFO: Pod daemon-set-4bzws is not available Mar 15 21:57:53.055: INFO: Wrong image for pod: daemon-set-6wnmc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 15 21:57:53.058: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:57:54.022: INFO: Pod daemon-set-4bzws is not available Mar 15 21:57:54.022: INFO: Wrong image for pod: daemon-set-6wnmc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 21:57:54.026: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:57:55.023: INFO: Pod daemon-set-4bzws is not available Mar 15 21:57:55.023: INFO: Wrong image for pod: daemon-set-6wnmc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 21:57:55.028: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:57:56.022: INFO: Wrong image for pod: daemon-set-6wnmc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 21:57:56.026: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:57:57.022: INFO: Wrong image for pod: daemon-set-6wnmc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 21:57:57.027: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:57:58.021: INFO: Wrong image for pod: daemon-set-6wnmc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 21:57:58.021: INFO: Pod daemon-set-6wnmc is not available Mar 15 21:57:58.026: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:57:59.022: INFO: Wrong image for pod: daemon-set-6wnmc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 21:57:59.022: INFO: Pod daemon-set-6wnmc is not available Mar 15 21:57:59.026: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:58:00.022: INFO: Wrong image for pod: daemon-set-6wnmc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 21:58:00.022: INFO: Pod daemon-set-6wnmc is not available Mar 15 21:58:00.026: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:58:01.021: INFO: Wrong image for pod: daemon-set-6wnmc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 15 21:58:01.022: INFO: Pod daemon-set-6wnmc is not available Mar 15 21:58:01.025: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:58:02.022: INFO: Pod daemon-set-wc7pz is not available Mar 15 21:58:02.026: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Mar 15 21:58:02.030: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:58:02.034: INFO: Number of nodes with available pods: 1 Mar 15 21:58:02.034: INFO: Node hunter-worker is running more than one daemon pod Mar 15 21:58:03.038: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:58:03.042: INFO: Number of nodes with available pods: 1 Mar 15 21:58:03.042: INFO: Node hunter-worker is running more than one daemon pod Mar 15 21:58:04.039: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:58:04.042: INFO: Number of nodes with available pods: 1 Mar 15 21:58:04.042: INFO: Node hunter-worker is running more than one daemon pod Mar 15 21:58:05.039: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 21:58:05.043: INFO: Number of nodes with available pods: 2 Mar 15 21:58:05.043: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-94597, will wait for the garbage collector to delete the pods Mar 15 21:58:05.129: INFO: Deleting DaemonSet.extensions daemon-set took: 10.168212ms Mar 15 21:58:05.229: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.207221ms Mar 15 21:58:11.832: INFO: Number of nodes with available pods: 0 Mar 15 21:58:11.832: INFO: Number of running nodes: 0, number of available pods: 0 Mar 15 21:58:11.835: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-94597/daemonsets","resourceVersion":"35001"},"items":null} Mar 15 21:58:11.839: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-94597/pods","resourceVersion":"35001"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:58:11.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-94597" for this suite. 
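What the polling above tracks is a DaemonSet whose update strategy is RollingUpdate: once the template image is changed from docker.io/library/nginx:1.14-alpine to gcr.io/kubernetes-e2e-test-images/redis:1.0, the controller deletes and recreates the daemon pod on one worker at a time (hence the alternating "Wrong image for pod" and "Pod ... is not available" lines), while the control-plane node is skipped because the pods do not tolerate its NoSchedule taint. A minimal sketch of such a DaemonSet object, with an illustrative label key, looks like this:

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate replaces daemon pods node by node after the pod
			// template changes; OnDelete would instead wait for manual deletion.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name: "app",
						// Initial image; the test later switches this to
						// gcr.io/kubernetes-e2e-test-images/redis:1.0.
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	_ = ds
}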
Mar 15 21:58:19.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:58:20.046: INFO: namespace: e2e-tests-daemonsets-94597, resource: bindings, ignored listing per whitelist Mar 15 21:58:20.073: INFO: namespace e2e-tests-daemonsets-94597 deletion completed in 8.223769057s • [SLOW TEST:40.361 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:58:20.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 15 21:58:28.435: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 15 21:58:28.586: INFO: Pod pod-with-poststart-exec-hook still exists Mar 15 21:58:30.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 15 21:58:30.590: INFO: Pod pod-with-poststart-exec-hook still exists Mar 15 21:58:32.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 15 21:58:32.590: INFO: Pod pod-with-poststart-exec-hook still exists Mar 15 21:58:34.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 15 21:58:34.590: INFO: Pod pod-with-poststart-exec-hook still exists Mar 15 21:58:36.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 15 21:58:36.590: INFO: Pod pod-with-poststart-exec-hook still exists Mar 15 21:58:38.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 15 21:58:38.590: INFO: Pod pod-with-poststart-exec-hook still exists Mar 15 21:58:40.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 15 21:58:40.590: INFO: Pod pod-with-poststart-exec-hook still exists Mar 15 21:58:42.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 15 21:58:42.590: INFO: Pod pod-with-poststart-exec-hook still exists Mar 15 21:58:44.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 15 21:58:44.589: INFO: Pod pod-with-poststart-exec-hook still exists Mar 15 21:58:46.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 15 21:58:46.590: INFO: Pod pod-with-poststart-exec-hook still exists Mar 15 21:58:48.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear 
Mar 15 21:58:48.590: INFO: Pod pod-with-poststart-exec-hook still exists Mar 15 21:58:50.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 15 21:58:50.652: INFO: Pod pod-with-poststart-exec-hook still exists Mar 15 21:58:52.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 15 21:58:52.590: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:58:52.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-f9rmh" for this suite. Mar 15 21:59:16.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:59:16.823: INFO: namespace: e2e-tests-container-lifecycle-hook-f9rmh, resource: bindings, ignored listing per whitelist Mar 15 21:59:16.834: INFO: namespace e2e-tests-container-lifecycle-hook-f9rmh deletion completed in 24.240216914s • [SLOW TEST:56.760 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:59:16.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-376e4b87-6708-11ea-9ccf-0242ac110012 STEP: Creating secret with name secret-projected-all-test-volume-376e4b49-6708-11ea-9ccf-0242ac110012 STEP: Creating a pod to test Check all projections for projected volume plugin Mar 15 21:59:17.989: INFO: Waiting up to 5m0s for pod "projected-volume-376e4aaf-6708-11ea-9ccf-0242ac110012" in namespace "e2e-tests-projected-f75nx" to be "success or failure" Mar 15 21:59:18.320: INFO: Pod "projected-volume-376e4aaf-6708-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 330.807318ms Mar 15 21:59:20.646: INFO: Pod "projected-volume-376e4aaf-6708-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.657170835s Mar 15 21:59:22.992: INFO: Pod "projected-volume-376e4aaf-6708-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 5.003197248s Mar 15 21:59:24.999: INFO: Pod "projected-volume-376e4aaf-6708-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.010022557s Mar 15 21:59:27.176: INFO: Pod "projected-volume-376e4aaf-6708-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.186887949s STEP: Saw pod success Mar 15 21:59:27.176: INFO: Pod "projected-volume-376e4aaf-6708-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 21:59:27.178: INFO: Trying to get logs from node hunter-worker pod projected-volume-376e4aaf-6708-11ea-9ccf-0242ac110012 container projected-all-volume-test: STEP: delete the pod Mar 15 21:59:27.426: INFO: Waiting for pod projected-volume-376e4aaf-6708-11ea-9ccf-0242ac110012 to disappear Mar 15 21:59:27.476: INFO: Pod projected-volume-376e4aaf-6708-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:59:27.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-f75nx" for this suite. Mar 15 21:59:35.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:59:35.754: INFO: namespace: e2e-tests-projected-f75nx, resource: bindings, ignored listing per whitelist Mar 15 21:59:35.767: INFO: namespace e2e-tests-projected-f75nx deletion completed in 8.287100797s • [SLOW TEST:18.933 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:59:35.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0315 21:59:37.133093 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 15 21:59:37.133: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:59:37.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-jwrx8" for this suite. Mar 15 21:59:43.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 21:59:43.220: INFO: namespace: e2e-tests-gc-jwrx8, resource: bindings, ignored listing per whitelist Mar 15 21:59:43.223: INFO: namespace e2e-tests-gc-jwrx8 deletion completed in 6.086366019s • [SLOW TEST:7.456 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 21:59:43.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 21:59:50.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-jc6g5" for this suite. 
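The adoption step above relies purely on label selection: a pod labelled name=pod-adoption exists first, and a ReplicationController whose selector matches those labels records itself as the pod's controller owner reference instead of creating a replacement. A rough sketch of the controller object, and of how an adopted pod's owner can be read back, follows; the container image is an illustrative assumption, since the log does not show it.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "pod-adoption"}

	// A ReplicationController whose selector matches the pre-existing pod.
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: int32Ptr(1),
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "pod-adoption",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	_ = rc

	// Given an adopted pod fetched from the API, its controlling owner can be
	// inspected like this (nil here, since the pod below is empty).
	var pod corev1.Pod
	if ref := metav1.GetControllerOf(&pod); ref != nil {
		fmt.Printf("adopted by %s %s\n", ref.Kind, ref.Name)
	}
}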
Mar 15 22:00:12.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 22:00:12.522: INFO: namespace: e2e-tests-replication-controller-jc6g5, resource: bindings, ignored listing per whitelist Mar 15 22:00:12.535: INFO: namespace e2e-tests-replication-controller-jc6g5 deletion completed in 22.101133245s • [SLOW TEST:29.312 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 22:00:12.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 15 22:00:31.032: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-zxm6d PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 22:00:31.033: INFO: >>> kubeConfig: /root/.kube/config I0315 22:00:31.064875 6 log.go:172] (0xc001176370) (0xc001a75cc0) Create stream I0315 22:00:31.064917 6 log.go:172] (0xc001176370) (0xc001a75cc0) Stream added, broadcasting: 1 I0315 22:00:31.069507 6 log.go:172] (0xc001176370) Reply frame received for 1 I0315 22:00:31.069558 6 log.go:172] (0xc001176370) (0xc001c29220) Create stream I0315 22:00:31.069571 6 log.go:172] (0xc001176370) (0xc001c29220) Stream added, broadcasting: 3 I0315 22:00:31.073406 6 log.go:172] (0xc001176370) Reply frame received for 3 I0315 22:00:31.073464 6 log.go:172] (0xc001176370) (0xc0028ac0a0) Create stream I0315 22:00:31.073480 6 log.go:172] (0xc001176370) (0xc0028ac0a0) Stream added, broadcasting: 5 I0315 22:00:31.074329 6 log.go:172] (0xc001176370) Reply frame received for 5 I0315 22:00:31.131518 6 log.go:172] (0xc001176370) Data frame received for 5 I0315 22:00:31.131542 6 log.go:172] (0xc0028ac0a0) (5) Data frame handling I0315 22:00:31.131559 6 log.go:172] (0xc001176370) Data frame received for 3 I0315 22:00:31.131565 6 log.go:172] (0xc001c29220) (3) Data frame handling I0315 22:00:31.131571 6 log.go:172] (0xc001c29220) (3) Data frame sent I0315 22:00:31.131579 6 log.go:172] (0xc001176370) Data frame received for 3 I0315 22:00:31.131585 6 log.go:172] (0xc001c29220) (3) Data frame handling I0315 22:00:31.133069 6 log.go:172] (0xc001176370) Data frame received for 1 I0315 22:00:31.133085 6 log.go:172] (0xc001a75cc0) (1) Data frame handling I0315 22:00:31.133099 6 log.go:172] (0xc001a75cc0) (1) Data frame sent I0315 22:00:31.133247 6 log.go:172] 
(0xc001176370) (0xc001a75cc0) Stream removed, broadcasting: 1 I0315 22:00:31.133259 6 log.go:172] (0xc001176370) Go away received I0315 22:00:31.133333 6 log.go:172] (0xc001176370) (0xc001a75cc0) Stream removed, broadcasting: 1 I0315 22:00:31.133348 6 log.go:172] (0xc001176370) (0xc001c29220) Stream removed, broadcasting: 3 I0315 22:00:31.133359 6 log.go:172] (0xc001176370) (0xc0028ac0a0) Stream removed, broadcasting: 5 Mar 15 22:00:31.133: INFO: Exec stderr: "" Mar 15 22:00:31.133: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-zxm6d PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 22:00:31.133: INFO: >>> kubeConfig: /root/.kube/config I0315 22:00:31.229778 6 log.go:172] (0xc0014862c0) (0xc0028ac3c0) Create stream I0315 22:00:31.229811 6 log.go:172] (0xc0014862c0) (0xc0028ac3c0) Stream added, broadcasting: 1 I0315 22:00:31.231555 6 log.go:172] (0xc0014862c0) Reply frame received for 1 I0315 22:00:31.231605 6 log.go:172] (0xc0014862c0) (0xc001608000) Create stream I0315 22:00:31.231621 6 log.go:172] (0xc0014862c0) (0xc001608000) Stream added, broadcasting: 3 I0315 22:00:31.232599 6 log.go:172] (0xc0014862c0) Reply frame received for 3 I0315 22:00:31.232641 6 log.go:172] (0xc0014862c0) (0xc001c29360) Create stream I0315 22:00:31.232657 6 log.go:172] (0xc0014862c0) (0xc001c29360) Stream added, broadcasting: 5 I0315 22:00:31.233777 6 log.go:172] (0xc0014862c0) Reply frame received for 5 I0315 22:00:31.304385 6 log.go:172] (0xc0014862c0) Data frame received for 3 I0315 22:00:31.304414 6 log.go:172] (0xc001608000) (3) Data frame handling I0315 22:00:31.304448 6 log.go:172] (0xc0014862c0) Data frame received for 5 I0315 22:00:31.304512 6 log.go:172] (0xc001c29360) (5) Data frame handling I0315 22:00:31.304547 6 log.go:172] (0xc001608000) (3) Data frame sent I0315 22:00:31.304565 6 log.go:172] (0xc0014862c0) Data frame received for 3 I0315 22:00:31.304579 6 log.go:172] (0xc001608000) (3) Data frame handling I0315 22:00:31.306079 6 log.go:172] (0xc0014862c0) Data frame received for 1 I0315 22:00:31.306095 6 log.go:172] (0xc0028ac3c0) (1) Data frame handling I0315 22:00:31.306106 6 log.go:172] (0xc0028ac3c0) (1) Data frame sent I0315 22:00:31.306114 6 log.go:172] (0xc0014862c0) (0xc0028ac3c0) Stream removed, broadcasting: 1 I0315 22:00:31.306284 6 log.go:172] (0xc0014862c0) (0xc0028ac3c0) Stream removed, broadcasting: 1 I0315 22:00:31.306380 6 log.go:172] (0xc0014862c0) (0xc001608000) Stream removed, broadcasting: 3 I0315 22:00:31.306404 6 log.go:172] (0xc0014862c0) (0xc001c29360) Stream removed, broadcasting: 5 Mar 15 22:00:31.306: INFO: Exec stderr: "" I0315 22:00:31.306474 6 log.go:172] (0xc0014862c0) Go away received Mar 15 22:00:31.306: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-zxm6d PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 22:00:31.306: INFO: >>> kubeConfig: /root/.kube/config I0315 22:00:31.373648 6 log.go:172] (0xc0016bc160) (0xc001c29860) Create stream I0315 22:00:31.373683 6 log.go:172] (0xc0016bc160) (0xc001c29860) Stream added, broadcasting: 1 I0315 22:00:31.376522 6 log.go:172] (0xc0016bc160) Reply frame received for 1 I0315 22:00:31.376557 6 log.go:172] (0xc0016bc160) (0xc001a75d60) Create stream I0315 22:00:31.376566 6 log.go:172] (0xc0016bc160) (0xc001a75d60) Stream added, broadcasting: 3 I0315 22:00:31.377641 6 log.go:172] (0xc0016bc160) 
Reply frame received for 3 I0315 22:00:31.377681 6 log.go:172] (0xc0016bc160) (0xc00283cfa0) Create stream I0315 22:00:31.377690 6 log.go:172] (0xc0016bc160) (0xc00283cfa0) Stream added, broadcasting: 5 I0315 22:00:31.378612 6 log.go:172] (0xc0016bc160) Reply frame received for 5 I0315 22:00:31.435836 6 log.go:172] (0xc0016bc160) Data frame received for 3 I0315 22:00:31.435890 6 log.go:172] (0xc001a75d60) (3) Data frame handling I0315 22:00:31.435910 6 log.go:172] (0xc001a75d60) (3) Data frame sent I0315 22:00:31.435925 6 log.go:172] (0xc0016bc160) Data frame received for 3 I0315 22:00:31.435940 6 log.go:172] (0xc001a75d60) (3) Data frame handling I0315 22:00:31.435977 6 log.go:172] (0xc0016bc160) Data frame received for 5 I0315 22:00:31.436006 6 log.go:172] (0xc00283cfa0) (5) Data frame handling I0315 22:00:31.437711 6 log.go:172] (0xc0016bc160) Data frame received for 1 I0315 22:00:31.437754 6 log.go:172] (0xc001c29860) (1) Data frame handling I0315 22:00:31.437794 6 log.go:172] (0xc001c29860) (1) Data frame sent I0315 22:00:31.437817 6 log.go:172] (0xc0016bc160) (0xc001c29860) Stream removed, broadcasting: 1 I0315 22:00:31.437841 6 log.go:172] (0xc0016bc160) Go away received I0315 22:00:31.438032 6 log.go:172] (0xc0016bc160) (0xc001c29860) Stream removed, broadcasting: 1 I0315 22:00:31.438067 6 log.go:172] (0xc0016bc160) (0xc001a75d60) Stream removed, broadcasting: 3 I0315 22:00:31.438087 6 log.go:172] (0xc0016bc160) (0xc00283cfa0) Stream removed, broadcasting: 5 Mar 15 22:00:31.438: INFO: Exec stderr: "" Mar 15 22:00:31.438: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-zxm6d PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 22:00:31.438: INFO: >>> kubeConfig: /root/.kube/config I0315 22:00:31.469041 6 log.go:172] (0xc001176840) (0xc0019ae000) Create stream I0315 22:00:31.469065 6 log.go:172] (0xc001176840) (0xc0019ae000) Stream added, broadcasting: 1 I0315 22:00:31.471491 6 log.go:172] (0xc001176840) Reply frame received for 1 I0315 22:00:31.471542 6 log.go:172] (0xc001176840) (0xc0019ae0a0) Create stream I0315 22:00:31.471560 6 log.go:172] (0xc001176840) (0xc0019ae0a0) Stream added, broadcasting: 3 I0315 22:00:31.472397 6 log.go:172] (0xc001176840) Reply frame received for 3 I0315 22:00:31.472423 6 log.go:172] (0xc001176840) (0xc00283d040) Create stream I0315 22:00:31.472436 6 log.go:172] (0xc001176840) (0xc00283d040) Stream added, broadcasting: 5 I0315 22:00:31.473627 6 log.go:172] (0xc001176840) Reply frame received for 5 I0315 22:00:31.533280 6 log.go:172] (0xc001176840) Data frame received for 3 I0315 22:00:31.533325 6 log.go:172] (0xc001176840) Data frame received for 5 I0315 22:00:31.533391 6 log.go:172] (0xc00283d040) (5) Data frame handling I0315 22:00:31.533429 6 log.go:172] (0xc0019ae0a0) (3) Data frame handling I0315 22:00:31.533453 6 log.go:172] (0xc0019ae0a0) (3) Data frame sent I0315 22:00:31.533472 6 log.go:172] (0xc001176840) Data frame received for 3 I0315 22:00:31.533491 6 log.go:172] (0xc0019ae0a0) (3) Data frame handling I0315 22:00:31.534515 6 log.go:172] (0xc001176840) Data frame received for 1 I0315 22:00:31.534548 6 log.go:172] (0xc0019ae000) (1) Data frame handling I0315 22:00:31.534576 6 log.go:172] (0xc0019ae000) (1) Data frame sent I0315 22:00:31.534600 6 log.go:172] (0xc001176840) (0xc0019ae000) Stream removed, broadcasting: 1 I0315 22:00:31.534630 6 log.go:172] (0xc001176840) Go away received I0315 22:00:31.534737 6 log.go:172] 
(0xc001176840) (0xc0019ae000) Stream removed, broadcasting: 1 I0315 22:00:31.534764 6 log.go:172] (0xc001176840) (0xc0019ae0a0) Stream removed, broadcasting: 3 I0315 22:00:31.534780 6 log.go:172] (0xc001176840) (0xc00283d040) Stream removed, broadcasting: 5 Mar 15 22:00:31.534: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 15 22:00:31.534: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-zxm6d PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 22:00:31.534: INFO: >>> kubeConfig: /root/.kube/config I0315 22:00:31.561018 6 log.go:172] (0xc0016bc630) (0xc001c29d60) Create stream I0315 22:00:31.561058 6 log.go:172] (0xc0016bc630) (0xc001c29d60) Stream added, broadcasting: 1 I0315 22:00:31.567218 6 log.go:172] (0xc0016bc630) Reply frame received for 1 I0315 22:00:31.567286 6 log.go:172] (0xc0016bc630) (0xc001c29e00) Create stream I0315 22:00:31.567300 6 log.go:172] (0xc0016bc630) (0xc001c29e00) Stream added, broadcasting: 3 I0315 22:00:31.568140 6 log.go:172] (0xc0016bc630) Reply frame received for 3 I0315 22:00:31.568180 6 log.go:172] (0xc0016bc630) (0xc001c29ea0) Create stream I0315 22:00:31.568190 6 log.go:172] (0xc0016bc630) (0xc001c29ea0) Stream added, broadcasting: 5 I0315 22:00:31.568941 6 log.go:172] (0xc0016bc630) Reply frame received for 5 I0315 22:00:31.626430 6 log.go:172] (0xc0016bc630) Data frame received for 3 I0315 22:00:31.626488 6 log.go:172] (0xc001c29e00) (3) Data frame handling I0315 22:00:31.626514 6 log.go:172] (0xc001c29e00) (3) Data frame sent I0315 22:00:31.626531 6 log.go:172] (0xc0016bc630) Data frame received for 3 I0315 22:00:31.626547 6 log.go:172] (0xc001c29e00) (3) Data frame handling I0315 22:00:31.626582 6 log.go:172] (0xc0016bc630) Data frame received for 5 I0315 22:00:31.626625 6 log.go:172] (0xc001c29ea0) (5) Data frame handling I0315 22:00:31.627777 6 log.go:172] (0xc0016bc630) Data frame received for 1 I0315 22:00:31.627806 6 log.go:172] (0xc001c29d60) (1) Data frame handling I0315 22:00:31.627844 6 log.go:172] (0xc001c29d60) (1) Data frame sent I0315 22:00:31.627872 6 log.go:172] (0xc0016bc630) (0xc001c29d60) Stream removed, broadcasting: 1 I0315 22:00:31.627912 6 log.go:172] (0xc0016bc630) Go away received I0315 22:00:31.627964 6 log.go:172] (0xc0016bc630) (0xc001c29d60) Stream removed, broadcasting: 1 I0315 22:00:31.627980 6 log.go:172] (0xc0016bc630) (0xc001c29e00) Stream removed, broadcasting: 3 I0315 22:00:31.627995 6 log.go:172] (0xc0016bc630) (0xc001c29ea0) Stream removed, broadcasting: 5 Mar 15 22:00:31.628: INFO: Exec stderr: "" Mar 15 22:00:31.628: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-zxm6d PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 22:00:31.628: INFO: >>> kubeConfig: /root/.kube/config I0315 22:00:31.662941 6 log.go:172] (0xc0016bcb00) (0xc0020ce1e0) Create stream I0315 22:00:31.662990 6 log.go:172] (0xc0016bcb00) (0xc0020ce1e0) Stream added, broadcasting: 1 I0315 22:00:31.665700 6 log.go:172] (0xc0016bcb00) Reply frame received for 1 I0315 22:00:31.665756 6 log.go:172] (0xc0016bcb00) (0xc0020ce320) Create stream I0315 22:00:31.665772 6 log.go:172] (0xc0016bcb00) (0xc0020ce320) Stream added, broadcasting: 3 I0315 22:00:31.666572 6 log.go:172] (0xc0016bcb00) Reply frame received for 3 I0315 22:00:31.666599 6 
log.go:172] (0xc0016bcb00) (0xc0028ac460) Create stream I0315 22:00:31.666611 6 log.go:172] (0xc0016bcb00) (0xc0028ac460) Stream added, broadcasting: 5 I0315 22:00:31.667294 6 log.go:172] (0xc0016bcb00) Reply frame received for 5 I0315 22:00:31.741689 6 log.go:172] (0xc0016bcb00) Data frame received for 3 I0315 22:00:31.741720 6 log.go:172] (0xc0020ce320) (3) Data frame handling I0315 22:00:31.741734 6 log.go:172] (0xc0020ce320) (3) Data frame sent I0315 22:00:31.741772 6 log.go:172] (0xc0016bcb00) Data frame received for 5 I0315 22:00:31.741798 6 log.go:172] (0xc0028ac460) (5) Data frame handling I0315 22:00:31.742005 6 log.go:172] (0xc0016bcb00) Data frame received for 3 I0315 22:00:31.742036 6 log.go:172] (0xc0020ce320) (3) Data frame handling I0315 22:00:31.743243 6 log.go:172] (0xc0016bcb00) Data frame received for 1 I0315 22:00:31.743272 6 log.go:172] (0xc0020ce1e0) (1) Data frame handling I0315 22:00:31.743323 6 log.go:172] (0xc0020ce1e0) (1) Data frame sent I0315 22:00:31.743345 6 log.go:172] (0xc0016bcb00) (0xc0020ce1e0) Stream removed, broadcasting: 1 I0315 22:00:31.743450 6 log.go:172] (0xc0016bcb00) (0xc0020ce1e0) Stream removed, broadcasting: 1 I0315 22:00:31.743495 6 log.go:172] (0xc0016bcb00) (0xc0020ce320) Stream removed, broadcasting: 3 I0315 22:00:31.743674 6 log.go:172] (0xc0016bcb00) Go away received I0315 22:00:31.743711 6 log.go:172] (0xc0016bcb00) (0xc0028ac460) Stream removed, broadcasting: 5 Mar 15 22:00:31.743: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 15 22:00:31.743: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-zxm6d PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 22:00:31.743: INFO: >>> kubeConfig: /root/.kube/config I0315 22:00:31.777513 6 log.go:172] (0xc001486790) (0xc0028ac6e0) Create stream I0315 22:00:31.777563 6 log.go:172] (0xc001486790) (0xc0028ac6e0) Stream added, broadcasting: 1 I0315 22:00:31.779354 6 log.go:172] (0xc001486790) Reply frame received for 1 I0315 22:00:31.779390 6 log.go:172] (0xc001486790) (0xc00283d0e0) Create stream I0315 22:00:31.779398 6 log.go:172] (0xc001486790) (0xc00283d0e0) Stream added, broadcasting: 3 I0315 22:00:31.780167 6 log.go:172] (0xc001486790) Reply frame received for 3 I0315 22:00:31.780190 6 log.go:172] (0xc001486790) (0xc00283d180) Create stream I0315 22:00:31.780200 6 log.go:172] (0xc001486790) (0xc00283d180) Stream added, broadcasting: 5 I0315 22:00:31.780931 6 log.go:172] (0xc001486790) Reply frame received for 5 I0315 22:00:31.831997 6 log.go:172] (0xc001486790) Data frame received for 3 I0315 22:00:31.832037 6 log.go:172] (0xc00283d0e0) (3) Data frame handling I0315 22:00:31.832049 6 log.go:172] (0xc00283d0e0) (3) Data frame sent I0315 22:00:31.832063 6 log.go:172] (0xc001486790) Data frame received for 3 I0315 22:00:31.832072 6 log.go:172] (0xc00283d0e0) (3) Data frame handling I0315 22:00:31.832096 6 log.go:172] (0xc001486790) Data frame received for 5 I0315 22:00:31.832107 6 log.go:172] (0xc00283d180) (5) Data frame handling I0315 22:00:31.833228 6 log.go:172] (0xc001486790) Data frame received for 1 I0315 22:00:31.833252 6 log.go:172] (0xc0028ac6e0) (1) Data frame handling I0315 22:00:31.833269 6 log.go:172] (0xc0028ac6e0) (1) Data frame sent I0315 22:00:31.833285 6 log.go:172] (0xc001486790) (0xc0028ac6e0) Stream removed, broadcasting: 1 I0315 22:00:31.833363 6 log.go:172] (0xc001486790) 
(0xc0028ac6e0) Stream removed, broadcasting: 1 I0315 22:00:31.833383 6 log.go:172] (0xc001486790) (0xc00283d0e0) Stream removed, broadcasting: 3 I0315 22:00:31.833500 6 log.go:172] (0xc001486790) Go away received I0315 22:00:31.833560 6 log.go:172] (0xc001486790) (0xc00283d180) Stream removed, broadcasting: 5 Mar 15 22:00:31.833: INFO: Exec stderr: "" Mar 15 22:00:31.833: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-zxm6d PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 22:00:31.833: INFO: >>> kubeConfig: /root/.kube/config I0315 22:00:31.859508 6 log.go:172] (0xc001486b00) (0xc0028ac820) Create stream I0315 22:00:31.859528 6 log.go:172] (0xc001486b00) (0xc0028ac820) Stream added, broadcasting: 1 I0315 22:00:31.866304 6 log.go:172] (0xc001486b00) Reply frame received for 1 I0315 22:00:31.866345 6 log.go:172] (0xc001486b00) (0xc001608000) Create stream I0315 22:00:31.866361 6 log.go:172] (0xc001486b00) (0xc001608000) Stream added, broadcasting: 3 I0315 22:00:31.867317 6 log.go:172] (0xc001486b00) Reply frame received for 3 I0315 22:00:31.867359 6 log.go:172] (0xc001486b00) (0xc002216000) Create stream I0315 22:00:31.867371 6 log.go:172] (0xc001486b00) (0xc002216000) Stream added, broadcasting: 5 I0315 22:00:31.868246 6 log.go:172] (0xc001486b00) Reply frame received for 5 I0315 22:00:31.935988 6 log.go:172] (0xc001486b00) Data frame received for 5 I0315 22:00:31.936038 6 log.go:172] (0xc002216000) (5) Data frame handling I0315 22:00:31.936071 6 log.go:172] (0xc001486b00) Data frame received for 3 I0315 22:00:31.936085 6 log.go:172] (0xc001608000) (3) Data frame handling I0315 22:00:31.936109 6 log.go:172] (0xc001608000) (3) Data frame sent I0315 22:00:31.936128 6 log.go:172] (0xc001486b00) Data frame received for 3 I0315 22:00:31.936138 6 log.go:172] (0xc001608000) (3) Data frame handling I0315 22:00:31.937042 6 log.go:172] (0xc001486b00) Data frame received for 1 I0315 22:00:31.937062 6 log.go:172] (0xc0028ac820) (1) Data frame handling I0315 22:00:31.937074 6 log.go:172] (0xc0028ac820) (1) Data frame sent I0315 22:00:31.937521 6 log.go:172] (0xc001486b00) (0xc0028ac820) Stream removed, broadcasting: 1 I0315 22:00:31.937563 6 log.go:172] (0xc001486b00) Go away received I0315 22:00:31.937657 6 log.go:172] (0xc001486b00) (0xc0028ac820) Stream removed, broadcasting: 1 I0315 22:00:31.937678 6 log.go:172] (0xc001486b00) (0xc001608000) Stream removed, broadcasting: 3 I0315 22:00:31.937689 6 log.go:172] (0xc001486b00) (0xc002216000) Stream removed, broadcasting: 5 Mar 15 22:00:31.937: INFO: Exec stderr: "" Mar 15 22:00:31.937: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-zxm6d PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 22:00:31.937: INFO: >>> kubeConfig: /root/.kube/config I0315 22:00:31.969240 6 log.go:172] (0xc000a0b600) (0xc001670280) Create stream I0315 22:00:31.969287 6 log.go:172] (0xc000a0b600) (0xc001670280) Stream added, broadcasting: 1 I0315 22:00:31.971156 6 log.go:172] (0xc000a0b600) Reply frame received for 1 I0315 22:00:31.971201 6 log.go:172] (0xc000a0b600) (0xc0016080a0) Create stream I0315 22:00:31.971223 6 log.go:172] (0xc000a0b600) (0xc0016080a0) Stream added, broadcasting: 3 I0315 22:00:31.972209 6 log.go:172] (0xc000a0b600) Reply frame received for 3 I0315 22:00:31.972268 6 log.go:172] (0xc000a0b600) 
(0xc001608140) Create stream I0315 22:00:31.972282 6 log.go:172] (0xc000a0b600) (0xc001608140) Stream added, broadcasting: 5 I0315 22:00:31.973683 6 log.go:172] (0xc000a0b600) Reply frame received for 5 I0315 22:00:32.037899 6 log.go:172] (0xc000a0b600) Data frame received for 3 I0315 22:00:32.037946 6 log.go:172] (0xc0016080a0) (3) Data frame handling I0315 22:00:32.037968 6 log.go:172] (0xc0016080a0) (3) Data frame sent I0315 22:00:32.037997 6 log.go:172] (0xc000a0b600) Data frame received for 3 I0315 22:00:32.038012 6 log.go:172] (0xc0016080a0) (3) Data frame handling I0315 22:00:32.038349 6 log.go:172] (0xc000a0b600) Data frame received for 5 I0315 22:00:32.038442 6 log.go:172] (0xc001608140) (5) Data frame handling I0315 22:00:32.043010 6 log.go:172] (0xc000a0b600) Data frame received for 1 I0315 22:00:32.043033 6 log.go:172] (0xc001670280) (1) Data frame handling I0315 22:00:32.043046 6 log.go:172] (0xc001670280) (1) Data frame sent I0315 22:00:32.043066 6 log.go:172] (0xc000a0b600) (0xc001670280) Stream removed, broadcasting: 1 I0315 22:00:32.043093 6 log.go:172] (0xc000a0b600) Go away received I0315 22:00:32.043284 6 log.go:172] (0xc000a0b600) (0xc001670280) Stream removed, broadcasting: 1 I0315 22:00:32.043316 6 log.go:172] (0xc000a0b600) (0xc0016080a0) Stream removed, broadcasting: 3 I0315 22:00:32.043342 6 log.go:172] (0xc000a0b600) (0xc001608140) Stream removed, broadcasting: 5 Mar 15 22:00:32.043: INFO: Exec stderr: "" Mar 15 22:00:32.043: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-zxm6d PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 22:00:32.043: INFO: >>> kubeConfig: /root/.kube/config I0315 22:00:32.075305 6 log.go:172] (0xc001176370) (0xc001f4a1e0) Create stream I0315 22:00:32.075356 6 log.go:172] (0xc001176370) (0xc001f4a1e0) Stream added, broadcasting: 1 I0315 22:00:32.077272 6 log.go:172] (0xc001176370) Reply frame received for 1 I0315 22:00:32.077301 6 log.go:172] (0xc001176370) (0xc001670320) Create stream I0315 22:00:32.077311 6 log.go:172] (0xc001176370) (0xc001670320) Stream added, broadcasting: 3 I0315 22:00:32.078455 6 log.go:172] (0xc001176370) Reply frame received for 3 I0315 22:00:32.078511 6 log.go:172] (0xc001176370) (0xc001670460) Create stream I0315 22:00:32.078535 6 log.go:172] (0xc001176370) (0xc001670460) Stream added, broadcasting: 5 I0315 22:00:32.079417 6 log.go:172] (0xc001176370) Reply frame received for 5 I0315 22:00:32.135077 6 log.go:172] (0xc001176370) Data frame received for 5 I0315 22:00:32.135112 6 log.go:172] (0xc001670460) (5) Data frame handling I0315 22:00:32.135145 6 log.go:172] (0xc001176370) Data frame received for 3 I0315 22:00:32.135159 6 log.go:172] (0xc001670320) (3) Data frame handling I0315 22:00:32.135174 6 log.go:172] (0xc001670320) (3) Data frame sent I0315 22:00:32.135189 6 log.go:172] (0xc001176370) Data frame received for 3 I0315 22:00:32.135200 6 log.go:172] (0xc001670320) (3) Data frame handling I0315 22:00:32.136750 6 log.go:172] (0xc001176370) Data frame received for 1 I0315 22:00:32.136774 6 log.go:172] (0xc001f4a1e0) (1) Data frame handling I0315 22:00:32.136782 6 log.go:172] (0xc001f4a1e0) (1) Data frame sent I0315 22:00:32.136792 6 log.go:172] (0xc001176370) (0xc001f4a1e0) Stream removed, broadcasting: 1 I0315 22:00:32.136828 6 log.go:172] (0xc001176370) Go away received I0315 22:00:32.136908 6 log.go:172] (0xc001176370) (0xc001f4a1e0) Stream removed, broadcasting: 1 I0315 
22:00:32.136935 6 log.go:172] (0xc001176370) (0xc001670320) Stream removed, broadcasting: 3 I0315 22:00:32.136952 6 log.go:172] (0xc001176370) (0xc001670460) Stream removed, broadcasting: 5 Mar 15 22:00:32.136: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 22:00:32.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-zxm6d" for this suite. Mar 15 22:01:12.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 22:01:12.253: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-zxm6d, resource: bindings, ignored listing per whitelist Mar 15 22:01:12.264: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-zxm6d deletion completed in 40.115806149s • [SLOW TEST:59.729 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 22:01:12.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 15 22:01:12.390: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7bdf0e84-6708-11ea-9ccf-0242ac110012" in namespace "e2e-tests-downward-api-knk5j" to be "success or failure" Mar 15 22:01:12.394: INFO: Pod "downwardapi-volume-7bdf0e84-6708-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 4.214097ms Mar 15 22:01:14.398: INFO: Pod "downwardapi-volume-7bdf0e84-6708-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007650717s Mar 15 22:01:16.401: INFO: Pod "downwardapi-volume-7bdf0e84-6708-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010912317s STEP: Saw pod success Mar 15 22:01:16.401: INFO: Pod "downwardapi-volume-7bdf0e84-6708-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 22:01:16.403: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-7bdf0e84-6708-11ea-9ccf-0242ac110012 container client-container: STEP: delete the pod Mar 15 22:01:16.420: INFO: Waiting for pod downwardapi-volume-7bdf0e84-6708-11ea-9ccf-0242ac110012 to disappear Mar 15 22:01:16.424: INFO: Pod downwardapi-volume-7bdf0e84-6708-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 22:01:16.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-knk5j" for this suite. Mar 15 22:01:22.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 22:01:22.633: INFO: namespace: e2e-tests-downward-api-knk5j, resource: bindings, ignored listing per whitelist Mar 15 22:01:22.705: INFO: namespace e2e-tests-downward-api-knk5j deletion completed in 6.277307281s • [SLOW TEST:10.441 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 22:01:22.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-8242be1c-6708-11ea-9ccf-0242ac110012 STEP: Creating a pod to test consume secrets Mar 15 22:01:23.289: INFO: Waiting up to 5m0s for pod "pod-secrets-825e2156-6708-11ea-9ccf-0242ac110012" in namespace "e2e-tests-secrets-69jjl" to be "success or failure" Mar 15 22:01:23.311: INFO: Pod "pod-secrets-825e2156-6708-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 22.243251ms Mar 15 22:01:25.314: INFO: Pod "pod-secrets-825e2156-6708-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025198024s Mar 15 22:01:27.318: INFO: Pod "pod-secrets-825e2156-6708-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02899558s Mar 15 22:01:29.322: INFO: Pod "pod-secrets-825e2156-6708-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.032878924s STEP: Saw pod success Mar 15 22:01:29.322: INFO: Pod "pod-secrets-825e2156-6708-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 22:01:29.324: INFO: Trying to get logs from node hunter-worker pod pod-secrets-825e2156-6708-11ea-9ccf-0242ac110012 container secret-volume-test: STEP: delete the pod Mar 15 22:01:29.390: INFO: Waiting for pod pod-secrets-825e2156-6708-11ea-9ccf-0242ac110012 to disappear Mar 15 22:01:29.393: INFO: Pod pod-secrets-825e2156-6708-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 22:01:29.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-69jjl" for this suite. Mar 15 22:01:37.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 22:01:38.093: INFO: namespace: e2e-tests-secrets-69jjl, resource: bindings, ignored listing per whitelist Mar 15 22:01:38.150: INFO: namespace e2e-tests-secrets-69jjl deletion completed in 8.754113753s • [SLOW TEST:15.444 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 22:01:38.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 15 22:01:38.296: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8b4e48ca-6708-11ea-9ccf-0242ac110012" in namespace "e2e-tests-projected-d9c9q" to be "success or failure" Mar 15 22:01:38.334: INFO: Pod "downwardapi-volume-8b4e48ca-6708-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 37.40836ms Mar 15 22:01:40.337: INFO: Pod "downwardapi-volume-8b4e48ca-6708-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040462373s Mar 15 22:01:43.199: INFO: Pod "downwardapi-volume-8b4e48ca-6708-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.902188139s STEP: Saw pod success Mar 15 22:01:43.199: INFO: Pod "downwardapi-volume-8b4e48ca-6708-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 22:01:43.415: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-8b4e48ca-6708-11ea-9ccf-0242ac110012 container client-container: STEP: delete the pod Mar 15 22:01:43.487: INFO: Waiting for pod downwardapi-volume-8b4e48ca-6708-11ea-9ccf-0242ac110012 to disappear Mar 15 22:01:43.509: INFO: Pod downwardapi-volume-8b4e48ca-6708-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 22:01:43.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-d9c9q" for this suite. Mar 15 22:01:49.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 22:01:49.660: INFO: namespace: e2e-tests-projected-d9c9q, resource: bindings, ignored listing per whitelist Mar 15 22:01:49.709: INFO: namespace e2e-tests-projected-d9c9q deletion completed in 6.196332292s • [SLOW TEST:11.559 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 22:01:49.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 22:01:49.943: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"923817fb-6708-11ea-99e8-0242ac110002", Controller:(*bool)(0xc000cf4706), BlockOwnerDeletion:(*bool)(0xc000cf4707)}} Mar 15 22:01:49.958: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"922f6ee1-6708-11ea-99e8-0242ac110002", Controller:(*bool)(0xc0020824a2), BlockOwnerDeletion:(*bool)(0xc0020824a3)}} Mar 15 22:01:50.013: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"922fe7e3-6708-11ea-99e8-0242ac110002", Controller:(*bool)(0xc000cf497e), BlockOwnerDeletion:(*bool)(0xc000cf497f)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 22:01:55.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-t4c27" for this suite. 
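Annotation (not part of the test output): the garbage-collector spec above prints three pods whose ownerReferences form a cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2) and checks that deletion is not blocked. The Go sketch below only illustrates how such a cycle is expressed with metav1.OwnerReference; it is not the e2e test's own code, and the UID is a placeholder (the real UIDs appear in the log lines above).

// Illustrative sketch of a circular ownership graph like the one logged above.
package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

// ownedBy builds the single OwnerReference each pod carries in this scenario.
func ownedBy(owner string) []metav1.OwnerReference {
	return []metav1.OwnerReference{{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               owner,
		UID:                "00000000-0000-0000-0000-000000000000", // placeholder UID
		Controller:         boolPtr(false),
		BlockOwnerDeletion: boolPtr(false),
	}}
}

func main() {
	// pod1 <- pod3, pod2 <- pod1, pod3 <- pod2: a dependency circle the GC must tolerate.
	cycle := map[string][]metav1.OwnerReference{
		"pod1": ownedBy("pod3"),
		"pod2": ownedBy("pod1"),
		"pod3": ownedBy("pod2"),
	}
	out, _ := json.MarshalIndent(cycle, "", "  ")
	fmt.Println(string(out))
}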
Mar 15 22:02:01.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 22:02:01.458: INFO: namespace: e2e-tests-gc-t4c27, resource: bindings, ignored listing per whitelist Mar 15 22:02:01.474: INFO: namespace e2e-tests-gc-t4c27 deletion completed in 6.31942891s • [SLOW TEST:11.764 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 22:02:01.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 15 22:02:01.585: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9931bb1a-6708-11ea-9ccf-0242ac110012" in namespace "e2e-tests-projected-xs8c4" to be "success or failure" Mar 15 22:02:01.594: INFO: Pod "downwardapi-volume-9931bb1a-6708-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 8.667193ms Mar 15 22:02:03.655: INFO: Pod "downwardapi-volume-9931bb1a-6708-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069339745s Mar 15 22:02:05.660: INFO: Pod "downwardapi-volume-9931bb1a-6708-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074859113s STEP: Saw pod success Mar 15 22:02:05.660: INFO: Pod "downwardapi-volume-9931bb1a-6708-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 22:02:05.662: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-9931bb1a-6708-11ea-9ccf-0242ac110012 container client-container: STEP: delete the pod Mar 15 22:02:05.695: INFO: Waiting for pod downwardapi-volume-9931bb1a-6708-11ea-9ccf-0242ac110012 to disappear Mar 15 22:02:05.720: INFO: Pod downwardapi-volume-9931bb1a-6708-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 22:02:05.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xs8c4" for this suite. 
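Annotation (not part of the test output): the "should provide container's memory request" spec above creates a pod whose projected downwardAPI volume exposes the container's memory request as a file and then reads it back from the pod log. The sketch below shows that volume shape; it is not the exact manifest the suite generates, and the image, file paths and request size are assumptions chosen for the example.

// Illustrative sketch: projected downwardAPI volume exposing requests.memory.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	spec := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:    "client-container",
			Image:   "busybox", // assumption; the suite uses its own test image
			Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceMemory: resource.MustParse("32Mi"),
				},
			},
			VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
		}},
		Volumes: []corev1.Volume{{
			Name: "podinfo",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						DownwardAPI: &corev1.DownwardAPIProjection{
							Items: []corev1.DownwardAPIVolumeFile{{
								Path: "memory_request",
								ResourceFieldRef: &corev1.ResourceFieldSelector{
									ContainerName: "client-container",
									Resource:      "requests.memory",
									Divisor:       resource.MustParse("1Mi"), // report in MiB
								},
							}},
						},
					}},
				},
			},
		}},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}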
Mar 15 22:02:13.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 22:02:13.878: INFO: namespace: e2e-tests-projected-xs8c4, resource: bindings, ignored listing per whitelist Mar 15 22:02:13.916: INFO: namespace e2e-tests-projected-xs8c4 deletion completed in 8.192930226s • [SLOW TEST:12.442 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 22:02:13.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-jdwrk Mar 15 22:02:20.082: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-jdwrk STEP: checking the pod's current state and verifying that restartCount is present Mar 15 22:02:20.085: INFO: Initial restart count of pod liveness-exec is 0 Mar 15 22:03:10.312: INFO: Restart count of pod e2e-tests-container-probe-jdwrk/liveness-exec is now 1 (50.226700547s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 22:03:10.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-jdwrk" for this suite. 
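Annotation (not part of the test output): the probing spec above watches the liveness-exec pod until its restart count goes from 0 to 1, which happens because an exec liveness probe ("cat /tmp/health") starts failing once the container deletes the file. The sketch below shows that probe shape; the image, timings and shell command are assumptions, not the test's literal values.

// Illustrative sketch: exec liveness probe that eventually fails and triggers a restart.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	probe := &corev1.Probe{
		InitialDelaySeconds: 15,
		PeriodSeconds:       5,
		FailureThreshold:    1,
	}
	// Set via the promoted field so the sketch compiles against both older
	// (Handler) and newer (ProbeHandler) k8s.io/api layouts.
	probe.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}}

	c := corev1.Container{
		Name:          "liveness",
		Image:         "busybox", // assumption
		Command:       []string{"sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
		LivenessProbe: probe,
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}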
Mar 15 22:03:16.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 22:03:16.373: INFO: namespace: e2e-tests-container-probe-jdwrk, resource: bindings, ignored listing per whitelist Mar 15 22:03:16.426: INFO: namespace e2e-tests-container-probe-jdwrk deletion completed in 6.095010575s • [SLOW TEST:62.509 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 22:03:16.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Mar 15 22:03:16.531: INFO: Waiting up to 5m0s for pod "downward-api-c5dc8b5d-6708-11ea-9ccf-0242ac110012" in namespace "e2e-tests-downward-api-hcd7q" to be "success or failure" Mar 15 22:03:16.535: INFO: Pod "downward-api-c5dc8b5d-6708-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030224ms Mar 15 22:03:18.542: INFO: Pod "downward-api-c5dc8b5d-6708-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011114774s Mar 15 22:03:20.566: INFO: Pod "downward-api-c5dc8b5d-6708-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035061082s STEP: Saw pod success Mar 15 22:03:20.566: INFO: Pod "downward-api-c5dc8b5d-6708-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 22:03:20.569: INFO: Trying to get logs from node hunter-worker pod downward-api-c5dc8b5d-6708-11ea-9ccf-0242ac110012 container dapi-container: STEP: delete the pod Mar 15 22:03:20.591: INFO: Waiting for pod downward-api-c5dc8b5d-6708-11ea-9ccf-0242ac110012 to disappear Mar 15 22:03:20.596: INFO: Pod downward-api-c5dc8b5d-6708-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 22:03:20.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-hcd7q" for this suite. 
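Annotation (not part of the test output): the Downward API spec above verifies that pod name, namespace and pod IP can be injected as environment variables via fieldRef. The sketch below shows those three env vars; the POD_* names follow common convention and are assumptions, since the test's own variable names are not shown in the log.

// Illustrative sketch: downward-API environment variables via fieldRef.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// fieldRef wraps a fieldPath in the EnvVarSource the pod spec expects.
func fieldRef(path string) *corev1.EnvVarSource {
	return &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{FieldPath: path}}
}

func main() {
	env := []corev1.EnvVar{
		{Name: "POD_NAME", ValueFrom: fieldRef("metadata.name")},
		{Name: "POD_NAMESPACE", ValueFrom: fieldRef("metadata.namespace")},
		{Name: "POD_IP", ValueFrom: fieldRef("status.podIP")},
	}
	out, _ := json.MarshalIndent(env, "", "  ")
	fmt.Println(string(out))
}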
Mar 15 22:03:28.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 22:03:28.682: INFO: namespace: e2e-tests-downward-api-hcd7q, resource: bindings, ignored listing per whitelist Mar 15 22:03:28.871: INFO: namespace e2e-tests-downward-api-hcd7q deletion completed in 8.271675207s • [SLOW TEST:12.445 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 22:03:28.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 22:03:36.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-l52mr" for this suite. 
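Annotation (not part of the test output): the Kubelet spec above schedules a read-only busybox container and asserts it cannot write to its root filesystem. The sketch below shows the securityContext flag that enforces this; the image and command are assumptions for the example.

// Illustrative sketch: container with a read-only root filesystem.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	readOnly := true
	c := corev1.Container{
		Name:    "busybox-readonly",
		Image:   "busybox", // assumption
		Command: []string{"sh", "-c", "touch /should-fail && echo writable || echo read-only"},
		SecurityContext: &corev1.SecurityContext{
			ReadOnlyRootFilesystem: &readOnly, // writes to / are rejected by the kubelet/runtime
		},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}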
Mar 15 22:04:16.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 22:04:16.356: INFO: namespace: e2e-tests-kubelet-test-l52mr, resource: bindings, ignored listing per whitelist Mar 15 22:04:16.421: INFO: namespace e2e-tests-kubelet-test-l52mr deletion completed in 40.127886398s • [SLOW TEST:47.550 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 22:04:16.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 22:04:16.491: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 22:04:20.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-sw57s" for this suite. 
Mar 15 22:04:58.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 22:04:58.597: INFO: namespace: e2e-tests-pods-sw57s, resource: bindings, ignored listing per whitelist Mar 15 22:04:58.646: INFO: namespace e2e-tests-pods-sw57s deletion completed in 38.082019318s • [SLOW TEST:42.224 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 22:04:58.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 15 22:04:58.864: INFO: Waiting up to 5m0s for pod "pod-02d788fe-6709-11ea-9ccf-0242ac110012" in namespace "e2e-tests-emptydir-4kghr" to be "success or failure" Mar 15 22:04:58.886: INFO: Pod "pod-02d788fe-6709-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 22.655415ms Mar 15 22:05:00.890: INFO: Pod "pod-02d788fe-6709-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026572442s Mar 15 22:05:02.894: INFO: Pod "pod-02d788fe-6709-11ea-9ccf-0242ac110012": Phase="Running", Reason="", readiness=true. Elapsed: 4.030588343s Mar 15 22:05:04.898: INFO: Pod "pod-02d788fe-6709-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034738735s STEP: Saw pod success Mar 15 22:05:04.898: INFO: Pod "pod-02d788fe-6709-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 22:05:04.902: INFO: Trying to get logs from node hunter-worker pod pod-02d788fe-6709-11ea-9ccf-0242ac110012 container test-container: STEP: delete the pod Mar 15 22:05:04.925: INFO: Waiting for pod pod-02d788fe-6709-11ea-9ccf-0242ac110012 to disappear Mar 15 22:05:04.939: INFO: Pod pod-02d788fe-6709-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 22:05:04.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-4kghr" for this suite. 
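Annotation (not part of the test output): the EmptyDir spec above, "(root,0666,default)", runs as root, writes a file with mode 0666 into an emptyDir volume on the default (disk-backed) medium, and verifies the mode and contents. The sketch below shows that volume and a stand-in write-and-stat command; the busybox command is an assumption replacing the suite's mount-test image.

// Illustrative sketch: emptyDir volume (default medium) exercised with a 0666 file.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:  "test-container",
			Image: "busybox", // assumption
			Command: []string{"sh", "-c",
				"echo mount-tester > /test-volume/file && chmod 0666 /test-volume/file && stat -c '%a' /test-volume/file"},
			VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
		}},
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			VolumeSource: corev1.VolumeSource{
				EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
			},
		}},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}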
Mar 15 22:05:10.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 22:05:10.967: INFO: namespace: e2e-tests-emptydir-4kghr, resource: bindings, ignored listing per whitelist Mar 15 22:05:11.039: INFO: namespace e2e-tests-emptydir-4kghr deletion completed in 6.096512305s • [SLOW TEST:12.393 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 22:05:11.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod Mar 15 22:05:11.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-n4rsb' Mar 15 22:05:17.106: INFO: stderr: "" Mar 15 22:05:17.106: INFO: stdout: "pod/pause created\n" Mar 15 22:05:17.106: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 15 22:05:17.106: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-n4rsb" to be "running and ready" Mar 15 22:05:17.117: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 11.272203ms Mar 15 22:05:19.129: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02273611s Mar 15 22:05:21.133: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.027399017s Mar 15 22:05:21.133: INFO: Pod "pause" satisfied condition "running and ready" Mar 15 22:05:21.133: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod Mar 15 22:05:21.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-n4rsb' Mar 15 22:05:21.235: INFO: stderr: "" Mar 15 22:05:21.235: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 15 22:05:21.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-n4rsb' Mar 15 22:05:21.322: INFO: stderr: "" Mar 15 22:05:21.322: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 15 22:05:21.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-n4rsb' Mar 15 22:05:21.417: INFO: stderr: "" Mar 15 22:05:21.417: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 15 22:05:21.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-n4rsb' Mar 15 22:05:21.585: INFO: stderr: "" Mar 15 22:05:21.585: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources Mar 15 22:05:21.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-n4rsb' Mar 15 22:05:21.931: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 15 22:05:21.931: INFO: stdout: "pod \"pause\" force deleted\n" Mar 15 22:05:21.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-n4rsb' Mar 15 22:05:22.895: INFO: stderr: "No resources found.\n" Mar 15 22:05:22.895: INFO: stdout: "" Mar 15 22:05:22.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-n4rsb -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 15 22:05:23.189: INFO: stderr: "" Mar 15 22:05:23.189: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 22:05:23.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-n4rsb" for this suite. 
Mar 15 22:05:33.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 22:05:33.607: INFO: namespace: e2e-tests-kubectl-n4rsb, resource: bindings, ignored listing per whitelist Mar 15 22:05:33.626: INFO: namespace e2e-tests-kubectl-n4rsb deletion completed in 10.35318817s • [SLOW TEST:22.586 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 22:05:33.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Mar 15 22:05:33.876: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 22:05:46.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-ltl4d" for this suite. 
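Annotation (not part of the test output): the InitContainer spec above expects that when an init container fails on a pod with restartPolicy Never, the app containers never start and the pod ends up Failed. The sketch below shows that shape; the container names, image and commands are assumptions.

// Illustrative sketch: failing init container on a RestartNever pod.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		InitContainers: []corev1.Container{
			{Name: "init1", Image: "busybox", Command: []string{"sh", "-c", "exit 1"}}, // fails
			{Name: "init2", Image: "busybox", Command: []string{"sh", "-c", "exit 0"}}, // never reached
		},
		Containers: []corev1.Container{
			{Name: "run1", Image: "busybox", Command: []string{"sh", "-c", "sleep 600"}}, // never started
		},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}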
Mar 15 22:05:56.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 22:05:56.642: INFO: namespace: e2e-tests-init-container-ltl4d, resource: bindings, ignored listing per whitelist Mar 15 22:05:56.672: INFO: namespace e2e-tests-init-container-ltl4d deletion completed in 10.539415199s • [SLOW TEST:23.046 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 22:05:56.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Mar 15 22:05:57.106: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 22:06:09.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-vj4lj" for this suite. 
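Annotation (not part of the test output): the companion InitContainer spec above checks the RestartAlways case, where both init containers run to completion in order before the app container starts. A minimal sketch, with names, image and commands again chosen as assumptions:

// Illustrative sketch: ordered init containers on a RestartAlways pod.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyAlways,
		InitContainers: []corev1.Container{
			{Name: "init1", Image: "busybox", Command: []string{"sh", "-c", "exit 0"}},
			{Name: "init2", Image: "busybox", Command: []string{"sh", "-c", "exit 0"}},
		},
		Containers: []corev1.Container{
			{Name: "run1", Image: "busybox", Command: []string{"sh", "-c", "sleep 600"}},
		},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}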
Mar 15 22:06:33.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 22:06:33.527: INFO: namespace: e2e-tests-init-container-vj4lj, resource: bindings, ignored listing per whitelist Mar 15 22:06:33.530: INFO: namespace e2e-tests-init-container-vj4lj deletion completed in 24.077251346s • [SLOW TEST:36.857 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 22:06:33.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 15 22:06:33.738: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3b656659-6709-11ea-9ccf-0242ac110012" in namespace "e2e-tests-projected-nfhbt" to be "success or failure" Mar 15 22:06:33.765: INFO: Pod "downwardapi-volume-3b656659-6709-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 27.614163ms Mar 15 22:06:35.769: INFO: Pod "downwardapi-volume-3b656659-6709-11ea-9ccf-0242ac110012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031363614s Mar 15 22:06:37.773: INFO: Pod "downwardapi-volume-3b656659-6709-11ea-9ccf-0242ac110012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035512687s STEP: Saw pod success Mar 15 22:06:37.773: INFO: Pod "downwardapi-volume-3b656659-6709-11ea-9ccf-0242ac110012" satisfied condition "success or failure" Mar 15 22:06:37.776: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-3b656659-6709-11ea-9ccf-0242ac110012 container client-container: STEP: delete the pod Mar 15 22:06:37.873: INFO: Waiting for pod downwardapi-volume-3b656659-6709-11ea-9ccf-0242ac110012 to disappear Mar 15 22:06:37.982: INFO: Pod downwardapi-volume-3b656659-6709-11ea-9ccf-0242ac110012 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 22:06:37.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-nfhbt" for this suite. 
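Annotation (not part of the test output): the last spec above asks the downward API for limits.cpu on a container that sets no CPU limit, so the value written to the file falls back to the node's allocatable CPU, scaled by the divisor. The sketch below shows just that volume item; the path, container name and divisor are assumptions.

// Illustrative sketch: downwardAPI file for limits.cpu with no limit set on the container.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	item := corev1.DownwardAPIVolumeFile{
		Path: "cpu_limit",
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: "client-container", // assumption
			Resource:      "limits.cpu",
			Divisor:       resource.MustParse("1m"), // report in millicores
		},
	}
	out, _ := json.MarshalIndent(item, "", "  ")
	fmt.Println(string(out))
}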
Mar 15 22:06:44.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 22:06:44.052: INFO: namespace: e2e-tests-projected-nfhbt, resource: bindings, ignored listing per whitelist Mar 15 22:06:44.079: INFO: namespace e2e-tests-projected-nfhbt deletion completed in 6.094372961s • [SLOW TEST:10.550 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SMar 15 22:06:44.079: INFO: Running AfterSuite actions on all nodes Mar 15 22:06:44.079: INFO: Running AfterSuite actions on node 1 Mar 15 22:06:44.079: INFO: Skipping dumping logs from cluster Ran 200 of 2164 Specs in 6605.824 seconds SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped PASS